Craft or Science? Software Engineering Evolution

(c) Jon Collins, Alcatel, 1994


The last thirty years of the technological revolution have seen the development of a number of high-technology industries. The computer industry saw an explosion in its software needs as long as 25 years ago, and has been evolving accordingly ever since. Other hardware industries such as telecommunications, which have felt the effects of this “revolution” more recently, are learning the same lessons: experiencing the evolution of software development from a “craft” (a creative act based partially on rules of thumb and hypothesis) to a true branch of engineering using scientific and mathematical principles. This article presents how far software development has come, explaining how, although software engineering is not yet a truly “engineered” process, it is certainly en route to becoming so. Finally it gives examples of how other engineering disciplines can help software engineering along its evolutionary path.


The concept of software has been around for as long as there have been procedures to automate. We could go back to Herman Hollerith’s punched-card census system, or Charles Babbage’s difference engines. Lists of instructions, or “programs”, in the form of punched cards could be said to date right back to Jacquard’s looms; it was for Babbage’s planned Analytical Engine that Ada Lovelace (daughter of Lord Byron, who gave her name to the Ada programming language) wrote the first such programs. Software drives hardware: software programs are there to enable the hardware to run.

The twentieth century is seeing a technological revolution without precedent. Cars, telephones, radio, computers and satellites have been invented with barely a pause for breath. And there are no signs of anything slowing down: the infinitely small becomes smaller, and what was impossible to do yesterday has become impossible to do without. One basic need has not changed: to get the most out of all this modern machinery, procedures must be written and programmed. The technological revolution has not dismissed the need for software. However, the increase in hardware complexity has obliged the development of ever more elaborate software packages.

More recently a specific type of machine – the computer – has evolved with the primary purpose of running software programs. Software programs started to exist in their own right, independent of the hardware: companies were started that produced only software, and a new industry was born. Specialised techniques have been, and continue to be, defined for software development. To ensure that necessary consideration is given to these mechanisms, a new discipline called Software Engineering is being developed which provides a firm framework based on engineering principles.

Repeatedly, pioneers of the hardware revolution are finding that software plays an ever more important role in their products. The software no longer comes second; it becomes as important as the hardware on which it runs. Software packages cease to be firmware instructions blown onto EPROMs, becoming complex, workstation-based graphic systems involving many person-years of development effort. In short, the same software engineering principles used by the software industry are becoming necessary for the hardware manufacturers as well.

As hardware companies start down the path of large-scale software production they risk making mistakes which have already been made once over. The still relatively immature, “trial and error” state of software technology adds to this danger. As it evolves towards a true engineering discipline, the understanding of fundamental software concepts improves, the approach to development becomes more disciplined and the techniques employed become better defined and used. But it is not there yet.

Perhaps the established software companies are further along this evolutionary path. But by noting the advances they have already made, and by avoiding their mistakes, hardware manufacturers may join in without losing time.


As software products have grown in complexity, so have the techniques and procedures used to develop them. The term ‘Software Engineering’ was coined to suggest an approach with the necessary discipline to control the many activities involved. But is the software we produce really the result of an engineered process? Can we justify this use of the term at all, or is it wishful thinking? Maybe this question can be answered by looking at the definition of “engineering” itself.

Engineering has been defined as “the application of science and mathematics to the design and construction of artefacts which are useful to humanity”.

In other words, engineering provides a means to create something useful based on sound scientific principles. Can we apply this definition to software production? It implies:

1. The creation of software which fulfils its requirements of:

  • suitability of purpose, or how appropriately the software matches the needs agreed between user and analyst
  • evolvability and maintainability, the abilities of the package(s) to stay up to date without implying an ever-increasing support overhead
  • timeliness, particularly as a late product can often mean an obsolete one
  • elegance of solution: the product should be efficient and pleasant to use.

2. A development process which:

  • is based on proven scientific and mathematical principles
  • uses formally defined and commonly understood techniques
  • enables the mathematical proof of the output of any stage.
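The third point, proof of outputs, can at least be hinted at in code. The sketch below is an invented example, not drawn from any of the cited authors: a routine states its precondition and postcondition as executable assertions, the informal cousin of a mathematical proof of its output.

```python
def integer_sqrt(n):
    """Return the largest r such that r*r <= n, for n >= 0."""
    assert n >= 0, "precondition: input must be non-negative"
    r = 0
    # Loop invariant: r*r <= n holds before and after every iteration.
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is exactly the integer square root of n.
    assert r * r <= n < (r + 1) * (r + 1)
    return r

print(integer_sqrt(10))  # -> 3
```

The assertions do not replace a proof, but they make the claimed property of the stage’s output explicit and mechanically checkable.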

If the above were true we could confidently use the term ‘Software Engineering’. This looks feasible, but we know it is not true. Virtually every available so-called software engineering textbook opens by gleefully listing examples of failed large-scale software projects. And the so-called “Software Crisis” is not over yet: as the first chapter of DeMarco and Lister’s book “Peopleware” [*] cheerfully reminds us, “somewhere, at this very minute, a project is in the process of failing”…

So what is going wrong? According to “Peopleware”, it is due to not treating developers with respect. Brown’s “Software Engineering Environments” [*] implies that a fully integrated tool set will solve many of the problems. Conger [*] explains that it is the methodology that must first be properly defined and then followed. The answer is of course that they are all right, at least partially. There are a number of principles and techniques available for software development, dealing with human, environment and quality aspects. It is not these principles in themselves which are at fault; more likely it is their use which is lacking. As Baber explains [*], “By its inherent nature, software development is an engineering discipline. It is not currently practised as such.”

So why are these techniques so difficult to apply? The reasons may be linked to the newness of the whole domain of software development.


There are several good reasons why we fail to apply software engineering principles correctly. Brown lists the problems as follows:

  • Complexity, both of the product and the development processes
  • Difficulty of establishing requirements
  • (Supposed) changeability of software
  • Invisibility
  • Development of a theory for the problem domain

These issues are detailed below.


Complexity

Application requirements. The complex nature of software applications stems from the fact that what we attempt to do with software is far more difficult than what we attempted to do without it. In general the most desired products are those which enable things which would not otherwise have been possible.

Development activities. Software development involves the interaction of a number of design and implementation activities, each of which will look at the application in a different way. For example a system decomposition may be done according to architecture, functions or processes. The design must guarantee coherence between the different types of decomposition: this requirement becomes more and more difficult as the application increases in size.

Product interfacing. The added requirement that two applications work together indicates further complexity. This is a special problem for hardware manufacturers who not only have to ensure correct communication between different software modules, but also between software and hardware devices.

Organisational complexity. The co-ordinated development of software and hardware increases the co-ordination required between different development groups or even separate companies, each with their own understanding of the application field.

Difficulty of establishing requirements

To develop a package, the supplier must first identify the needs of his customers. This is very difficult, as the analyst is unlikely to understand fully either the client’s domain or the way the client talks about it. To make matters worse, there is a reasonable chance that the client does not really know exactly what he wants.

Changeability of Software

We have all made the mistake (at least once) of thinking that software is easy to modify. The apparent simplicity of changing software, adding functions, making that ‘quick fix’ is undeniable. Yet we all know that it is fatal to think of software in this way. We still do.

A spin-off problem is that, at the beginning of a software project certain attributes (for example, ‘performance’) may be left out with the intention of adding them later. Similarly the client often gives less consideration than necessary to the requirements definition, thinking that he can request a modification later.


Invisibility

The intangibility of software is linked to the immaturity of the field, and our own lack of experience. This makes it hard to comprehend the steps to take. We still think of programming when we think of software, when in reality programming makes up no more than one third of the total development effort.

This intangibility can cause one program to look much like another. The risk is, for example, that the prototype of an application is used as a real product. The differences between a thrown-together prototype and an engineered software package become evident when modifications are necessary later.

Development of a theory for the problem domain

Computer systems are often required to model the situations in which they are used. A failure to analyse correctly the domain of a set of requirements will result in fundamental flaws in the implementation.


Each of the above problems may be linked either to the immaturity of the field, or the lack of understanding of the problem domains to which software is applied. And these problems have no easy solutions, linked as they are to our ongoing accumulation of knowledge. Furthermore, as we enter new problem domains requiring software, for example network management, we find untreated problems which require new methods of analysis. The evolutionary process continues.

Given these difficulties, it is no surprise that mistakes are made. Common errors are:

Reinvention. To start from scratch, not profiting from the proven work of the past, is a frequent error. It is hard to know what has been done already and what is new, especially as different problems present themselves in new ways. This is highlighted when experts leave a given company and their work must be repeated. The mistake of reinvention is often made purposely, as individuals do not trust or understand the progress of others: at a company level this is referred to as the NIH (“Not Invented Here”) principle.

Misjudgement. This is the art of applying the wrong solution to a problem, either through misunderstanding or lack of competence. For example, to use a database when a flat file would do, or to apply object-oriented techniques to a particularly non-object-oriented problem. Remember the adage: a real programmer can write FORTRAN in any language.

To prevent this kind of blunder, a more formal approach must be applied to the whole of the development process. In this way the decisions of each stage can be verified and proved. Fortunately for software developers, a number of definitions and standards exist to ensure this, such as the IEEE standard for software development processes: this is described in detail below.


It is not an easy task to develop software. The newcomer to the software industry is faced with a plethora of different life cycles, complex methodologies, overlapping methods and incomplete design techniques. Added to this, a large variety of design and development tools, with similar overlaps and discrepancies, serve to confuse the issue. Further confusion arises from the tool set providers’ sales claims. Maybe it is possible to follow the SSADM methodology using Oracle’s CASE tool set, but that certainly wasn’t the original intention of the tool.

In this environment of contradiction and overlap it is easy to lose sight of the fundamental elements of development which apply whatever methodology, design technique or tool is in use. Whichever way the activities of a software project are performed, it remains mandatory that each one exists and produces the expected output.

This may be considered as the lowest level of the software development process. It may be built on as follows:

Level 0: The mandatory activities and objectives of each part of the project are fixed.

Level 1: These activities are organised into processes and grouped as the phases of a software life cycle appropriate to the product in hand.

Level 2: The expected outputs of each activity, process or phase are formalised by the definition of a methodology.

Level 3: To aid the production of such outputs, and to ensure their coherence, the use of proven design methods is defined.

Level 4: To accelerate and co-ordinate the production processes, software design, development and support tools are used, possibly integrated to form an environment. As complexity grows such tools become essential.

It is clearly essential to fix the lower level definitions before proceeding higher. It would be dangerous to fix a tool set before the methodology is defined, especially as tools often impose restrictions on the methodology or even the life cycle.

As Sue Conger [*] notes, “No one tool is ideal or complete. The software engineer knows how to select the tools, understanding their strengths and weaknesses. Most of all, a software engineer is not limited to a single tool he or she tries to force-fit to all situations”.

Problem solving must be handled at the lowest possible level. Incoherence in a lower level may result in flaws higher up, if it is not treated. The cause of a given problem should be checked for at the lowest level rather than ‘curing’ higher level symptoms of problems occurring lower down.

Level 0 activities are mandatory for the development and maintenance of all software packages without exception, small or large. A standard from the IEEE Computer Society, “IEEE Standard for Developing Software Life Cycle Processes” [*], defines the minimal set of these activities: a total of 65 mandatory activities are specified, organised into 17 processes. The processes themselves are divided into 7 different process types.

The standard imposes only the tasks to be performed. Each activity works on provided inputs (which must be available before the activity can be started) to generate output information. The form of this information is not imposed, nor is the method used to obtain it; the actual ordering of activities into life cycle phases, and the documents to be produced, are left to the project in its Level 1 and 2 process phases.

Maybe, further down the evolutionary path we will see standards defined for the higher levels of the software development process. For the moment we must content ourselves with Level 0. If all software projects conformed to this lowest level standard, we would already be a lot further towards the achievement of the engineering discipline we so need.


By examining the essential attributes of other engineering disciplines, we can both discover how much further software engineering has to go, and benefit from their experience. So what are the required attributes of a branch of engineering? Baber’s [*] essential aspects of any engineering discipline are summarised below, together with their relevance to software development.

1. The existence of a substantial body of directly relevant scientific, theoretical knowledge

The theory of the development of software has been studied for almost fifty years now. Significant progress has been made but some necessary areas are seriously lacking. For example, in the field of metrics and reliability measurement, a report presented by IBM La Gaude in 1991 [*] explained that “software reliability is not a field applied [by La Gaude], which considers that the theory has not yet reached a sufficient level to produce anything of substance”.

In order that research be continued and expanded upon, knowledge should come from a wide variety of related (but different) areas. This is not a problem for software engineering science, which overlaps with a number of disparate fields. As Baber notes, “many important unanswered questions remain,” ensuring that research will continue.

2. The thorough learning of such knowledge by practitioners and their management.

It seems too obvious to say that software engineers should be competent in what they do. Other engineering disciplines require many years of study before acceptance. However, given the difficulties of complexity and intangibility, this is often not the case for software. Even once employed, training (one of the IEEE’s mandatory activities) can often get missed out. The La Gaude report explains its inability to use project cost evaluation models such as COCOMO by saying that “the principal factor which upsets the model is the high level of variation in the quality of the programmers”.

At the management level, a lack of technical knowledge makes recruitment and task allocation difficult.

3. The regular, systematic application of such knowledge to their work.

It is not enough to understand only theory: software researchers are frequently chastised for their lack of practicality. However, the work of practitioners with only a limited theoretical base is unlikely to be progressive, and will probably result in wasted time as theory is reinvented on the job.

4. The individual responsibility of engineers and their management to clients and public for correct, safe designs

Each person working on a project has individual responsibility for the reliability of his work.

As each phase of the software life cycle is normally worked on by a different set of people, this aspect implies that a group can present their outputs confident that they have been verified both systematically and analytically against their requirements.

5. The existence of professional associations having sufficient knowledge and experience prerequisites for membership.

Professional bodies exist to enable the sharing of knowledge between professionals. They also provide a commonly agreed measurement for the competence of their members.

What Benefits?

The benefits of an engineering approach are noted as:

  • a reduction of errors (to the point where software failures become a noteworthy event)
  • an increase in efficiency, as unnecessary overheads are minimised or eliminated.


The traditional craftsman uses handed-down principles and rules of thumb as the basis of his work: experience gained over many years of apprenticeship. The engineer proves his designs at every stage with scientific laws and mathematics. Craft and engineering differ in their techniques, but their goal remains the same: to provide efficient, useful artefacts.

In “Computer Programming as an Art” [*], Knuth summarises this by saying “computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty”. The knowledge base that we have built up over thirty years is agreed to be incomplete. Reliance is often placed on software craftsmen, commonly known as ‘gurus’ or ‘hackers’ (in the programming world, a hacker is seen as a programmer of high esteem, whereas it is the cracker who breaks into computer systems); the reputation of some gives them almost hero status. Baber [*] suggests that when software development progresses to be a true engineering discipline, such characters will disappear, to be replaced by ‘members of professional bodies’. If this is going to happen, it is a long way off yet.

And what of software itself? Is software an art form? Knuth refers to ‘beautiful programs’, and it is true that an elegantly structured section of code may be a pleasure to look at, at least to another programmer! This point should not be taken too far: a motorway intersection may well be an object of beauty to another construction engineer, but it is probably not for the rest of us. Artistic qualities do however have practical value for software: an elegant program is reasonably likely to be a well-written, maintainable one. It would be worth considering the addition of a little ‘art’ at all stages of the software life cycle. For example:

1. In the specification phase, a common language document is used to show the producer’s understanding of the clients’ needs. A well-written specification will increase the chances that such requirements have been noted; a readable specification may well ensure that both parties read it at all.

2. Software architectures can be elegant or impractical. An elegant architecture is one which ensures that its subsystems work together smoothly and efficiently; an inefficient architecture will be the source of an unnecessary performance overhead.

3. Algorithms can be fluid, smooth in motion or clunky and slow. A well-written algorithm will be more efficient than a badly written one.

4. The code itself can read like a good book or a child’s first essay. If the code is not readable it will quickly become unmaintainable, especially if its author is no longer available to explain it. Code should be self-documenting. If it is not clear exactly what a given code section is doing then it should be rewritten. In the maintenance phase, the code should also explain how it has been modified, by whom and when.
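The fourth point can be sketched in code. Both routines below are invented for the purpose: they compute the same result, but only the second carries its intent with it.

```python
# Opaque: the reader must reverse-engineer the intent.
def f(a):
    t = 0
    for x in a:
        if x > t:
            t = x
    return t

# Self-documenting: the name, docstring and variable names carry the intent.
def longest_call_duration(durations):
    """Return the longest duration in the list, or 0 if it is empty."""
    longest = 0
    for duration in durations:
        if duration > longest:
            longest = duration
    return longest

print(longest_call_duration([12, 45, 7, 33]))  # -> 45
```

The second version needs no separate explanation from its author: a maintainer can read what it does, and why, directly from the source.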

The quality of the product is dependent on the experience of the software developers – and their management – at their craft. Such craftsmanship must be learned. And as we have already seen, there is not always the time or resources available to provide such training. “Peopleware” makes the point that developers should be considered as experts rather than resources, and should be given enough space and facilities to permit them to excel. In DeMarco and Lister’s experience, it is this which ensures the optimal productivity of development groups.

Today’s software developer is somewhere between a craftsman and an engineer, using both science and common sense in his work. Tomorrow’s developer may well be the perfectly trained engineer proving everything as he goes. But he is not there yet. At least for the moment, gurus are a necessity and ‘software beauty’ keeps our minds on the goal.


As the evolution towards true engineering continues, we attempt to increase our understanding of the workings of software by comparing it to the real world. Such modelling happens at all levels, for example:

* Algorithms are written based on boiling and cooling techniques to enable the “shaking down” of data (simulated annealing).

* Prototyping of software products reflects the tradition of building scale models to demonstrate the feasibility of a design.

* Specification paradigms such as neural networks are used to define learning algorithms.
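The first of these can be made concrete with a minimal sketch of simulated annealing; the cost function, neighbour function and all parameter values here are invented for illustration.

```python
import math
import random

def simulated_annealing(cost, neighbour, state, temp=10.0, cooling=0.95, steps=500):
    """Minimise cost() by 'heating' the search, then cooling slowly.

    While the temperature is high, worse moves are sometimes accepted,
    letting the data 'shake down' past local minima before freezing.
    """
    best = state
    for _ in range(steps):
        candidate = neighbour(state)
        delta = cost(candidate) - cost(state)
        # Always accept an improvement; accept a worse move with
        # probability exp(-delta / temp), which shrinks as we cool.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = candidate
            if cost(state) < cost(best):
                best = state
        temp *= cooling
    return best

# Toy problem (invented): find the integer minimising (x - 7)^2, starting far away.
random.seed(42)  # deterministic run for the example
result = simulated_annealing(
    cost=lambda x: (x - 7) ** 2,
    neighbour=lambda x: x + random.choice([-1, 1]),
    state=50,
)
print(result)
```

The cooling schedule mirrors the physical process: early on the search jumps about freely; as the “metal” cools, only improvements survive.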

Approaches to engineering production have also been mapped onto the software world. To indicate the diversity of such models, three such techniques are presented here: the first two come from the world of electronics – the modelling of programmed modules as discrete components, each with a defined behaviour and interfaces (like integrated circuits) – and the creation of an error-free environment (or “cleanroom”) for software production. Finally, measurement techniques have been borrowed from materials engineering as a means of increasing the tangibility of software during its development.

– Software Components and ICs

The principle behind Software Components was first presented by Brooks back in 1975. Using the same principles as in electrical engineering, low-level programmed modules are formalised with a defined external behaviour and interfaces. An example of this approach is the NAG library of programmed functions for numerical computation (or the PHIGS 3D graphics library). More recently, with the advent of object-oriented technology, this approach has been extended to the concept of Software Integrated Circuits – ICs – which are binary files that implement objects.

Brad Cox [*] writes of “the possibility that the software industry might obtain some of the benefits that the silicon chip brought to the hardware industry; the ability of a supplier to deliver a tightly encapsulated unit of functionality that is specialised for its intended function, yet independent of any particular application”. It is indeed a nice thought, but is it possible in practice? Note that within the chip it is a different story, as the same problems as with software occur between the different modules on the silicon.
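The flavour of the Software IC idea can be sketched as follows. The CallCounter component and its interface are invented for this illustration (not taken from Cox): clients depend only on the published interface, so the supplier may replace the internals without touching client code.

```python
class CallCounter:
    """A toy 'software IC': a defined external behaviour and interface.

    Clients use only increment() and total(); the internal
    representation (a plain integer here) is hidden and replaceable.
    """

    def __init__(self):
        self._count = 0  # private by convention

    def increment(self, by=1):
        """Add 'by' calls to the running total; negative counts are rejected."""
        if by < 0:
            raise ValueError("counter only counts upwards")
        self._count += by

    def total(self):
        """Report the number of calls counted so far."""
        return self._count

meter = CallCounter()
meter.increment()
meter.increment(by=3)
print(meter.total())  # -> 4
```

As with a silicon chip, the data sheet (here the docstrings) specifies behaviour at the pins; what happens inside is the supplier’s business.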

A related technology is the “code reuse” approach to software development, in which software is designed with the explicit aim of using previously written software components, and any new components are designed and programmed in a way that ensures their capacity for reuse. Note that this is very different from developing a new software product based on a previously developed, seemingly similar application: that approach often ends up costing more than it would to rewrite the supposedly shared packages.

– Cleanroom Software Engineering

Pioneered at IBM, the cleanroom approach aims to prevent defects from entering the software in the first place, combining formal specification and verification with statistical quality control rather than relying on debugging. It is possibly the best way to go, but it is not certain that we are ready for it yet. It is evolution, not revolution, that is necessary.

– Software Metrics

It is not enough to transpose the rules of electrical engineering onto software, to reapply the same management techniques and to expect them to work.

What is interesting is that such techniques, proved and re-proved, have met with little interest. This is probably due to the NIH syndrome again, coupled with a general lack of understanding about software: methods of optimising parts of the process will not necessarily be taken into account.


It is clear that software production will become an engineering discipline. In its current state it is lacking not in itself, but in sound underlying principles, both for software and for the understanding of the problems to be solved. As to whether we can call Software Engineering by that name, the debate continues; what is sure is that the implications of the term go a long way towards ensuring that software meets our expectations of it, both as suppliers and clients. Software development NEEDS to be an engineering discipline, with all its implications.


[] Brad J. Cox, Andrew J. Novobilski, “Object-Oriented Programming: An Evolutionary Approach”, 2nd Edition, Addison-Wesley, 1991

[] Sue A. Conger, “The New Software Engineering”, 1st Edition, International Thomson Publishing, 1994

[] Donald E. Knuth, “Computer Programming as an Art”, ACM Turing Award Lectures: The First Twenty Years, pp. 34–46, ACM Press, 1987

[] M. F. Devon, “Méthodologie de développement des logiciels : l’expérience IBM La Gaude”, lecture at the Thompson campus, 29 May 1991

[] Frederick P. Brooks, “The Mythical Man-Month”, Addison-Wesley, 1975

[] Tom DeMarco, Timothy Lister, “Peopleware: Productive Projects and Teams”, Dorset House, 1987

[] IEEE, “IEEE Standard for Developing Software Life Cycle Processes”, IEEE Std 1074-1991
