There’s this paper from August 1970, called Managing the Development of Large Software Systems, that’s considered something of a classic (for good or for ill, depending on your worldview). The discussion often goes something like this:
Let’s say I have some software to build, and I think it’s going to take about a year to build it. Few people are going to happily say “go away for a year and tell me when it’s done”. Instead, most people will want to break down that year into smaller chunks, so they can monitor progress and have confidence that things are on track. The question then is how do we perform this breakdown?
The waterfall style, as suggested by the Royce sketch, does it by the activity we are doing. So our 1-year project might be broken down into 2 months of analysis, followed by 4 months of design, 3 months of coding, and 3 months of testing.
Martin Fowler, Waterfall Process
Within the phase space of Royce paper discussions, there are those who say that he argues for the strict sequential process that Fowler describes, based on Figure 2 in the paper. There are those who say that he predicts some iterative, evolutionary, (dare we say it?) agile contribution to the process, based on Figure 4 in the paper. But many agree with Fowler when he says “this paper seems to be universally acknowledged as the source of the notion of waterfall”.
It isn’t. The overall process had already been described in 1956 as a “structured” (ever wonder where the phrase “structured programming” comes from, when it clearly doesn’t refer to having data structures?), “top-down” approach, by Herb Benington in Production of Large Computer Programs. Methodologists contemporary with and later than Royce, including luminaries like Barry W. Boehm and Agilistas like Alan Moran, knew about this paper: even if we don’t use it in our software mythologies any more, it isn’t obscure now and wasn’t obscure in Royce’s time.
Both Benington and Royce write within a context of large-scale government-funded projects: Benington from his experience with SAGE (the Semi-Automatic Ground Environment) and Royce at the aerospace contractor TRW. Both talk about phased approaches with dependencies between tasks (so you can’t do coding until you’ve done design, for example). Both knew about the value of prototyping, though, in a mistake that makes Hoare’s introduction of NULL look like loose change, Benington didn’t mention it until 1983:
I do not mention it in the attached paper, but we undertook the programming only after we had assembled an experimental prototype of 35,000 instructions of code that performed all of the bare-bone functions of air defense. Twenty people understood in detail the performance of those 35,000 instructions; they knew what each module would do, they understood the interfaces, and they understood the performance requirements. People should be very cautious about writing top-down specs without having this detailed knowledge, so that the decision-maker who has the “requirement” can make the proper trade-offs between performance, cost, and risk.
To underscore this point, the biggest mistake we made in producing the SAGE computer program was that we attempted to make too large a jump from the 35,000 instructions we had operating on the much simpler Whirlwind I computer to the more than 100,000 instructions on the much more powerful IBM SAGE computer. If I had it to do over again, I would have built a framework that would have enabled us to handle 250,000 instructions, but I would have transliterated almost directly only the 35,000 instructions we had in hand on this framework. Then I would have worked to test and evolve a system. I estimate that this evolving approach would have reduced our overall software development costs by 50 percent.
Herb Benington, Production of Large Computer Programs
Royce, on the other hand, describes the “write one to throw away” approach, in which a prototype informs the design of the final system but doesn’t become part of it:
A preliminary program design phase has been inserted between the software requirements generation phase and the analysis phase. This procedure can be criticized on the basis that the program designer is forced to design in the relative vacuum of initial software requirements without any existing analysis. As a result, his preliminary design may be substantially in error as compared to his design if he were to wait until the analysis was complete. This criticism is correct but it misses the point. By this technique the program designer assures that the software will not fail because of storage, timing, and data flux reasons. As the analysis proceeds in the succeeding phase the program designer must impose on the analyst the storage, timing, and operational constraints in such a way that he senses the consequences. When he justifiably requires more of this kind of resource in order to implement his equations it must be simultaneously snatched from his analyst compatriots. In this way all the analysts and all the program designers will contribute to a meaningful design process which will culminate in the proper allocation of execution time and storage resources. If the total resources to be applied are insufficient or if the embryo operational design is wrong it will be recognized at this earlier stage and the iteration with requirements and preliminary design can be redone before final design, coding and test commences.
Winston Royce, Managing the Development of Large Software Systems
Royce’s goal with this phase was to buttress the phased development approach, which he believed to be “fundamentally sound”, by adding data in earlier phases that informed the later phases. Indeed, if Royce finds any gap in the documentation on a project: “my first recommendation is simple. Replace project management. Stop all activities not related to documentation. Bring the documentation up to acceptable standards. Management of software is simply impossible without a very high degree of documentation.”
So we have a phased, top-down, sequential development process in 1956 that survived until 1970, when more phases were added to reduce the risk accepted in later phases. Good job those lightweight methodologists came along in the 1990s and saved us with their iterative, incremental development, right?
Not quite. Before Boehm knew of Benington’s paper, he’d already read Hosier’s Pitfalls and Safeguards in Real-Time Digital Systems with Emphasis on Programming from June 1961. Hosier presents (in Figure 2) a flow chart of program development, including feedback from much of the process and explicit “revision” steps in the control routine, unit programming, and assembly.
It’s not so much that nobody knew about iterative development, or that nobody did it, or that nobody shared their knowledge. Perhaps it just wasn’t in vogue.
Iterative development has been independently discovered many times, as its historians have documented:

“Planning and coding of problems for an electronic computing instrument” by H. H. Goldstine and J. von Neumann (1947).

“Iterative and incremental development: A brief history” by C. Larman and V. R. Basili (2003), in which the principal author of DOD-STD-2167 expressed regret for creating that strict, waterfall-based standard.

“Short history of software methods” by D. F. Rico.