The gaps between the processes

Knowledge management—not just in software engineering and not just digital knowledge management—has long had to account for tacit knowledge: the things that people know, but never say.

“A lesser blog would reproduce the Donald Rumsfeld statement about known unknowns at this point”—Graham Lee.

Where do people pick up this tacit knowledge? In conversations, at conferences, by reading books: so it’s not tacit in the sense that it’s literally never spoken. Tacit knowledge is tacit in that people rely on it in particular scenarios without being explicit that they’re relying on it.

Every software engineering process or methodology works (or not) in the presence of tacit knowledge. How they work in its presence is illuminating, mostly for telling us something about how the people who created the process think about the world.

The waterfall-as-practiced model (i.e. not the Royceian “do it twice” version, but the “do it once” version practiced by the DoD and everyone else in the 1980s) seems to rely on all knowledge being explicit, and available. There are truths about the universe that are your software’s requirements, and if you spend enough time and do a good enough job at requirements gathering, you can know what those requirements are before you come to write a specification from them.

Every iterative and incremental model, from the Royceian waterfall-as-described process where you write one to throw away then write it again, through the spiral model, to the various scrums and scrumbuts in practice today, allows for people saying “oh, right, no, not that, because…” and bringing that tacit knowledge into the open. They might not express that that’s why they have iterations, they might say it’s for early and continuous delivery of value or to identify and mitigate risks, but they still do give space for “what I meant to say was…”.

Structured programming approaches expect everything to be written down. A use case (in the original, technical meaning, not the current fancy-way-to-say-“situation” meaning) is a pedantic document that describes all of the interactions someone might have with “the system” in pursuit of a particular goal.

Lightweight approaches expect the bare minimum to be written down, and everything to be elucidated. A user story (in the original, technical meaning, not the current fancy-way-to-say-“Jira ticket” meaning) is a placeholder for a conversation. You’re meant to write the story on one side of an index card, then, as you have the conversations, capture the rest of the information that everybody needs on the other side.

Internal wikis trade on the idea that if only it were really easy to edit a knowledge base, we’d all just write down our tacit knowledge and make it explicit, while also somehow making it salient, searchable, and up to date. Intranets trade on the idea that nobody knows anything outside of their lane and that it’s up to the Department of XYZ to make the Policy for XYing your Z available to all employees or no Z will ever get XYd in the correct way.

Interestingly, a recent development in software engineering methodology seems to be both positivist and dependent on tacit knowledge. That’s the “web-scale” methodology, where you never talk to customers; instead you run A/B tests, create metrics, and optimise for those metrics. Positivist, because there is assumed to be a success factor that can be measured numerically and attained by moving the numbers. Tacit, because no customer ever says what they want or why they want it: instead, organisations create numeric goals that represent what a million, or a billion, customers are guessed to want, and strive to make the numbers go up (or down, as appropriate).
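A minimal sketch of that loop, in Python with invented names and simulated telemetry (not any particular product’s pipeline), might look like this: bucket users into variants, compute the metric, and keep whichever variant moves the number.

```python
import random
from statistics import mean

def assign_variant(user_id: int) -> str:
    """Deterministically bucket a user into variant A or B."""
    return "A" if user_id % 2 == 0 else "B"

def conversion_rate(outcomes: list[int]) -> float:
    """The metric being optimised: the fraction of sessions that converted."""
    return mean(outcomes) if outcomes else 0.0

# Simulated sessions: (user_id, converted?) pairs standing in for real telemetry.
sessions = [(uid, random.random() < 0.1) for uid in range(10_000)]

outcomes = {"A": [], "B": []}
for uid, converted in sessions:
    outcomes[assign_variant(uid)].append(int(converted))

# "Make the number go up": ship whichever variant wins on the metric,
# without ever asking a customer what they wanted or why.
winner = max(outcomes, key=lambda v: conversion_rate(outcomes[v]))
print({v: round(conversion_rate(o), 4) for v, o in outcomes.items()}, "->", winner)
```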

I suspect that this approach may be the biggest methodological innovation in software in more than two decades, and also one that methodologists seem to be currently quiet on.

Posted in whatevs | Leave a comment

In which things are not known

In the last episode—Is software engineering a thing?—I (apparently controversially) suggested that software is the reification of thought, and that software engineering is thus the art of reifying thought, and that thus there can’t be any single one-size-fits-all software engineering approach. Let’s dig in.

One of the big questions in a software project, hence one of the big topics in software engineering, is requirements: who wants the software to do something, do we need to pay attention to them, and what do they want it to do? We’re already well into the realm of individuals and interactions—whether the people building the thing and the people telling them what to build can agree on what one of the two groups thinks they mean—and haven’t got as far as building software yet. There’s plenty of software engineering ink spilled in this domain, but it can be hard even to decide whether we agree with some of it at a metaphysical level.

Taking a convenience sample (i.e. what’s the nearest book on my shelf that I think might mention software requirements), Shari Pfleeger’s “Software Engineering: the Production of Quality Software” asks “What is a requirement?” and supplies its own answer:

A requirement is a feature of the system or a description of something the system is capable of doing in order to fulfill the system’s purpose.

Enquiring minds have many questions, but let’s focus on questions pertaining to reality. Does the system have an objective, positive purpose that can be said to exist? Does the requirement support that purpose, or does someone just think or hope that it does? Does the requirement-as-description accurately capture the expectation of the person who thought it?

With this level of reflection, we can still expect a field of software engineering to say something about requirements, and for understanding that to help with constructing software, but not for it to supply a single solution to “how to requirements”. And without that, much of the rest of software engineering naturally bifurcates or multifurcates. For example, verification and validation is about whether the software does what it ought—or whether someone thinks the software does what they think it ought—but we’re back to asking whether we have accurately captured that (or even can).

Posted in software-engineering | Leave a comment

Is software engineering a thing?

In the title I’m kind of punning on the word “a” (it’s my blog, and I get to do what I want). Is there a single thing, software engineering, that all people making software should (or could, or would find to be beneficial) do?

It’s a question that’s sat with me all through my doctoral research, and has sat with the field for much longer. In 2007, Diane Kelly argued that there’s a “chasm” between scientific computing and commercial software, and that leaders should get together and identify their differences to come up with discipline-specific approaches to writing software. In other words, that there isn’t a universal “software engineering” that’s the best approach to writing software.

A decade later, Tim Storer described the chasm as a skills gap that needed to be closed. In this view, software engineering is the application of software knowledge to the production of software, and computational scientists don’t have enough of that knowledge to do it correctly.

There’s a whole community of research devoted to uncovering a grand unified theory of software engineering, in analogy to the Grand Unified Theories of physics that unite the electromagnetic force with the weak and strong nuclear forces. Members of this community (which goes by the name SEMAT: Software Engineering Method and Theory) start not by constructing their theory but by deconstructing others.

They argue (convincingly) that any particular software engineering methodology is flawed, because it recommends a whole suite of practices but we don’t know which are relevant and useful, or how they interact; we just know that the methodologists managed to trademark their particular grab bag of practices and argue that if you’re not making enough software, you’re not doing it the way they propose. While there might be something to daily stand-ups, or to organising work by sprints, or to holding retrospectives, there’s nothing to Scrum because selecting all of these practices together is entirely arbitrary.

What the SEMAT folks argue for instead is more of a systems approach to software (in a Dana Meadows sense rather than a Jerry Weinberg sense): the team use their way of working to do work to generate a software system that realises requirements to exploit an opportunity identified by stakeholders; which part of that process is the most broken and what can you do to make it less broken than some other part?

I think that’s a great way to think about it, and I also don’t think that a GUT of software engineering will arise from it. To me, software is the reification of thought in a reusable and somewhat abstracted structure: we understand something about a context and try to capture our (or, commonly, someone else’s) understanding of that context in a way that can be automatically calculated using digital electronic equipment. To say that a universal theory of making software exists is to say that a universal theory of understanding thought exists, and we aren’t there.

Many of the open problems in software engineering boil down to not being able to capture thoughts precisely. Software engineering ethics is the inability to define universal rights and wrongs (which may not exist anyway). Software quality management is the inability to agree what the understanding was and whether we’ve captured it correctly. The fact that we don’t agree on whether object-oriented, structured, functional, or some other approach to analysis and design is the best choice is a sign that we don’t agree on how to encode thought in a way that we can think about.

In other words, software construction is thinking about thought, it is meta-thought. And we don’t agree enough on how thought works to be able to get consensus on the best way to think about thought, let alone the best way to encapsulate that thinking about thought in a saleable product.

Posted in software-engineering | 12 Comments

Resolutions

Although I didn’t make any resolutions this new year, it’s still a time for change. That’s because I’m finally submitting my D.Phil. thesis (if I’m on time, that will be before January 18th), so I’ve already been putting things in place that mean I’ll start doing things differently this year, without having to decide on January 1st that I’m going to do things differently (a decision that would invariably last until about the 3rd).

In my case, this year is about society. Writing a doctoral thesis means a lot of time alone, so I’ve joined committees for some clubs I’m in, and will be looking to do other things which mean spending more time finding out what other people want to do and helping them to do it.

Posted in whatevs | Leave a comment

On the Consolation of Software Engineering

I’m currently reading Boethius’s writing on the consolation of philosophy. Imprisoned awaiting the death penalty in 523 (for treason against King Theodoric), Boethius imagined a conversation with the personification of Philosophy herself, a woman of variable height whose fine dress was torn by various previous philosophers who had snatched tatters from it and imagined that they had the whole thing. The work has been available in English translation since the ninth century, when King Alfred commanded it be translated, though I’m reading a much more recent Penguin Classics translation.

The work leads me to imagine De Consolatione Ingenariae Computatraliae, in which someone is visited by the anthropomorphic personification of software engineering (in my headcanon this is Bruce Boxleitner as Tron), and the visitor uses the Socratic method to explore why the narrator chose to ignore so much knowledge of computing as they cut corners to close ticket after ticket in their career.

Posted in book | Leave a comment

In which a life re-emerges

While it’s far from finished, my PhD thesis is now complete: there are no to-do items left, no empty sections, no placeholders. Now the proof-reading, editing and corrections continue in earnest.

I look forward to poking my head out of that rabbit hole too, and finding out what else has been happening in the world since October 2020.

Posted in academia | Leave a comment

The Cataract of Software Delivery

There’s this paper from August 1970, called Managing the Development of Large Software Systems, that’s considered something of a classic (either for good or for bad, depending on your worldview). The discussion often goes something like this:

Let’s say I have some software to build, and I think it’s going to take about a year to build it. Few people are going to happily say “go away for a year and tell me when it’s done”. Instead, most people will want to break down that year into smaller chunks, so they can monitor progress and have confidence that things are on track. The question then is how do we perform this break down?

The waterfall style, as suggested by the Royce sketch, does it by the activity we are doing. So our 1 year project might be broken down into 2 months of analysis, followed by 4 months design, 3 months of coding, and 3 months of testing.

Martin Fowler, Waterfall Process

Within the phase space of Royce paper discussions, there are those who say that he argues for the strict sequential process as Fowler does, based on Figure 2 in the paper. There are those who say that he predicts some iterative, evolutionary, (dare we say it?) agile contribution to the process, based on Figure 4 in the paper. But many agree with Fowler when he says “this paper seems to be universally acknowledged as the source of the notion of waterfall”.

It isn’t. The overall process had already been described in 1956 as a “structured” (ever wonder where the phrase “structured programming” comes from, when it clearly doesn’t refer to having data structures?), “top-down” approach, by Herb Benington in Production of Large Computer Programs. Methodologists contemporary with and later than Royce, including luminaries like Barry W. Boehm and Agilistas like Alan Moran, knew about this paper, so even if we don’t use it in our software mythologies any more, it isn’t obscure and wasn’t obscure in Royce’s time.

Both Benington and Royce write within a context of large-scale government-funded projects: Benington from his experience with SAGE (the Semi-Automatic Ground Environment) and Royce at the aerospace contractor TRW. Both talk about phased approaches with dependencies between tasks (so you can’t do coding until you’ve done design, for example). Both knew about the value of prototyping, though in a mistake that makes Hoare’s introduction of NULL look like loose change, Benington didn’t mention it until 1983:

I do not mention it in the attached paper, but we undertook the programming only after we had assembled an experimental prototype of 35,000 instructions of code that performed all of the bare-bone functions of air defense. Twenty people understood in detail the performance of those 35,000 instructions; they knew what each module would do, they understood the interfaces, and they understood the performance requirements. People should be very cautious about writing top-down specs without having this detailed knowledge, so that the decision-maker who has the “requirement” can make the proper trade-offs between performance, cost, and risk.

To underscore this point, the biggest mistake we made in producing the SAGE computer program was that we attempted to make too large a jump from the 35,000 instructions we had operating on the much simpler Whirlwind I computer to the more than 100,000 instructions on the much more powerful IBM SAGE computer. If I had it to do over again, I would have built a framework that would have enabled us to handle 250,000 instructions, but I would have transliterated almost directly only the 35,000 instructions we had in hand on this framework. Then I would have worked to test and evolve a system. I estimate that this evolving approach would have reduced our overall software development costs by 50 percent.

Herb Benington, Production of Large Computer Programs

Royce, on the other hand, describes the “write one to throw away” approach, in which a prototype informs the design of the final system but doesn’t become part of it:

A preliminary program design phase has been inserted between the software requirements generation phase and the analysis phase. This procedure can be criticized on the basis that the program designer is forced to design in the relative vacuum of initial software requirements without any existing analysis… As a result, his preliminary design may be substantially in error as compared to his design if he were to wait until the analysis was complete. This criticism is correct but it misses the point. By this technique the program designer assures that the software will not fail because of storage, timing, and data flux reasons. As the analysis proceeds in the succeeding phase the program designer must impose on the analyst the storage, timing, and operational constraints in such a way that he senses the consequences. When he justifiably requires more of this kind of resource in order to implement his equations it must be simultaneously snatched from his analyst compatriots. In this way all the analysts and all the program designers will contribute to a meaningful design process which will culminate in the proper allocation of execution time and storage resources. If the total resources to be applied are insufficient or if the embryo operational design is wrong it will be recognized at this earlier stage and the iteration with requirements and preliminary design can be redone before final design, coding and test commences.

Winston Royce, Managing the Development of Large Software Systems

Royce’s goal with this phase was to buttress the phased development approach, which he believed to be “fundamentally sound”, by adding data in earlier phases that informed the later phases. Indeed, if Royce finds any gap in the documentation on a project, “my first recommendation is simple. Replace project management. Stop all activities not related to documentation. Bring the documentation up to acceptable standards. Management of software is simply impossible without a very high degree of documentation.”

So we have a phased, top-down, sequential development process in 1956, that survived until 1970 when more phases were added to reduce the risk accepted in lower phases. Good job those lightweight methodologists came along in the 1990s and saved us with their iterative, incremental development, right?

Not quite. Before Boehm knew of Benington’s paper, he’d already read Hosier’s Pitfalls and Safeguards in Real-Time Digital Systems with Emphasis on Programming from June 1961. Hosier presents (in figure 2) a flow chart of program development, including feedback from much of the process and explicit “revision” steps in the control routine, unit programming, and assembly.

It’s not so much that nobody knew about iterative development, or that nobody did it, or that nobody shared their knowledge. Perhaps it just wasn’t in vogue.

Posted in whatevs | 1 Comment

Non-standard components

Another day, another exercise from Software: A Technical History…

A software engineering project might include both standard and nonstandard engineering components. Give an example of a software engineering project where this would be appropriate.

Kim W. Tracy, Software: A Technical History (p. 43)

Buy vs. build (or, in the age of free software, acquire vs. build) is perhaps the most important question in any software engineering endeavor. I would go so far as to say that the solution to the software crisis wasn’t object-oriented programming, or agile software development, or any other change in the related methods and tools of software—those have largely been fad-driven. It was the confluence of these two seminal events:

  • The creation of the GNU project by Richard Stallman, which popularized the Four Freedoms, which led to the Debian Social Contract, which led to the Open Source Definition.
  • The dot-com crash, which popularized not having money to spend on software licenses or developers, which led to adopting free software components.

This, the creation of de facto standards in software commons, then drove adoption of the LAMP stack on the technology side, and fast-feedback processes including the lightweight methodologies that became known as agile, lean startup, lean software, and so on.

Staffing costs aside, software development can be very inexpensive at the outset, provided that the developers control the scope of their initiative to avoid “boiling the ocean”. Therefore it can be easy and, to some extent, low-impact, to get the buy-vs-build calculus wrong and build things it’d be better to buy. But, as code is a liability, making the wrong choice is cheap today and expensive tomorrow.

One technique that helps to identify whether to use a standard component is a Wardley map, which answers the question “how closely-related is this part of our solution to our value proposition?” If it’s something you need, but not something that’s core to your unique provision, there’s little need for a unique component. If it’s an important part of your differentiation, it probably ought to be different.

Another is Cynefin, which answers the question “what does this problem domain look like?” If it’s an obvious problem, or a complicated problem, the solution is deterministic and you can look to existing examples. If it’s complex or chaotic, you need to be more adaptive, so you don’t want to be as constrained by what other people saw.

Bringing this all together into an example: the Global.health project has a goal to provide timely access to epidemiological data to researchers, the press, and the public. “Providing timely access to…” is a well-solved problem, so the project uses standard components there: Linux, HTTPS, hosted databases, event-driven processing. “Epidemiological data” is a complex problem that became chaotic during COVID-19 (and does again with other outbreaks), so the project uses nonstandard components there: its own schemata, custom code, and APIs for researchers to write their own integrations.
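As a minimal sketch of that split (in Python, with invented field names rather than Global.health’s actual schema): the domain-specific record type is the nonstandard component, while serving it as JSON over an ordinary HTTP API is the standard one.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class CaseRecord:
    """The nonstandard part: a project-specific epidemiological schema (illustrative fields)."""
    pathogen: str
    confirmation_date: date
    location: str
    outcome: str | None = None

def to_api_json(record: CaseRecord) -> str:
    """The standard part: publish records as JSON, served over plain HTTPS like any other API."""
    payload = asdict(record)
    payload["confirmation_date"] = record.confirmation_date.isoformat()
    return json.dumps(payload)

print(to_api_json(CaseRecord("SARS-CoV-2", date(2020, 3, 1), "Oxford, UK")))
```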

Posted in history | Tagged | Leave a comment

Specific physical phenomena

Continuing the theme of exploring the exercises in Software: A Technical History:

Give an example of a specific physical phenomenon that software depends on in order to run. Can a different physical phenomenon be used? If so, give another example phenomenon. If not, explain why that’s the only physical phenomenon that can be used.

Kim W. Tracy, Software: A Technical History (p. 43)

My short, but accurate, answer is “none”. Referring back to the definition of software I quoted in Related methods and tools, nothing in that definition implies or requires any particular physical device, technology, or other phenomenon.

Exploring the history of computing, it’s clear that the inventors and theoreticians saw computers as automation (or perhaps more accurately flawless repetition) of thought:

We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions…

Alan M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem (§1)

Or earlier:

Whenever engines of this kind exist in the capitals and universities of the world, it is obvious that all those enquirers who wish to put their theories to the test of number, will apply their efforts so to shape the analytical results at which they have arrived, that they shall be susceptible of calculation by machinery in the shortest possible time, and the whole course of their analysis will be directed towards this object. Those who neglect the indication will find few who will avail themselves of formulae whose computation requires the expense and the error attendant on human aid.

Charles Babbage, On the Mathematical Powers of the Calculating Engine

For any particular physical tool you see applied to computing—mercury delay line memory, “silicon” chips (nowadays the silicon wafer is mostly a substrate for other semiconductors and metals), relays, thermionic valves, brass cogs, hydraulic tubes—you can replace it with other tools or even with a person using no tools at all.

So it was then, when mechanical or digital computers automated the work of human computers. As it was in the last century, when the “I.T.” wave displaced human clerical assistants and rendered the typing pool redundant, and desktop publishing closed the type shop. Thus we see today that categorization systems based on “A.I.” are validated on their performance when compared with human categorizers.

Nothing about today’s computers is physically necessary for their function, although through a process of iterating discovery with development we’ve consolidated on a physical process (integrated semiconductor circuits) that has particular cost, power, performance, manufacturing, and staffing qualities. A more interesting question to ask would be: what are the human relations that software depends on in order to run? In other words, what was it about these computers, typists, typesetters, paraprofessionals, and so on that made their work the target of software? Can a different human relation be used?

Posted in history | Tagged | Leave a comment

Related methods and tools

The book Software: A Technical History has plenty of exercises and projects at the end of each chapter, to get readers thinking about software and its history and to motivate additional research. For example, here’s exercise 1 (of 27 exercises and 8 projects) from chapter 1 (Introduction to Software History):

Why does the definition of software in this text include “related methods and tools?” What does knowing about the methods and tools used to develop software tell us about the software?

Kim W. Tracy, Software: A Technical History (p. 43)

The definition is this: Software is the set of programs, concepts, tools, and methods used to produce a running system on computing devices. (p. 2)

Those tools and methods are sometimes “a running system on computing devices” themselves, in which case they trivially fall into the definition of software: an Integrated Development Environment is a software system that people use to produce software systems.

Sometimes, the tools aren’t themselves “a running system on computing devices”. Flowcharts, UML diagrams, whiteboards, card punches, graph-paper bitmaps, and other artifacts are not themselves applications of computing, and neither are methods and methodologies like Object-Oriented Programming, the Personal Software Process, or XP.

Both the computer-based and non-computer-based tools and methods influence the system that people create. The system is a realization on a computing machine of an abstract design that’s intended to address some set of desires or needs. The design itself is an important part of the software because it’s the thing that the running system is intended to realize.

The tools and methods are themselves part of the design because they influence and constrain how people think about the system they’re realizing, the attributes of the realized system, and how people collaborate to produce that system. As an example, software developers using Simula-67 create classes as types that encapsulate part of their design, and instantiate objects as example members of those types within their system. Software developers using Smalltalk do the same thing, but documentation about Simula-67 encourages thinking about hierarchical type systems within a structured programming paradigm, and documentation about Smalltalk encourages thinking about active objects within an object-oriented paradigm. So people in a community of Simula-67 programmers and people in a community of Smalltalk programmers would design different systems that work in different ways, even when using very similar tools.
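For a concrete (if anachronistic) illustration of that shared mechanism, here in Python rather than Simula-67 or Smalltalk and with an invented domain: a class captures part of a design as a type, and objects are instantiated as example members of that type within the running system.

```python
class Account:
    """Part of a hypothetical banking design, captured as a type."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

# Objects instantiated as example members of that type within the system.
alice = Account("Alice")
alice.deposit(100.0)
print(alice.owner, alice.balance)
```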

When I read the definition of software and the exercise, I initially thought that it was wrong to think of the tools and methods as part of the software, even though it’s important to include the tools and methods (and the associated social and cultural context of their legitimacy, popularity, and importance) in a consideration of software history. I considered a tighter definition, that includes the running programs, the (intangible) artifacts that comprise those programs, and any source code used in creating those artifacts. Reflecting on the exercise and writing this response, I see that this is an arbitrary place to draw the boundary, and that a definition of software that includes the design and methods used in creating the artifacts is also workable.

Posted in books, history | Tagged | 2 Comments