YX problem

Software people are always all up in the XY problem: someone asks about how to do X when what they’re really trying to solve is Y. I find the YX problem much more frustrating: where software people decide that they want to answer question Y even though what someone asks is question X.

I’ve seen a few different manifestations of this pattern:

  • Respondent doesn’t know the answer to X, but does know the answer to Y, and hopes that answering Y demonstrates expertise/usefulness.
  • Respondent doesn’t know the answer to X, but riffs on what the answer probably would be, and ends up answering Y.
  • Respondent doesn’t believe that the querent should be trying X and thinks they should be trying Y instead; respondent didn’t ask querent the context for X but jumped straight to answering Y.
  • Respondent knows of a process Y that leads up to the querent trying X, and decides to enumerate the steps of that process even though they know that the querent is already trying X.
  • Respondent misunderstood question X to be question Y.

The common advice on questions for software people is “How to ask questions the smart way”. The problem with this advice is that it’s written from the perspective of an asymmetric relationship: the respondent is a busy expert, the querent is an idle dilettante; the querent has a responsibility to frame their question in the optimum way for the expert to impart wisdom to the idler.

Frequently the situation is more symmetric: we’re both busy experts, and we both have incomplete knowledge of both the question domain and what we’re trying to achieve. Have some patience with other people (whichever side of the interaction you’re on), and assume good faith on the part of all involved until they present contrary evidence. That means starting from the assumption that someone asked question X because they want an answer to question X.

Posted in learning | 2 Comments

Floating point numbers aren’t weird

When people say “floating point numbers are weird”, they typically mean that the IEEE 754 floating point representation for numbers doesn’t meet their needs, or maybe that it meets their needs but it is surprising in its behaviour because it doesn’t match their intuitive understanding of how numbers work.

IEEE 754 isn’t weird, it’s just designed for a specific scenario. One where having lots of different representations of NaN makes sense, because they can carry information about what calculation led to the NaN, and where you shouldn’t do equality comparisons on NaN because no NaN compares equal to anything, not even itself. One where having positive and negative representations of 0 makes sense. One where…you get the idea.

(What is that specific scenario? It’s one where developers need to represent real numbers, need reliable error-signalling, and don’t necessarily understand the limitations well enough to design a good system or handle the corner cases themselves. Once you’ve noticed that IEEE 754 does something you consider weird, you’ve probably graduated out of that audience and are ready to move on.)
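Those choices are easy to observe from a mainstream language. Here’s a quick sketch in Python, whose floats are IEEE 754 double precision on any platform you’re likely to meet:

    import math

    nan = float("nan")
    print(nan == nan)        # False: no NaN compares equal to anything, itself included,
    print(math.isnan(nan))   # True:  so you ask "is this NaN?" rather than comparing

    pos, neg = 0.0, -0.0
    print(pos == neg)                # True: the two zeros compare equal...
    print(math.copysign(1.0, neg))   # -1.0: ...but the sign is really there,
    print(math.atan2(0.0, -0.0))     # 3.141592653589793: and can change a result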

But you were sold a general-purpose programming language running on a general-purpose computer, and the only representations of numbers they support are actually not general purpose: fixed-width integers with optional two’s complement negatives, and IEEE 754 floating point. You don’t have to use those options, and you don’t have to feel like you must be holding it wrong because your supposedly general-purpose programming environment only lets you use specific types of numbers that don’t fit your purpose.

Check out decimal representations, arbitrary precision representations, posits, fractional representations, alternative rounding choices, and open up the possibility of general-purpose numbers in your general-purpose computer.
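Two of those alternatives ship in Python’s standard library, so trying them costs nothing. A minimal sketch:

    from decimal import Decimal
    from fractions import Fraction

    # Binary floating point cannot represent 0.1 exactly:
    print(0.1 + 0.2 == 0.3)                  # False

    # A decimal representation can:
    print(Decimal("0.1") + Decimal("0.2"))   # 0.3

    # A fractional representation is exact for any ratio of integers:
    print(Fraction(1, 3) + Fraction(1, 6))   # 1/2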

Posted in software-engineering | Leave a comment

Still no silver bullet?

In his 1986 article No Silver Bullet—Essence and Accident in Software Engineering, Fred Brooks suggests that there’ll never be a single tool, technique, or fad that realises an order-of-magnitude improvement in software engineering productivity. His reason is simple: if there were, it would be because current practices make software engineering ten times more onerous than it needs to be, and there’s no evidence that this is the case. Instead, software engineering is complex because it provides complex solutions to complex problems, and that complexity can’t be removed without failing to solve the complex problem.

Unfortunately, the “hopes for the silver” that he described as not being silver bullets in the 1980s are still sold as silver bullets.

  • Ada and other high-level language advances. “Ada will not prove to be the silver bullet that slays the software productivity monster. It is, after all, just another high-level language, and the big payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions.” Why, then, do we still have a Cambrian explosion of new programming languages, and evangelism strike forces pooh-poohing all software that wasn’t written in the new hotness? On the plus side, Brooks identifies that “switching to [Ada will be seen to have] occasioned training programmers in modern software design techniques”. Is that happening in strike force land?
  • Object-oriented programming. “Such advances can do no more than to remove all the accidental difficulties from the expression of the design. The complexity of the design itself is essential; and such attacks make no change whatever in that.” The same ought to go for the recent resurgence in functional programming as a silver bullet idea: unless our programs were 10x as complex as they need to be, applying new design constraints makes equally complex programs, specified in a different way.
  • Artificial intelligence. “The hard thing about building software is deciding what to say, not saying it. No facilitation of expression can give more than marginal gains.” This is still true.
  • Expert systems. “The most powerful contribution of expert systems will surely be to put at the service of the inexperienced programmer the experience and accumulated wisdom of the best programmers. This is no small contribution.” This didn’t happen, and expert systems are no longer pursued. Perhaps this silver bullet has been dissolved.
  • “Automatic” programming. “It is hard to see how such techniques generalize to the wider world of the ordinary software system, where cases with such neat properties [as ready characterisation by few parameters, many known methods of solution, and existing extensive analysis leading to rules-based techniques for selecting solutions] are the exception. It is hard even to imagine how this breakthrough in generalization could conceivably occur.”
  • Graphical programming. “Software is very difficult to visualize. Whether we diagram control flow, variable scope nesting, variable cross-references, data flow, hierarchical data structures, or whatever, we feel only one dimension of the intricately interlocked software elephant.” And yet visual “no-code solutions” proliferate.
  • Program verification. “The hardest part of the software task is arriving at a complete and consistent specification, and much of the essence of building a program is in fact the debugging of the specification.” Indeed program verification is applied more widely now, but few even among its adherents would call it a silver bullet.
  • Environments and tools. “By its very nature, the return from now on must be marginal.” And yet software developers flock to favoured IDEs like gnus to watering holes.
  • Workstations. “More powerful workstations we surely welcome. Magical enhancements from them we cannot expect.” This seems to have held; remember that at the time Rational was a developer workstation company, who then moved into methodologies.

Meanwhile, all of his “promising attacks on the conceptual essence” have accelerated in adoption since his time.

  • Buy versus build. Thanks to free software, we now have don’t-buy versus build.
  • Requirements refinement and rapid prototyping. We went through Rapid Application Development, and now have lean startup and minimum viable products.
  • Incremental development—grow, not build, software. This has been huge. Even the most staid of enterprises pay at least some lip service to an Agile-style methodology, and can validate their ideas in a month where they used to wait multiple years.
  • Great designers. Again, thanks to free software, a lot more software is developed out in the open, so we can crib designs that work and avoid those that don’t. Whether or not we do is a different matter; I think Brooks’s observations on this point, which close the whole paper, are still valid today.

Posted in design, software-engineering | 3 Comments

On whiteboard coding

Another day in which someone lamented to me the demeaning nature of the interview coding challenge. It is indeed embarrassing when someone with more than two decades of software engineering experience is asked to complete a gotcha-style programming task under the watchful eye of an unhelpful interviewer. It ought to be embarrassing for both of them if the modern IDE available to the candidate is a whiteboard, and a selection of coloured markers for syntax highlighting.

But here’s the problem: consistently, throughout those decades and longer, recruiting managers who hire programmers have been beset by candidates who can’t program. It’s such a desirable career path that plenty of people will try to enter, even those who hope to pick up on whatever it is they’re supposed to do once they get the job. And, indeed, that can be a good way to learn: what is Pete McBreen’s “Software Craftsmanship” other than an imperative for on-the-job learning and mentoring?

Many companies don’t have the capacity or ability to bring a keen learner up from scratch, or are hiring into roles where they expect more familiarity with the skill. Thus, the uncomfortable truth: to fix programmer interviews, you first need to fix programmer screening. Demonstrate that all candidates coming through the door are capable programmers at the required level, and hirers no longer need to test their programming skills.

Note: or do they? Maybe someone can program, but uses techniques that the rest of the team consider to be unnatural. Or they work best solo/paired/in a mob, and the team works best paired/in a mob/solo. Still, let's roll with it: remove the need for a test in the interview by only interviewing candidates who would pass the test.

The problem is that every approach to screening for programming comes with its own downsides.

The economic approach, as currently practised: keep people away by making the career less desirable, by laying off hundreds of thousands of practitioners. The problem here is plunging many people into financial uncertainty, and reducing the psychological safety of anyone who does remain.

Moving the problem upstream: sending out pre-interview coding challenges. This suffers from many of the same problems as live coding, except that the candidate doesn’t have to meet the dull gaze of a bored interviewer, and the interviewer doesn’t know it was actually the candidate who completed the challenge. I suppose they could require the candidate to sign their submission, then share their key fingerprint in the interview. An additional problem is that the candidate needs time outside of the interview to complete the challenge, which can be difficult. Not as difficult as finding the time to:

Maintain a public portfolio. This biases towards people with plenty of spare time, or who get to publish their day-job work, or who at least aren’t bound by an agreement with their day-job employer not to work on outside projects.

Our last possibility is the most extreme: de-emphasise the importance of the programming skill, so that the reason employers don’t need to screen for it is that it’s less essential as a hiring criterion. This was tried before, with 1990s-style software engineering and particularly Computer-Aided Software Engineering (CASE). It didn’t get very far that time, but could do on a second go around.

Posted in whatevs | Leave a comment

On software engineering hermeneutics

When I use a word it means just what I choose it to mean — neither more nor less.

Humpty Dumpty in Through the Looking-Glass

In my recent round of TDD clarifications, one surprising experience is that folks out there don’t agree on the definition of TDD. I made it as clear as possible in my book. I thought it was clear. Nope. My bad.

Kent Beck in Canon TDD

I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.

Alan Kay in The Computer Revolution Hasn’t Happened Yet

I could provide many other examples, where a term was introduced to the software engineering state of the art meaning one thing, and ended up meaning “programming as it’s currently done, but with this small change that’s a readily-observable property of what the person who introduced the term described”. Off the cuff: “continuous integration” to mean “running automated checks on VCS branches”; “Devops” to mean “hiring Devops people”; “refactoring” to mean “editing”; “software engineering” to mean “programming”.

I could also provide examples where the dilution of the idea was accompanied by a dilution of the phrasing. Again, just typing the first ideas that come into my head: Free Software -> Open Source -> Source Available; various 1990s lightweight methodologies -> Agile Software Development -> Agile.

Researchers of institutions and their structures give us tools that help us understand what’s happening here. It isn’t that software engineers are particularly bad at understanding new ideas. It’s that software engineering organisations are set up to reproduce the ceremonies of software engineering, not to be efficient at producing software.

For an institution to thrive, it needs to be legitimate: that is, following the logic that the institution proposes needs to be a good choice out of the available choices. Being the rationally most effective or most efficient choice is one legitimising factor. So is being the thing that everybody else does; after all, it works for them, so why not for us? So is being the thing that we already do; after all, it got us this far, so why not further?

With these factors of legitimacy in mind, it’s easy to see how the above shifts in meaning can occur. Let’s take the TDD example. Canon TDD says to write a list of test scenarios; turn one item into a runnable test; change the code to make that test and all previous tests pass; optionally refactor to improve the design; then iterate from the second step.
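As a concrete illustration, here’s one turn around that loop in Python’s unittest, using a deliberately tiny, hypothetical leap-year example:

    import unittest

    # Step one: the list of test scenarios (a checklist, not code):
    #   [x] years divisible by 4 are leap years
    #   [ ] years divisible by 100 are not leap years
    #   [ ] years divisible by 400 are leap years

    # Step two: turn one item into a runnable test.
    class LeapYearTests(unittest.TestCase):
        def test_year_divisible_by_four_is_leap(self):
            self.assertTrue(is_leap(1996))

    # Step three: change the code so this test, and all previous tests, pass.
    def is_leap(year):
        return year % 4 == 0

    # Step four: optionally refactor. Then iterate from step two.
    if __name__ == "__main__":
        unittest.main()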

First person comes along, and has heard that maybe TDD is more effective (rational legitimacy). They decide to try it, but their team has heard “working software over comprehensive documentation” so they don’t want to embarrass themselves by writing a list of test scenarios (cognitive legitimacy). So they skip that step. They create a runnable test; change the code to make that test pass; optionally refactor. That works well! They share this workflow under the name TDD (Red-Green-Refactor).

Second person comes along, and has heard that the cool kids (Kent Beck and first person) are doing TDD, so they should probably do it too (normative legitimacy). They decide to try it, but they notice that if they write the code they want, then write the tests they want, they end up in the same place (they have code, and they have tests, and the tests pass) that Canon TDD and TDD (Red-Green-Refactor) end up in. So where’s the harm? Now they’re doing TDD too! They show their colleagues how easy it is.

Now everybody is doing a slightly different TDD, but it’s all TDD. Their descriptions of what they do construct the reality in which they’re doing TDD, which is an example of what the researchers call performative discourse. TDD itself has become ceremonial; the first and subsequent people are doing whatever they want to do and declaring it TDD because the legitimate thing to do is called TDD.

This does give those people who want to change software engineering some pointers on how to do it. Firstly, overshoot, because everybody’s going to meet you a short way along the path. Secondly, don’t only talk up the benefits of your proposed change, but the similarities with what people already do, to reduce the size of the gap. Thirdly, make sure that the likely partial adoptions of the change are improvements over the status quo ante. Fourthly, don’t get too attached to the words you use and your choice of their meanings: they mean just what anybody chooses them to mean—no more and no less.

Posted in philosophy after a fashion, social-science, software-engineering | Leave a comment

On rational myths

In my research field, one characteristic of institutions is their “rational myths”: ideas that people tell each other are true, and believe are true, but which are under-explored, unverified, and under-challenged. Belief in these myths leads to supposedly rational actions that don’t necessarily improve efficiency or performance, but are done because everyone else does them, and everyone collectively believes they’re what one does.

We know, from Derek Jones’s Evidence-based software engineering, that what we know about software engineering is not very much. So what are the rational myths where you work? Do you recognise them? Could you change them? What would it take to support or undermine your community’s rational myths, and would you want to take that risk?

Posted in academia, social-science, software-engineering | 4 Comments

We shall return one day

On this day 80 years ago, 16th November 1943, the villagers of Tyneham near Lulworth were evacuated to allow Allied military forces to prepare for D-Day. Despite promises that the evacuation was temporary, the UK lurched directly from the second world war into the Cold War and decided to keep the land to practise against the new “enemy” and former ally, the Soviet Union. Tyneham remains uninhabited, and remains within a live firing range. People may only visit when the Ministry of Defence are ready for them.

In a time when people are still being displaced by war across the world, we remember the villagers of Tyneham, and an occasion when the country displaced its own citizens. The ten tracks on this album contain music, song, and storytelling from around Dorset. With no voices left in Tyneham, all parts are performed by the same person, but throughout we hear the message from the locals: “We shall return one day”.

Listen here: https://soundcloud.com/user-343604096/sets/we-shall-return-one-day

Posted in music | 1 Comment

In which things are given names

I recently joined in a very interesting discussion with some of my peers on the thorny subject of naming variables in programs. The core question was whether it’s OK to give a variable a temporary name while you’re still working out what it’s for, or whether you should pause and think out what it’s called before you move on.

The question raises other questions, and those are much more interesting to consider. For example, there’s an aphorism in computing that naming things is one of the hardest problems we have. That isn’t true. We’ve been naming things for 60,000-100,000 years, and writing down names for things for 5,000 years.

If you know about the thing you’re naming, and you know what the name should convey, then naming things is easy. For example, this blog post is on the topic of naming things, and communicating the topic is an important part of the title, so calling the post “On naming things” was very easy. Then, because I’m a comedian, I decided to go back and use a different name.

If naming something is hard, either we don’t know something about it, or we don’t know something about communicating that knowledge. The second of those is usually much simpler than the first, when it comes to variable names. The variable name is only used by other programmers: either inspecting the program text, or understanding dynamic behaviour in a debugger. The variable represents a snapshot of a part of the program state, and its name should communicate how that snapshot contributes to the valuable computation the program encapsulates.

It’s therefore likely that when we struggle to name a variable, it’s not because we haven’t identified the audience. It’s because we haven’t identified what the variable is, or what it’s for.

In most software design methodologies, we derive the existence of variables from aspects of the design. In the incremental refinement approaches described by people like Tony Hoare and Niklaus Wirth, as we refine a specification we identify the invariants that hold at each level, and the variables we need to preserve those invariants.

In DeMarco’s structured analysis, we design our systems by mapping the data flow: our variables hold those data and enable their transformation.

In Object-Oriented analysis and design, we design objects that take on particular roles in an interaction that models the domain problem. Our variables represent the responsibilities and collaborators known to each object.

In Test-Driven Development, we identify a desirable change in the system’s behaviour, and then enact that change. Our variables represent contributions to that behaviour.

It’s likely that when we can’t name a variable, it’s because we haven’t designed enough to justify introducing the variable yet.

As a specific example, if we’re thinking about an algorithm in a process-centric manner, we might have a detailed view of the first few steps, and introduce a variable that stores the outcome of those steps to use as input in the subsequent steps. In such a case, the variable doesn’t represent anything in the solution model, and is going to be hard to name. It represents “where I got to in the design before I started typing”, which isn’t a useful variable name. The solution in this case is neither to come up with a good name, nor to drop in a temporary name and move on. The solution is to remove the variable, and go back to designing the rest of the algorithm.
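A contrived Python sketch of that smell, with hypothetical names:

    # Designed only as far as the first few steps: this variable records
    # "where I got to in the design", which is why it resists naming.
    def normalise(scores):
        intermediate = [s for s in scores if s is not None]
        ...  # now what? The design stopped here, and so did the name.

    # Design the whole algorithm first, and each variable earns its name
    # from what it represents in the solution:
    def normalise_scores(scores):
        known = [s for s in scores if s is not None]  # the scores we can use
        total = sum(known)                            # the normalising constant
        return [s / total for s in known]             # each score as a share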

Posted in code-level | 1 Comment

I’ve vastly misunderstood the Single Responsibility Principle

So have a lot of other people; my understanding of it accords with much writing I’ve seen about the principle on the web, and elsewhere. For example, in Michael Feathers’ “Working Effectively with Legacy Code”:

Every class should have a single responsibility: It should have a single purpose in the system, and there should be only one reason to change it.

Michael Feathers, Working Effectively with Legacy Code (p.246)

I came to question that understanding today when I read Parnas’s article On the Criteria to be Used in Decomposing Systems into Modules. His top criterion is decision hiding: everywhere you make a choice about the system implementation, put that choice into a module with an interface that doesn’t expose the choice made.
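For instance, here’s a minimal sketch of decision hiding in Python (class and names hypothetical): the decision “records are stored as JSON on disk” is made in exactly one place, and nothing in the interface betrays it.

    import json

    class RecordStore:
        """Saves and loads records. The interface exposes what clients
        need, not the decision about how records are represented."""

        def __init__(self, path):
            self._path = path

        def save(self, records):
            # The hidden decision lives here; switching to CSV, SQLite,
            # or anything else changes no client code.
            with open(self._path, "w") as f:
                json.dump(records, f)

        def load(self):
            with open(self._path) as f:
                return json.load(f)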

If you combine that principle with my incorrect SRP (“one reason to change per module”), you get the absurd situation that each module may contain only one change point. In other words, each bit of software architecture information must exist in a separate module.

So, I went back to my understanding of the SRP. I found that it was flawed, and that the person who coined the phrase Single Responsibility Principle (Robert Martin) even said as much. He said it a long time ago, but years after the incorrect interpretation had got its shoes on and run twice around the world.

When you write a software module, you want to make sure that when changes are requested, those changes can only originate from a single person, or rather, a single tightly coupled group of people representing a single narrowly defined business function.

Robert Martin, The Single Responsibility Principle

So the principle is that the module’s behaviour is the responsibility of a single actor. It’s not that the module has a single reason to change, but that a single entity will request the changes. This is much easier to resolve alongside Parnas’s version of modularity.
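Put another way, the question to ask of a module is not “how many reasons does it have to change?” but “how many masters does it serve?”. A sketch of the distinction, using the kind of Employee example often reached for in discussions of Martin’s formulation (names hypothetical):

    # Three methods, three actors: finance requests changes to pay
    # calculation, operations to hours reporting, and the DBA team to
    # persistence. Under actor-based SRP this class serves three masters,
    # and those responsibilities belong in separate modules, however few
    # "reasons to change" each method has on its own.
    class Employee:
        def calculate_pay(self): ...   # answers to finance
        def report_hours(self): ...    # answers to operations
        def save(self): ...            # answers to the database administrators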

This isn’t some new deep revelation or hidden insight, by the way. That post by Martin is referenced on the wikipedia entry for SRP, which states the true (well, the as-given) definition of the principle. The fact that I and so many others can hold a completely different view, for so long, in the face of such obvious contradictory evidence, tells us something about knowledge transfer in software engineering that we probably ought to attend to.

Posted in architecture of sorts | Tagged | 7 Comments

Programming, language

Programming languages represent two things: programming, and language.

Programming languages were previously designed very much with the former in mind. For Algol-style imperative languages, design followed one of a few mathematically-led approaches:

  • Denotational semantics: encourages a designer to identify a mathematical structure that correctly expresses transformations that programs in the desired language should represent, and design the language such that each operation realises a transformation in this structure (see the sketch after this list).
  • Axiomatic semantics: encourages a designer to design operations that transform program state, and again to design the language so that it represents combinations of those operations.
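To make the denotational approach concrete, here’s a toy sketch in Python: each form in a tiny expression language denotes a function from environments to numbers, and composing the syntax composes those functions.

    # Each constructor returns the *meaning* of a phrase: a function from
    # an environment (a dict of variable bindings) to a number.
    def lit(n):      return lambda env: n
    def var(name):   return lambda env: env[name]
    def add(d1, d2): return lambda env: d1(env) + d2(env)
    def mul(d1, d2): return lambda env: d1(env) * d2(env)

    # The meaning of "x * (x + 1)" is built compositionally from its parts:
    meaning = mul(var("x"), add(var("x"), lit(1)))
    print(meaning({"x": 3}))   # 12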

Other semantics are available: for example if you operationalise the lambda calculus you end up with LISP, and if you operationalise the pi calculus you find occam-π. Indeed, when Backus complained about the imperative programming style in Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs he wasn’t asking us to give up on a mathematical basis for programming languages.

Quite the reverse: he thought the denotational or axiomatic bases were too complex for programmers who weren’t also expert mathematicians to grasp, and that languages designed that way, with their word-at-a-time transformations of memory, led to “flabby” programs that don’t compose well. He called for finding a different mathematical basis for programming, using the composition of functions as a (relatively, much simpler) starting point but incorporating the history sensitivity required to permit stateful operations.
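A minimal sketch of that spirit in Python: a program built by combining whole functions, with no word-at-a-time updates in sight.

    from functools import reduce

    def compose(*fs):
        # Right-to-left composition: compose(f, g)(x) == f(g(x)).
        return reduce(lambda f, g: lambda x: f(g(x)), fs)

    # sum-of-squares as an algebraic combination of whole-value functions:
    square_all = lambda xs: [x * x for x in xs]
    sum_of_squares = compose(sum, square_all)
    print(sum_of_squares([1, 2, 3]))   # 14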

So much for programming. On the other hand, programming languages are also languages: constructions for communicating information between people, and between people and computers. Programming languages must be able to carry the information that people want to convey: and if they want to convey it to the computer, that information can’t reside in a comment.

Thus we get approaches to programming language design that ask people what they want to say, and how they want to say it. At one extreme, almost nihilist in its basis, is the Perl-style postmodernist philosophy: it’s not up to the language designer to constrain expression so the language gives you all the tools to say anything, however you want.

More common are varying degrees of participatory process, in which people who use the language collaborate on designing new features for the language. We could identify multiple forms of organisation, of which these are a few examples:

  • Jurocracy: rule of law. People submit requests to a central committee, who then decide what to accept and produce new versions of the language.
  • Tyranny: rule of the one or the few. Whatever happens within the community, an individual or controlling group directs the language the way they want.
  • Megalofonocracy: rule of the loud voices. People submit requests to a notice board, and whichever ones get noticed, get implemented.

There are other structures within this region.

Both approaches have their merits, and address different needs that should both be reflected in the resultant programming languages. A language with no mathematical basis offers no confirmation that constructs are valid or correct, so may not represent programming. Programming with no agreed-upon vocabulary offers no confirmation that constructs are understood correctly by machine or human audiences, so may not represent language.

Unfortunately it may be the case that we previously went through a fashion for mostly-semantic programming language design, and are currently experiencing a fashion for mostly-linguistic programming language design.

Posted in tool-support | Leave a comment