Your reminder that “British English” and “American English” are fictional constructs

Low-stakes conspiracy theory: they were invented by word processing marketers to justify spell-check features that weren’t necessary.

Evidence: in the Oxford English Dictionary (Oxford being in Britain), the first sense given for the "-ise" suffix is "A frequent spelling of -ize suffix, suffix forming verbs, which see." So in a British dictionary, -ize is preferred. But on a computer, I have to change my whole hecking country to be able to write that!

Posted in Englisc

On Scarcity

It’s called scarcity, and we can’t wait to see what you do with it.

Let's start with the important bit. I think that over the last year, with acceleration toward the end of the year, I have heard of over 100,000 software engineers losing their jobs in some way. This is a tragedy. Each one of those people is a person whose livelihood is at the whim of some capricious capitalist or board of same. Some had families, some were working their dream jobs, others had quiet-quit and were just paying the bills. Each one of them was let down by a system that values the line going up more than it values their families, their dreams, and their bills.

While I am sad for those people, I am excited for the changes in software engineering that will come in the next decade. Why? Because everything I like about computers came from a place of scarcity in computering, and everything I dislike about computers came from a place of abundance in computering.

The old, waterfall-but-not-quite, measure-twice-and-cut-once approach to project management came from a place of abundance. It’s cheaper, so the idea goes, to have a department of developers sitting around waiting for a functional specification to be completed and signed off by senior management than for them to be writing working software: what if they get it wrong?

The team at Xerox PARC – 50 folks who were just told to get on with it – designed a way of thinking about computers that meant a single child (or, even better, a small group of children) could think about a problem and solve it in a computer themselves. Some of those 50 people also designed the computer they’d do it on, alongside a network and some peripherals.

This begat eXtreme Programming, which burst onto the scene in a time of scarcity (the original .com crash). People had been doing it for a while, but when everyone else ran out of money they started to listen: a small team of maybe 10 folks, left to get on with it, were running rings around departments of 200 people.

Speaking of the .com crash: that was when everyone realised how expensive those Oracle and Solaris licences were. Especially if you compared them with the zero charged for GNU, Linux, and MySQL. The LAMP stack – the beginning of mainstream adoption for GNU and free software in general – is a software scarcity feature.

One of the early (earlier than the .com crash) wins for GNU and the Free Software Foundation was getting NeXT to open up their Objective-C compiler. NeXT was a small team taking off-the-shelf and free components, building a system that rivalled anything Microsoft, AT&T, HP, IBM, Sun, or Digital were doing – and that outlived almost all of them. Remember that the NeXT CEO wouldn’t become a billionaire until his other company released Toy Story, and that NeXT not only did the above, but also defined the first wave of dynamic websites and e-commerce: the best web technology was scarcity web technology.

What's happened since those stories were enacted is that computerists have collectively forgotten how scarcity breeds innovation. You don't need to know how 10 folks round a whiteboard can outsmart a 200-engineer department if your department hired 200 engineers _this month_: just put half of them on solving your problems, and half of them on the problems caused by the first half.

Thus we get SAFe and Scrumbut: frameworks for paying lip service to agile development while making sure that each group of 10 folks doesn’t do anything that wasn’t signed off across the other 350 groups of 10 folks.

Thus we get software engineering practices designed to make it easier to add code than to read it: what’s the point of reading the existing code if the one person who wrote it has already been replaced 175 times over, and has moved teams twice?

Thus we get not DevOps, but the DevOps department: why get your developers and ops folks to talk to each other if it’s cheaper to just hire another 200 folks to sit between them?

Thus we get the npm ecosystem: what’s the point of understanding your code if it’s cheaper just to randomly import somebody else’s and hire a team of 30 to deal with the fallout?

Thus we get corporate open source: what’s the point of software freedom when you can hire 100 people to push out code that doesn’t fulfil your needs but makes it easier to hire the next 500 people?

I am sad for the many people whose lives have been upended by the current downturn in the computering economy, but I am also sad for how little gets done within that economy. I look forward to the coming wave of innovation, and the ability to once again do more with less.

Posted in software-engineering

Transcendence

I was at the RSE conference in Newcastle, along with many people whom I have met, worked with, and enjoyed talking to in the past. Many more people whom I have met, worked with, and enjoyed talking to in the past were at an entirely different conference in Aberystwyth, and I am disappointed to have missed out there.

One of the keynote speakers at RSEcon22, Marlene Mhangami, talked about the idea of transcendence through community membership. They cited evidence that fans of soccer teams go through the same hormonal shifts, at the same intensity, during the match as the players themselves. Effectively the fans are on the pitch, playing the game, feeling the same feelings as their comrades on the team, even though they are in the stands or even at home watching on TV.

I do not know that I have felt that sense of transcendence. I believe I am probably missing out both on strong emotional connections with others and on an ability to contribute effectively to society (to a society, to any society), because I lack the strong motivation that comes from knowing that making other people happier makes me happier, because I am with them.


The Image Model

I was reflecting on things that I know now, a couple of decades into my career, that I wish I had been told at the beginning. Many things came to mind, but the most immediate from a technological perspective was Smalltalk's image model.

It’s not even the technology of the Smalltalk image that’s relevant, but the model of thinking that works well with it. In Smalltalk, there are two (three) important files for a given machine: the VM is the machine that can run Smalltalk; the image is a snapshot of all of the Smalltalk objects on the machine(; and the sources are the source code for the classes and methods in that image).

This has weird implications for how you work that differ greatly from "compile this text stream" or "interpret this text stream" programming environments. People who have used the ENVY/Developer tool generally seem to wax lyrical and wonder why it was never reinvented, like the rest of software engineering is the beach with the ruins of the Statue of Liberty poking out of the sand at the end of Planet of the Apes. But the bit I wish I had been told about: the image model puts the "personal" in "personal computer" as far as programming is concerned. Every piece of software you write is part of your image: a peer of the rest of the software you wrote, of the software that other people wrote that you added, and of the software that was already there when you first booted the machine.

I wish I had been told to think like that: that each tool or project is not a separate tool or project, but a cumulative addition to the image. To keep everything I wrote, so that the next time I needed something I might not need to write it. To make sure, when using new things, that I could integrate them with the image (it didn't exist at the time, but TruffleSqueak is very much this idea). To give up asking "how can I write software to solve this problem?", and to start asking "how can I solve this problem with software, writing some if necessary?"
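
For a rough flavour of that mindset in a more mainstream language, here is a minimal Python sketch (hypothetical file and names, my analogy rather than anything from Smalltalk itself): the whole workspace of objects is snapshotted at the end of a session and restored at the start of the next, so nothing you build is thrown away.

    import pickle
    from pathlib import Path

    IMAGE = Path("workspace.pkl")  # hypothetical "image" file for this sketch

    def load_image():
        """Restore every object saved by the previous session, or start fresh."""
        if IMAGE.exists():
            return pickle.loads(IMAGE.read_bytes())
        return {}

    def save_image(workspace):
        """Snapshot the entire workspace, loosely like saving a Smalltalk image."""
        IMAGE.write_bytes(pickle.dumps(workspace))

    workspace = load_image()
    # Each session adds to the accumulated workspace instead of starting over:
    workspace.setdefault("tools", []).append("word-counter")
    save_image(workspace)

The analogy is deliberately weak: a real image also carries the classes, the compiler, and the development tools themselves, which is exactly what makes it cumulative.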

It would be the difference between twenty years of experience and one year of experience, twenty times over.

Posted in advancement of the self, smalltalk

Phrases in computing that might need retiring

The upcoming issue of the SICPers newsletter is all about phrases that were introduced to computing to mean one thing, but seem to get used in practice to mean another. This annoys purists, pedants, and historians: it also annoys the kind of software engineer who dives into the literature to see how ideas were discussed and used and finds that the discussions and usages were about something entirely different.

So should we just abandon all technical terminology in computing? Maybe. Here’s an irreverent guide.

Object-Oriented Programming

Luckily the industry doesn’t really use this term any more so we can ignore the changed meaning. The small club of people who still care can use it correctly, everybody else can carry on not using it. Just be aware when diving through the history books that it might mean “extreme late binding of all things” or it might mean “modules, but using the word class” depending on the age of the text.

Agile

Nope, this one’s in the bin, I’m afraid. It used to mean “not waterfall” and now means “waterfall with a status meeting every day and an internal demo every two weeks”. We have to find a new way to discuss the idea that maybe we focus on the working software and not on the organisational bureaucracy, and that way does not involve the word…

DevOps

If you can hire a “DevOps engineer” to fulfil a specific role on a software team then we have all lost at using the phrase DevOps.

Artificial Intelligence

This one used to mean "psychologist/neuroscientist developing computer models to understand how intelligence works" and now means "an algorithm pushed to production by a programmer who doesn't understand it". But you have to be aware of the potential for confusion with the minor but common usage "actually a collection of if statements, but last I checked AI wasn't a protected term". Probably OK; in fact you should use it more in your next grant bid.

Technical Debt

Previously something very specific used in the context of financial technology development. Now means whatever anybody needs it to mean if they want their product owner to let them do some hobbyist programming on their line-of-business software, or else. Can definitely be retired.

Behaviour-Driven Development

Was originally the idea that maybe the things your software does should depend on the things the customers want it to do. Now means automated tests with some particular syntax. We need a different term to suggest that maybe the things your software does should depend on the things the customers want it to do, but I think we can carry on using BDD in the “I wrote some tests at some point!” sense.

Reasoning About Software

Definitely another one for the bin. If Tony Hoare were not alive today he would be turning in his grave.

Posted in whatevs

On entering programming in 2022

I recently taught an introduction to Python course, to final-year undergraduate students. These students had little to zero programming experience, and were all expected to get set up with Python (using the Anaconda environment, which we had determined to be the easiest way to get a reasonable baseline configuration) on laptops they had brought themselves.

What follows is not a slight on these people, who were all motivated, intelligent, and capable. It is a slight on the world of programming in the current age, if you are seeking to get started with putting a general-purpose computer to your own purposes and merely own a general-purpose computer.

One person had a laptop that, being a mere six (6) years old, was too old to run the current version of Anaconda Distribution. We had to crawl through the archives, guessing what older version might work (i.e. might both run on their computer and still permit them to follow the course).

Another had a modern laptop and the same version of Python/tools that everyone else was using, except that their IDE would crash if they tried to plot a graph in dark mode.

Another had, seemingly without having launched the shell while they owned their computer, got their profile into a state where none of the system binary folders were on their PATH. Hmm, python3 doesn’t work, let’s use which python to find out why not. Hmm, which doesn’t work, let’s use ls to find out why not. Hmmm…

Many, through not having used terminal emulators before, did not yet know that terminal emulators are modal. There are shell commands, which you must type when you see a $ (or a % or a >) and will not work when you can see a >>>. There are Python commands, which are the other way around. If you type a command that launches nano/pico, there are other rules.

By the way, conda and pip (and poetry, if you try to read anything online about setting up Python) are Python things but you cannot use them as Python commands. They are shell commands.

By the other way, everyone writes those shell commands with a $ at the front. You do not write the $. Oh, and by the other other way: they don’t necessarily tell you to open the Terminal to do it.
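
A concrete session, roughly what a newcomer sees (the package name is arbitrary, and the exact error text varies by Python version):

    $ pip install requests        # shell prompt: a shell command (don't type the $)
    $ python3                     # a shell command that launches the Python interpreter
    >>> import requests           # Python prompt: a Python statement
    >>> pip install requests      # wrong mode: pip is a shell command, not Python
    SyntaxError: invalid syntax
    >>> exit()                    # leaves Python, back to the shell
    $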

Different environments—the shell, Visual Studio Code, Spyder, PyCharm—will do different things with respect to your "current working directory" when you run a script. They will not tell you that they have done this, nor that it is important, nor that it is why your script can't find a data file that's RIGHT THERE.
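
One defensive habit worth teaching on day one (a general pattern, not something any of these environments prescribes): resolve data files relative to the script itself rather than the current working directory. The file name below is a stand-in.

    from pathlib import Path

    # The directory containing this script, wherever it was launched from
    HERE = Path(__file__).resolve().parent

    data_file = HERE / "data.csv"   # hypothetical data file sitting next to the script
    print(data_file.read_text())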

This is all way before we get to the dark art of comprehending exception traces.

When I were a lad and Silicon Valley were all fields, you turned a computer on and it was ready for some programming. I'm not suggesting returning to that time: computers were useless then. But I do think it is needlessly difficult to get started with "a programming language that lets you work quickly" in this time of ubiquitous programs.

Posted in edjercashun

Episode 54: professionalism and responsibility

This episode explores the idea that increased autonomy and privilege for software engineers can only come when we have better confidence that software engineers are working in the best interests of society.

Thanks for listening! Don’t forget to check out my mailing list for fortnightly info on how programmers can become software engineers.


More detail on software requirements

My talk at AppDevCon discussed the Requirements Trifecta but turned it into a Quadrinella: you need leadership vision, market feedback, and technical reality to all line up as listed in the trifecta, but I’ve since added a fourth component. You also need to be able to tell the people who might be interested in paying for this thing that you have it and it might be worth paying for. If you don’t have that then, if anybody has heard of you at all, it will be as a company that went out of business with a product “five years ahead of its time”: you were able to build it, it did something people could benefit from, in an innovative way, but nobody realised that they needed it.

A 45 minute presentation was enough to introduce that framework and describe it, but not to go into any detail. For example we say that we need “market feedback”, i.e. to know that the thing we are going to build is something that some customer somewhere will actually want. But how do we find out what they want? The simple answer, “we ask them”, turns out to uncover some surprising complexity and nuance.

At one end, you have the problem of mass-market scale: how do you ask a billion people what they want? It’s very expensive to do, and even more expensive to collate and analyse those billion answers. We can take some simplifying steps that reduce the cost and complexity, in return for finding less out. We can sample the population: instead of asking a billion people what they think, we can ask ten thousand people what they think and apply what we learn to all billion people.
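
To put a number on the trade (my arithmetic, not a claim from the talk): with 10,000 unbiased responses, a simple yes/no proportion already comes with a margin of error of about one percentage point, however big the underlying population.

    import math

    n = 10_000   # sample size
    p = 0.5      # worst-case observed proportion, which gives the widest interval

    # Normal-approximation 95% confidence interval for a proportion
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"±{margin:.1%}")   # about ±1.0%, independent of the billion-person population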

We have to know that the way in which we select those 10,000 people is unbiased, otherwise we’re building for an exclusive portion of the target billion. Send a survey to people’s work email addresses on a Friday, and some will not pick it up until Sunday as their weekend is Fri-Sat. Others will be on holiday, or not checking their email that day, or feeling grumpy and inclined to answer with the opposite of their real thoughts, or getting everything done quickly before the weekend and disinclined to think about your questions at all.

Another technique we use is to simplify the questions—or at least the answers we’ll accept to those questions, to make it easier to combine and aggregate those answers. Now we have not asked “what do you think about this” at all; we have asked “which of these ways in which you might think about this do you agree with?” Because people are inclined to avoid conflict, they tend to agree with us. Ask “to what extent do you agree that spline reticulation is the biggest automation opportunity in widget frobnication?” and you’ll learn something different from the person who asked “to what extent do you agree that spline reticulation is the least important automation opportunity in widget frobnication?”

We’ll get richer information from deeper, qualitative interactions with people, and that tends to mean working with fewer people. At the extreme small end we have one person: an agent talks to their client about what that client would like to see. This is quite an easy case to deal with, because you have exactly one viewpoint to interpret.

Of course, that viewpoint could well be inconsistent. Someone can tell you that they get a lot of independence in how they work, then in describing their tasks list all the approvals and sign-offs they have to get. It can also be incomplete. A manager might not fully know all of the details of the work their reports do; someone may know their own work very well but not the full context of the process in which that work occurs. Additionally, someone may not think to tell you everything about their situation: many activities rely on tacit knowledge that’s hard to teach and hard to explain. So maybe we watch them work, rather than asking them how they work. Now, are they doing what they’re doing because that’s how they work, or because that’s how they behave when they’re being watched?

Their viewpoint could also be less than completely relevant: maybe the client is the person paying for the software, but are they the people who are going to use it? Or going to be impacted by the software's outputs and decisions? I used the example in the talk of expenses software: very few people, when asked "what's the best software you've ever used?", come up with the tool they use to submit receipts for expense claims. That's because it's written for the accounting department, not for the workers spending their own money.

So, we think to involve more people. Maybe we add people’s managers, or reports, or colleagues, from their own and from other departments. Or their customers, or suppliers. Now, how do we deal with all of these people? If we interview them each individually, then how do we resolve contradiction in the information they tell us? If we bring them together in a workshop or focus group, we potentially allow those contradictions to be explored and resolved by the group. But potentially they cause conflict. Or don’t get brought up at all, because the politics of the situation lead to one person becoming the “spokesperson” for their whole team, or the whole group.

People often think of the productiveness of a software team as the flow from a story being identified as “to do” to working software being released to production. I contend that many of the interesting and important decisions relating to the value and success of the software were made before that limited part of the process.

Posted in software-engineering

Why mock objects aren’t popular this week

The field of software engineering doesn’t change particularly quickly. Tastes in software engineering change all the time: keeping up with them can quickly result in seasickness or even whiplash. For example, at the moment it’s popular to want to do server-side rendering of front end applications, and unpopular to do single-page web apps. Those of us who learned the LAMP stack or WebObjects are back in fashion without having to lift a finger!

Currently it's fashionable to restate "don't mock an interface you don't own" as the more prescriptive, taste-driven statement "mocks are bad". Rather than change my practice (my 2014 position, "I use mocks and I'm happy with that", still stands), I'll ask why this particular taste has arisen.

Mock objects let you focus on the ma, the interstices between objects. You can say “when my case controller receives this filter query, it asks the case store for cases satisfying this predicate”. You’re designing a conversation between independent programs, making restrictions about the messages they use to communicate.
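
Sketched with Python's unittest.mock (the CaseController and case-store names are hypothetical, standing in for whatever your domain calls them), that interaction-style test looks something like this:

    from unittest.mock import Mock

    class CaseController:
        """Hypothetical controller that delegates filtering to a case store."""
        def __init__(self, store):
            self.store = store

        def filter_cases(self, query):
            def predicate(case):
                return query.lower() in case["title"].lower()
            return self.store.cases_matching(predicate)

    def test_controller_asks_store_for_matching_cases():
        store = Mock()
        controller = CaseController(store)

        controller.filter_cases("fraud")

        # The assertion is about the conversation between the objects,
        # not about what the store would actually return.
        store.cases_matching.assert_called_once()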

But many people don’t think about software that way, and so don’t design software that way either. They think about software as a system that holistically implements a goal. They want to say “when my case controller receives this filter query, it returns a 200 status and the JSON representation of cases matching that query”. Now, the mocks disappear, because you don’t design how the controller talks to the store, you design the outcome of the request which may well include whatever behaviour the store implements.
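
The holistic version of the same requirement (reusing the CaseController sketch above, with a small real store instead of a mock) asserts on the outcome, and the collaborator's behaviour participates in the test:

    class InMemoryCaseStore:
        """A real, if tiny, collaborator: its behaviour is part of the test."""
        def __init__(self, cases):
            self.cases = cases

        def cases_matching(self, predicate):
            return [case for case in self.cases if predicate(case)]

    def test_filter_query_returns_matching_cases():
        store = InMemoryCaseStore([{"title": "Fraud case"}, {"title": "Parking"}])
        controller = CaseController(store)

        result = controller.filter_cases("fraud")

        # The assertion is about the result; how the controller and the
        # store talked to each other is incidental.
        assert result == [{"title": "Fraud case"}]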

Of course, tests depending on the specific behaviour of collaborators are more fragile, and the more specific prescription "don't mock what you don't control" is motivated by that fragility: if the behaviour of the thing you don't control changes, you won't notice, because your mock carries on working the way it always did.

That problem is only a problem if you don’t have any other method of auditing your dependencies for fitness for purpose. If you’re relying on some other interface working in a particular way then you should probably also have contract tests, acceptance tests, or some other mechanism to verify that it does indeed work in that way. That would be independent of whether your reliance is captured in tests that use mock objects or some other design.

It’ll only be a short while before mock objects are cool again. Until then, this was an interesting diversion.

Posted in TDD

Bizarrely, the Guinness Book of World Records lists the "first microcomputer" as 1980's Xenix. This doesn't seem right to me:

  1. Xenix is an operating system, not a microcomputer.
  2. Xenix was announced in 1980 but not shipped until 1981.
  3. The first computer to be designed around a microprocessor is also the first computer to be described in patents and marketing materials as a “microcomputer”—the Micral N.
Posted by Graham