The missing principle in agile software development

The biggest thing missing from the manifesto for agile software development, and the principles behind it, is any mention of anyone other than the makers and their customer. We get autonomous, self-organising delivery teams, but without the sense of responsibility to a broader society that one would expect from autonomous professional agents.

Therefore it’s no surprise to find developers working to turn their colleagues into a below-minimum-wage precariat; to rig elections at scale; or to implement family separation policies or travel bans on religious minorities. A principled agile software developer only needs to ask “how can I deliver this, early and continuously, to the customer?” and “how can we adjust our behaviour to become more effective at this?”: they do not need to ask “is this a good idea?” or “should this be delivered at all?”

Principle Zero ought to read something like this.

We build software that transforms the workplace, leisure, interpersonal interaction, and society at large: we do so in consultation with as broad a representation of those interests as possible and ensure that our software is beneficial to those whose lives are being transformed. Within that context, we follow these principles:

Episode 41: Professional Software

We talk about software engineering as a profession.

Episode 40: Falsehoods Programmers Believe About Computer Programs

This episode is about truisms that aren’t, in the world of the computer. I’ve already written an article, falsehoods programmers believe about programming, on a similar topic, but in this episode I go into way more depth on the counter-examples to one falsehood, rather than trying to supply a bulleted list of many falsehoods.

Other “falsehoods programmers believe” articles include falsehoods programmers believe about time (I said dates on the recording), and falsehoods programmers believe about addresses.

This episode’s falsehood is that go to statements are harmful, based on the assertion by Dijkstra (or was it?), Go To Statement Considered Harmful. Along the way we meet A Discipline of Programming, and the Psychological Study of Programming.

I also mention that you can support the podcast!

Apple and Bug Bounties

I know that there are bigger problems to discuss about Apple’s approach to business and partnerships at the mo, but their handling of security researchers seems particularly cynical and hypocritical. See, for example, this post about four reported iPhone 0days that went ignored and the nine other cases linked in that article.

Apple advertise themselves as the privacy company. By this, they really mean that their products are designed to share as much of your data with Apple as they are comfortable with, and that beyond that you should probably assume that nobody else is involved. But their security development lifecycle tells another story.

“Wait, did you just pivot from talking about privacy to security?” No! You can have security without privacy: this blog has that, at first glance. All of the posts and pages are public, anybody can read them, but I want to make sure that what you read is actually what I wrote (the integrity of the posts) and that nothing stops you from reading it when you want (the availability). On closer examination, I also care that there are things you don’t have access to: any of the account passwords, configuration settings, draft posts, etc. So in fact the blog has privacy requirements too, and those are handled in security by considering and protecting the confidentiality of those private assets. You can have security without privacy, but not privacy without security.

Something, I’m not sure what from the outside, is wrong with the security development lifecycle at Apple. As a privacy-focused company they should also be a security-focused company, but they evidently never had the same “trustworthy computing” moment that Microsoft did. I’m not going to do any kind of deep dive into CVE counts here, just provide the following high-level support for the case that Apple is, at best, doing no better than anybody else in the industry at this.

Meanwhile, they fail to acknowledge external contributors to their product security, do not pay out agreed bounties, and sue security researchers or ban them from their store. Apple say that the bounty program doubled between 2019 and 2020 and continues to grow. On that evidence, maybe they aren’t doing any better than the rest of the industry, but they certainly aren’t doing any worse. Yet at every new product announcement, senior managers at Apple, up to their CEO, tell everyone how great they are at privacy. The intent is that people believe Apple are doing the best at this, when they are around the middle of the pack. That is disingenuous.

A bug bounty program is a security process of last resort. You didn’t design the flaws out of your product, or find and fix them during implementation, before it got to customers and attackers: that happens, and it’s fine, but these escapee threats that are realised as vulnerabilities should be a small fraction of the total possible problems, and the lower-severity ones at that. You also didn’t detect them yourself once the customers and attackers had access to the product: that also happens and is fine, but again the vulnerabilities that escape your detection should be the lowest-severity ones. Once someone else is discovering your vulnerabilities, the correct thing to do is to say “thank you for bringing this to our attention before exploiting it yourself; here is compensation for the time and work you put into making our product better.”

Apple is not doing this. As the stories linked above show, they are leaving security researchers with a bitter taste, questioning whether they would want to work with Apple again, while not doing the heavy lifting to ensure their SDLC catches the highest-severity problems on campus, before or after release. I don’t know what is at fault here, but I expect it’s systemic rather than any individual leader, department, or activity. The product security folks at Apple are good at their jobs, the software engineers are good at their jobs…and yet here we are.

I suspect a certain amount of large-company effect is at play. “As Tim told you, our products are best in class for privacy,” says anonymous and fictional somewhat high up marketing person, “and if you had any specific complaint I couldn’t hear it over all the high-volume stock cash register sound effects we play in the board room to represent our success in the marketplace.”

In which I misunderstood Objective-C

I was having a think about this short history of Objective-C, and it occurred to me that perhaps I had been thinking about ObjC wrong. Now, I realise that by thinking about ObjC at all I mark myself out as a bit of an oddball, but I do it a lot. I co-host the [objc retain]; stream with Steven Baker, discussing cross-platform free software Objective-C every week. Hell of a time to realise I’ve been doing it wrong.

My current thinking is that the idea of ObjC is not to write “apps” in ObjC, or even in successor languages (sorry, fans of successor languages). Summed up in that history are Brad Cox’s views, which you can read at more length in his books. I’ve at least tangentially covered each book here: Object-Oriented Programming: an Evolutionary Approach and Superdistribution: Objects as Property on the Electronic Frontier. In these he talks about Object-Oriented Programming as the “software industrial revolution”, in which the each-one-is-bespoke way of writing software from artisanally-selected ones and lightly-sparkling zeroes is replaced with a catalogue of re-usable parts, called Software ICs (integrated circuits). As an integrator, I might take the “window” IC, the “button” IC, the “text field” IC, and a “data store” IC and make a board for entering expenses.

So far, so npm. The key bit is the next bit. As a computer owner, you might take that board and integrate it into your computer so that you can do your home finances, or so that you can submit your business expense claims, or so that your characters in The Sims can claim for their CPU time, or all three of those things. The key is that this isn’t some app developer: this is the person whose computer it is.

From that perspective, Objective-C is an intermediary tool, and not a particularly important or long-lasting one. Its job is to turn legacy code into objects, so that people who make their computers do things by sticking objects together can get at it (hello, NSFileManager). To the extent that it has an ongoing job, it is to turn algorithms into objects, for the same reason (but the algorithms have been made out of not-objects, because All Hail the Perform Ant).
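
As a minimal sketch of what “turning legacy code into objects” means, here is a hypothetical Software IC that wraps the POSIX gethostname() call. The HostInfo class and its method are my own illustrative inventions, not part of Foundation or any real kit:

```objc
#import <Foundation/Foundation.h>
#include <unistd.h>

// Hypothetical Software IC: wraps the legacy C gethostname() routine
// in an object, so an integrator can glue it to other objects without
// touching C buffers or error codes.
@interface HostInfo : NSObject
- (NSString *)name;
@end

@implementation HostInfo
- (NSString *)name
{
    char buffer[256];
    if (gethostname(buffer, sizeof(buffer)) != 0) {
        return nil;
    }
    buffer[sizeof(buffer) - 1] = '\0'; // gethostname may not terminate on truncation
    return [NSString stringWithUTF8String:buffer];
}
@end

int main(void)
{
    @autoreleasepool {
        HostInfo *host = [[HostInfo alloc] init];
        NSLog(@"This computer is called %@", [host name]);
    }
    return 0;
}
```

The point isn’t the wrapper itself: it’s that once the legacy routine is behind an object boundary, it composes with the rest of the catalogue.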

You can make your software on your computer by glueing objects together, whether they’re made of ObjC (a common and important case), Eiffel (an uncommon and important case), Smalltalk (ditto) or whatever. Objective-C is the shiny surface we’re missing over the tar pit. It is the gear system on the bicycle for the mind; the tool that frees computer users from the tyranny of the app vendor and the app store.

I apologise for taking this long to work that out.

Episode 39: Monetising the Hobby

This episode is about what happens when you let people who are interested in programming (the process) define how you do programming (creating a program).

Please remember you can support me on Patreon! You can also check out my other projects: [objc retain]; and Dos Amigans. Thank you!

Episode 38: the Cost of Dependencies

This episode is all about whether dependencies are expensive or valuable to a software project (the answer, in a lot of cases, is “yes”). It was motivated by Benefits of dependencies in software projects as a function of effort by Eli Bendersky.

Why you didn’t like that thing that company made

There’s been a bit of a thing about software user experience going off the rails lately. Some people don’t like cross-platform software, and think that it isn’t as consistent, as well-integrated, or as empathetic as native software. Some people don’t like native software, thinking that changes in the design of the browser (Apple), the start menu (Microsoft), or everything (GNOME) herald the end of days.

So what’s going on? Why did those people make that thing that you didn’t like? Here are some possibilities.

My cheese was moved

Plenty of people have spent plenty of time using plenty of computers. Some short-sighted individual promised “a computer on every desktop”, made it happen, and this made a lot of people rather angry.

All of these people have learned a way of using these computers that works for them. Not necessarily the one that you or anybody else expects, but one that’s basically good enough. This is called satisficing: finding a good enough way to achieve your goal.

Now removing this satisficing path, or moving it a few pixels over to the left, might make something that’s supposedly better than what was there before, but is actually worse, because the learned behaviour of the people trying to use the thing no longer achieves what they want. It may even be that the original thing is really bad; but because we know how to use it, we don’t want it to change.

Consider the File menu. In About Face 3: The Essentials of Interaction Design, written in 2007, Alan Cooper described all of the problems with the File menu and its operations: New, Open, Save, Save As…. Those operations are implementation-focused: they tell you what the computer will do, which is something the computer should take care of.

He described a different model, based on what people think about how their documents work. Anything you type in gets saved (that’s true of the computer I’m typing this into, which works a lot like a Canon Cat). You can rename the document if you want to give it a different name, and you can duplicate it if you want to give it a different name while keeping the version at the original name.

This should be better, because it makes the computer expose operations that people want to do, not operations that the computer needs to do. It’s like having a helicopter with a single “up” control instead of separate cyclic and collective controls.

Only, replacing the Open/Save/Save As… stuff with the “better” stuff is like taking away the cyclic and collective and giving a trained helicopter pilot with years of experience the “up” button. It doesn’t work the way they expect, and they have to think about it, which they no longer had to do with the cyclic and collective; therefore it’s worse (for them).
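
To make Cooper’s model concrete, here is a sketch of such a document as an object. The MilestoneDocument class, its method names, and the /tmp storage are all my own illustrative choices, not any real framework’s API:

```objc
#import <Foundation/Foundation.h>

// Illustrative only: a document in Cooper's model. Typing is saving;
// the operations a person sees are rename and duplicate, not Save and
// Save As. "Disk" here is just a file under /tmp.
@interface MilestoneDocument : NSObject
@property (copy) NSString *name;
@property (copy) NSString *text;
- (instancetype)initWithName:(NSString *)name;
- (void)appendText:(NSString *)more;                 // implicitly persisted
- (void)renameTo:(NSString *)newName;                // "give it a different name"
- (instancetype)duplicateNamed:(NSString *)newName;  // keep both versions
@end

@implementation MilestoneDocument
- (instancetype)initWithName:(NSString *)name
{
    if ((self = [super init])) {
        _name = [name copy];
        _text = @"";
    }
    return self;
}
- (NSString *)path
{
    return [NSString stringWithFormat:@"/tmp/%@.txt", self.name];
}
- (void)persist
{
    [self.text writeToFile:[self path] atomically:YES
                  encoding:NSUTF8StringEncoding error:NULL];
}
- (void)appendText:(NSString *)more
{
    self.text = [self.text stringByAppendingString:more];
    [self persist]; // no Save command: every change is kept
}
- (void)renameTo:(NSString *)newName
{
    [[NSFileManager defaultManager] removeItemAtPath:[self path] error:NULL];
    self.name = newName;
    [self persist];
}
- (instancetype)duplicateNamed:(NSString *)newName
{
    MilestoneDocument *dup = [[MilestoneDocument alloc] initWithName:newName];
    [dup appendText:self.text];
    return dup;
}
@end
```

Notice there is no save method anywhere in the interface: the trained pilot’s complaint is precisely that the controls they knew are gone, even though nothing they could do before is now impossible.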

Users are more experienced and adaptable now

But let’s look at this a different way. More people have used more computers now than at any earlier point in history, because that’s literally how history works. And while they might not like having their cheese moved, they’re probably OK with learning how a different piece of cheese works because they’ve been doing that over and over each time they visit a new website, play a new game, or use a new app.

Maybe “platform consistency” and “conform with the human interface/platform style guidelines” was a thing that made sense in 1984, when nobody who bought a computer with a GUI had ever used one before and would have to learn how literally everything worked. But now people are more sophisticated in their use of computers, regularly flit between desktop applications, mobile apps, and websites, across different platforms, and so are more flexible and adaptable in using different software with different interactions than they were in the 1980s when you first read the Amiga User Interface Style Guide.

We asked users; they don’t care

At first glance, this explanation seems related to the previous one. We’re doing the agile thing, and talking to our customers, and they’ve never mentioned that the UI framework or the slightly inconsistent controls are an issue.

But it’s actually quite different. The reason users don’t mention the extra cognitive load is that these kinds of mental operations are tacit knowledge. If you’re asked “how can we improve your experience of filing taxes?”, you’ll start thinking about tax-related questions long before you think “I couldn’t press Ctrl-A to get to the beginning of that text field”. I mean, unless you’re a developer who goes out of their way to look for that sort of inconsistency in software.

The trick here is to stop asking, and start watching. Users may well care, even if they don’t vocalise that caring. They may well suffer, even if they don’t realise it hard enough to care.

We didn’t ask users

Yes, that happens. I’ve probably gone into enough depth on why it happens in various places, but here’s the summary: the company has a customer proxy who doesn’t proxy customers.

Episode 37: systemic failures in software

Here we talk about things that can go wrong in a whole software organisation, such that even if everybody does their job to the best of their ability, and their abilities are good, the result is far from optimal.

Identifying these sorts of things relies on being able to see the whole system as, well, a system. A great resource for learning about this is the Donella Meadows project, and you can start with her book Thinking in Systems: a Primer.

On programmer behaviours that make Scrum so bad

Respectable persons of this parish of Internet have been, shall we say, critical of Scrum and its ability to help makers (particularly software developers) to make things (particularly software). Ron Jeffries and GeePaw Hill have both deployed the bullshit word.

My own view, which I have described before, is that Scrum is a baseline product delivery process and a process improvement framework. Unfortunately, not many of us are trained or experienced in process improvement, and not many of us know what we should be measuring to get a better process, so the process never improves.

At best, then, many Scrum organisations stick with the baseline process. That is still scadloads better than what many software organisations were trying before they had heard of Agile and Scrum, or after they had heard of them and before their CIO’s in-flight magazine made them acceptable. As a quick recap for those who were recruited into the post-Agile software industry: we used to spend two months working out all the tasks that would go into our six-month project. Then we’d spend 15 months trying to implement them, then we’d all point at everybody else to explain why it hadn’t worked. Then, if there was any money left, we’d finally remember to ship some software to the customer.

Scrum has improved on that, by distributing both the shipping and the blaming over the whole life span of the project. This was sort of a necessary condition for software development to survive the dot-com crash, when “I have an idea” no longer instantly led to office space in SF, a foosball table, and thirty Herman Miller chairs. You have to deliver something of value early because you haven’t got enough money any more to put it off.

So when these respectable Internet parishioners say that Scrum is bad, this is how bad it has to be to reach that mark. Both of the authors I cite have been drinking from this trough way longer than I have, so have much more experience of the before times.

In this article I wanted to look at some of the things I, and software engineering peers around me, have done to make Scrum this bad. Because, as Ron says, Scrum is systemically bad, and we are part of the system.

Focus on Velocity

One thing it’s possible to measure is how much stuff gets shovelled in per iteration. This can be used to guide a couple of process-improvement ideas. First up is whether our ability to foresee problems arising in implementation is improving: this is “do estimates match actuals” but acknowledging that they never do, and the reason they never do is that we’re too optimistic all the time.

Second is whether the amount of stuff you’re shovelling is sustainable. This has to come second, because you have to have a stable idea of how much stuff there is in “some stuff” before you can determine whether it’s the same stuff you shovelled last time. If you occasionally shovel tons of stuff, then everybody goes off sick and you shovel no stuff, that’s not sustainable pace. If you occasionally shovel tons of stuff, then have to fix a load of production bugs and outages, and do a load of refactoring before you can shovel any more stuff, that’s not a sustainable pace, even if the amount of work in the shovelling phase and the fixing phase is equivalent.

This all makes working out what you can do, and what you can tell other people about what you can do, much easier. If you’re good at working out where the problems lie, and you’re good at knowing how many problems you can encounter and resolve per time, then it’s easier to make plans. Yes, we value responding to change over following a plan, but we also value planning our response to change, and we value communicating the impact of that response.

Even all of that relies on believing that a lot of things that could change, won’t change. Prediction and forecasting aren’t so chaotic that nobody should ever attempt them, but they certainly are sensitive enough to “all things being equal” that it’s important to bear in mind what all the things are and whether they have indeed remained equal.

And so it’s a terrible mistake to assume that this sprint’s velocity should be the same as, or (worse) greater than, the last one. You’re taking a descriptive statistic that should be handled with lots of caveats and using it as a goal. You end up doing the wrong thing.

Let’s say you decide you want 100 bushels of software over the next two weeks, because you shovelled 95 bushels of software in the last two weeks. The easiest way to achieve that is to take things that you think add up to 100 bushels of software, do less work on them so that they contain only, say, 78 bushels, and try to get those 78 bushels done in the time.

Which bits do you cut off? The buttons that the customer presses? No, they’ll notice that. The actions that happen when the customer presses the buttons? They’ll probably notice that, too. How about all the refinement and improvement that will make it possible to add another 100 bushels next time round? Nobody needs that to shovel this 100 bushels in!

But now, next time, shovelling software is harder, and you need to get 110 bushels in to “build on the momentum”. So more corners get cut, and more evenings get worked. And now it’s even harder to get the 120 bushels in that are needed the next time. Actual rate of software is going down, claimed rate of software is going up: soon everything will explode.
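
A toy model makes the shape of this spiral visible. All the numbers here are invented purely for illustration: the target compounds at 10% a sprint, and every bushel of skipped refinement costs half a bushel of real capacity next time:

```objc
#import <Foundation/Foundation.h>

// Toy model of the velocity death spiral, with invented numbers:
// targets compound at 10% per sprint, and each bushel of cut
// refinement work removes half a bushel of real capacity next sprint.
int main(void)
{
    @autoreleasepool {
        double capacity = 95.0; // what the team can actually shovel
        double target = 100.0;  // what the plan demands
        for (int sprint = 1; sprint <= 6; sprint++) {
            double cornersCut = MAX(0.0, target - capacity);
            NSLog(@"Sprint %d: claimed %4.0f bushels, real capacity %4.0f, corners cut %4.0f",
                  sprint, target, capacity, cornersCut);
            capacity -= 0.5 * cornersCut; // skipped refinement bites back
            target *= 1.1;                // "build on the momentum"
        }
    }
    return 0;
}
```

Run it and the claimed throughput climbs geometrically while the real capacity collapses through zero: exactly the divergence between claimed and actual rates of software described above.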

Separating “technical” and “non-technical”

Sometimes also “engineering” and “the business”. Particularly in a software company, this is weird, because “the business” is software engineering, but it’s a problem in other lines of business enabled by software too.

Often, domain experts in companies have quite a lot of technical knowledge and experience. In one fintech where I used to work, the “non-technical” people had a wealth (pun intended) of technical know-how when it came to financial planning and advice. That knowledge is as important to the purpose of making financial technology software as is software knowledge. Well, at least as important.

So why do so many teams delineate “engineering” and “the business”, or “techies” and “non-technical” people? In other words, why do software teams decide that only the software typists get a say in what software gets made using what practices? Why do people divide the world into pigs and chickens (though, in fairness to the Scrum folks, they abandoned that metaphor)?

I think the answer may be defensiveness. We’ve taken our ability to shovel 50 bushels of software and our commitment (it used to be an estimate, but now it’s a commitment) to shovel 120 bushels, and it’s evident that we can’t realistically do that. Why not? Oh, it must be those pesky product owners, who keep bringing demands from the VP of marketing when all they’re going to do is say “build more”, “shovel more software”, and they aren’t actually doing any of it. If the customer rep didn’t keep repping the customer, we’d have time to do the software properly, and that would fix everything.

Note that while the Scrum guide no longer mentions chickens and pigs, it “still makes a distinction between members of the Scrum team and those individuals who are part of the process, but not responsible for delivering products”. This is an important distinction in places, but irrelevant and harmful in others, and it is at its most harmful when it is drawn too narrowly: when people who are part of the process are excluded from the delivering-products cabal. You still need to hear from them and use their expertise, even if you pretend that you don’t.

The related problem I have seen, and been part of, is the software-expertise people not even gaining a passing knowledge of the problem domain. I’ve even seen it working on a developer tools team, where we engineers building the tools had no cause to use, or even particularly to understand, the tool in our day-to-day work. All of those good technical practices, like automated acceptance tests, ubiquitous language, domain-driven design: they only mean shit if they’re helping to make the software compatible with the things people want the software for.

And that means a little bit of give and take on the old job boundaries. Let the rest of the company in on some info about how the software development is going, and learn a little about what the other folks you work with do. Be ready to adopt a more nuanced position on a new feature request than “Jen from sales with another crazy demand”.

Undriven process optimisation

As I said up top, the biggest problem Scrum often encounters in the wild is that it’s a process improvement framework run by people who don’t have any expertise in process improvement, but have read a book on Scrum (if that). This is where the agile consultants and retrospective facilitators usually come in: they do know something about process improvement, and even if they know nothing about your process, it’s probably failing in similar enough ways to similar processes on similar teams that they can quickly identify the patterns and dysfunctions that apply in your context.

Without that guidance, or without that expertise on the team, retrospectives are the regular talking shop in which everybody knows that something is wrong, nobody knows what, and anybody who has a pet thing to try can propose that thing because there are no cogent arguments against giving it a go (nor are there any for it, except that we’ve got to try something!).

Thus we get resume-driven development: I bet that last sprint was bad because we aren’t functionally reactive enough, so we should spend the next sprint rewriting to this library I read about on Medium. Or arbitrary process changes: this sprint we should add a column to the board, because we removed a column last time and look what happened. Or process gaming: I know we didn’t actually finish anything this month, but that’s because we didn’t deploy, so if we call something “done” when it’s been typed into the IDE, we’ll achieve more next month. Or more pigs and chickens: the problem we had was that sales kept coming up with things customers needed, so we should stop the sales people talking to the customers, to us, or to both.

Work out what it is you’re trying to do (not shovelling bushels of software, but the thing you’re trying to do for which software is potentially a solution) and measure that. Do you need to do more of that? Or less of it? Or the same amount for different people? Or for less money? What, about the way you’re shovelling software, could you change that would make that happen?

We’re back at connecting the technical and non-technical parts of the work, of course. To understand how the software work affects the non-software goals we need to understand both, and their interactions. Always have, always will.
