The “return a command” trick

This is a nice trick, but we need a phrase for that thing where you implement extreme late binding of functions by invoking an active function that selects the function you want based on its name. I can imagine the pattern catching on.
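For concreteness, here is a minimal sketch of the trick in Objective-C, where the runtime already supplies the "active function" that selects a method from its name (the Greeter class and its methods are hypothetical):

    #import <Foundation/Foundation.h>

    // The commands we might want to bind to at the last possible moment.
    @interface Greeter : NSObject
    - (void)sayHello;
    - (void)sayGoodbye;
    @end

    @implementation Greeter
    - (void)sayHello   { NSLog(@"Hello!"); }
    - (void)sayGoodbye { NSLog(@"Goodbye!"); }
    @end

    int main(void)
    {
        @autoreleasepool {
            Greeter *greeter = [Greeter new];
            // Build the function's name at runtime, then ask the runtime
            // to select and invoke it: extreme late binding, by name.
            SEL command = NSSelectorFromString(@"sayHello");
            if ([greeter respondsToSelector:command]) {
                [greeter performSelector:command];
            }
        }
        return 0;
    }

Under ARC, that performSelector: line even earns a compiler warning, precisely because the compiler can't know at build time what will be invoked: that's the extreme late binding at work.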


The missing principle in agile software development

The biggest missing feature in the manifesto for agile software development and the principles behind it is anyone other than the makers and their customer. We get autonomous, self-organising delivery teams but without the sense of responsibility to a broader society one would expect from autonomous professional agents.

Therefore it’s no surprise to find developers working to turn their colleagues into a below-minimum-wage precariat; to rig elections at scale; or to implement family separation policies or travel bans on religious minorities. A principled agile software developer only needs to ask “how can I deliver this, early and continuously, to the customer?” and “how can we adjust our behaviour to become more effective at this?”: they do not need to ask “is this a good idea?” or “should this be delivered at all?”

Principle Zero ought to read something like this.

We build software that transforms the workplace, leisure, interpersonal interaction, and society at large: we do so in consultation with as broad a representation of those interests as possible and ensure that our software is beneficial to those whose lives are being transformed. Within that context, we follow these principles:


Episode 41: Professional Software

We talk about software engineering as a profession.


Episode 40: Falsehoods Programmers Believe About Computer Programs

This episode is about truisms that aren’t, in the world of the computer. I’ve already written an article, falsehoods programmers believe about programming, on a similar topic, but in this episode I go into way more depth on the counter-examples to one falsehood, rather than trying to supply a bulleted list of many falsehoods.

Other “falsehoods programmers believe” articles include falsehoods programmers believe about time (I said dates on the recording), and falsehoods programmers believe about addresses.

This episode’s falsehood is that go to statements are harmful, based on the assertion by Dijkstra (or was it?), Go To Statement Considered Harmful. Along the way we meet A Discipline of Programming, and the Psychological Study of Programming.
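I won't reproduce the episode's counter-examples here, but as one hedged illustration of why "never use go to" is a falsehood, consider the single-exit cleanup idiom common in C-family code (Objective-C here, as a C superset; the function is hypothetical):

    #import <Foundation/Foundation.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Hypothetical example: goto used for orderly cleanup on the error
    // paths, keeping a single exit point instead of nested conditionals.
    static BOOL firstKilobyteIsNonEmpty(const char *path)
    {
        BOOL success = NO;
        char *buffer = NULL;
        FILE *file = fopen(path, "r");
        if (file == NULL) goto done;

        buffer = malloc(1024);
        if (buffer == NULL) goto closeFile;

        success = (fread(buffer, 1, 1024, file) > 0);

        free(buffer);
    closeFile:
        fclose(file);
    done:
        return success;
    }

Whether that counts as harmful is exactly the kind of question the episode digs into.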

I also mention that you can support the podcast!


Apple and Bug Bounties

I know that there are bigger problems to discuss about Apple’s approach to business and partnerships at the mo, but their handling of security researchers seems particularly cynical and hypocritical. See, for example, this post about four reported iPhone 0days that went ignored and the nine other cases linked in that article.

Apple advertise themselves as the privacy company. By this, they really mean that their products are designed to share as much of your data with Apple as they are comfortable with, and that beyond that you should probably assume that nobody else is involved. But their security development lifecycle tells another story.

“Wait, did you just pivot from talking about privacy to security?” No! You can have security without privacy: this blog has that, at first glance. All of the posts and pages are public, anybody can read them, but I want to make sure that what you read is actually what I wrote (the integrity of the posts) and that nothing stops you from reading it when you want (the availability). On closer examination, I also care that there are things you don’t have access to: the account passwords, configuration settings, draft posts, and so on. So in fact the blog has privacy requirements too, and those are handled in security by considering and protecting the confidentiality of those private assets. You can have security without privacy, but not privacy without security.

Something, though from the outside I’m not sure what, is wrong with the security development lifecycle at Apple. As a privacy-focused company they should also be a security-focused company, but they evidently never had the same “trustworthy computing” moment that Microsoft did. I’m not going to do any kind of deep dive into CVE counts here, just provide the following high-level support for the case that Apple is, at best, doing no better at this than anybody else in the industry.

Meanwhile, they fail to acknowledge external contributors to their product security, do not pay out agreed bounties, and sue security researchers or ban them from their store. Apple say that the bounty program doubled over 2019-2020 and continues to grow; taken at face value, that suggests they aren’t doing any worse, even if they aren’t doing any better. Yet at every new product announcement, senior managers at Apple, up to and including their CEO, tell everyone how great they are at privacy. The intent is that people believe Apple are doing the best at this, when they are around the middle of the pack. That is disingenuous.

A bug bounty program is a security process of last resort. You didn’t design the flaws out of your product, or fix them, before it got to customers and attackers: that happens, and it’s fine, but these escapee threats that are realised as vulnerabilities should be a small fraction of the total possible problems, and the lower-severity ones at that. You also didn’t detect the problem yourself once customers and attackers had access to the product: that also happens and is fine, but again the vulnerabilities that escape your detection should be the lowest-severity ones. Once someone else is discovering your vulnerabilities, the correct thing to do is to say: thank you for bringing this to our attention rather than exploiting it yourself; here is compensation for the time and work you put into making our product better.

Apple is not doing this. As the various stories linked above show, they are leaving security researchers with a bitter taste and a questioning feeling over whether they would want to work with Apple again, while not doing the heavy lifting to ensure their SDLC catches the highest-severity problems on campus, before or after release. I don’t know what is at fault here, but I expect it’s systemic rather than any individual leader, department, or activity. The product security folks at Apple are good at their jobs, the software engineers are good at their jobs…and yet here we are.

I suspect a certain amount of large-company effect is at play. “As Tim told you, our products are best in class for privacy,” says anonymous and fictional somewhat high up marketing person, “and if you had any specific complaint I couldn’t hear it over all the high-volume stock cash register sound effects we play in the board room to represent our success in the marketplace.”


In which I misunderstood Objective-C

I was having a think about this short history of Objective-C, and it occurred to me that perhaps I had been thinking about ObjC wrong. Now, I realise that by thinking about ObjC at all I mark myself out as a bit of an oddball, but I do it a lot. I co-host the [objc retain]; stream with Steven Baker, discussing cross-platform free software Objective-C every week. Hell of a time to realise I’ve been doing it wrong.

My current thinking is that the idea of ObjC is not to write “apps” in ObjC, or even in successor languages (sorry, fans of successor languages). Summed up in that history are Brad Cox’s views, which you can read at greater length in his books. I’ve at least tangentially covered each book here: Object-Oriented Programming: an Evolutionary Approach and Superdistribution: Objects as Property on the Electronic Frontier. In these he talks about Object-Oriented Programming as the “software industrial revolution”, in which the each-one-is-bespoke way of writing software from artisanally-selected ones and lightly-sparkling zeroes is replaced with a catalogue of re-usable parts, called Software ICs (integrated circuits). As an integrator, I might take the “window” IC, the “button” IC, the “text field” IC, and a “data store” IC and make a board for entering expenses.

So far, so npm. The key bit is the next bit. As a computer owner, you might take that board and integrate it into your computer so that you can do your home finances, or so that you can submit your business expense claims, or so that your characters in The Sims can claim for their CPU time, or all three of those things. The key is that this isn’t some app developer, this is the person whose computer it is.

From that perspective, Objective-C is an intermediary tool, and not a particularly important or long-lasting one. Its job is to turn legacy code into objects, so that people who use their computers by sticking objects together can get at it (hello NSFileManager). To the extent that it has an ongoing job, it is to turn algorithms into objects, for the same reason (but the algorithms have been made out of not-objects, because All Hail the Perform Ant).
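As a minimal sketch of that intermediary job (with a hypothetical class, and an arbitrary choice of legacy interface), here is a C API being turned into an object:

    #import <Foundation/Foundation.h>
    #include <unistd.h>

    // Hypothetical Software IC: wraps the legacy gethostname() C call
    // so that integrators deal in objects, not char buffers.
    @interface HostInfo : NSObject
    - (NSString *)hostName;
    @end

    @implementation HostInfo
    - (NSString *)hostName
    {
        char buffer[256];
        if (gethostname(buffer, sizeof buffer) != 0) {
            return nil;
        }
        return [NSString stringWithUTF8String:buffer];
    }
    @end

    int main(void)
    {
        @autoreleasepool {
            // The caller sees only objects and messages; the legacy
            // interface stays hidden behind the IC's pins.
            NSLog(@"%@", [[HostInfo new] hostName]);
        }
        return 0;
    }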

You can make your software on your computer by glueing objects together, whether they’re made of ObjC (a common and important case), Eiffel (an uncommon and important case), Smalltalk (ditto) or whatever. Objective-C is the shiny surface we’re missing over the tar pit. It is the gear system on the bicycle for the mind; the tool that frees computer users from the tyranny of the app vendor and the app store.

I apologise for taking this long to work that out.


Episode 39: Monetising the Hobby

This episode is about what happens when you let people who are interested in programming (the process) define how you do programming (creating a program).

Links:

Please remember you can support me on Patreon! You can also check out my other projects: [objc retain]; and Dos Amigans. Thank you!


Episode 38: the Cost of Dependencies

This episode is all about whether dependencies are expensive or valuable to a software project (the answer is “yes” in a lot of cases). It was motivated by Benefits of dependencies in software projects as a function of effort by Eli Bendersky.


Why you didn’t like that thing that company made

There’s been a bit of a thing about software user experience going off the rails lately. Some people don’t like cross-platform software, and think that it isn’t as consistent, as well-integrated, or as empathetic as native software. Some people don’t like native software, thinking that changes in the design of the browser (Apple), the start menu (Microsoft), or everything (GNOME) herald the end of days.

So what’s going on? Why did those people make that thing that you didn’t like? Here are some possibilities.

My cheese was moved

Plenty of people have spent plenty of time using plenty of computers. Some short-sighted individual promised “a computer on every desktop”, made it happen, and this made a lot of people rather angry.

All of these people have learned a way of using these computers that works for them. Not necessarily the one that you or anybody else expects, but one that’s basically good enough. This is called satisficing: finding a good enough way to achieve your goal.

Now removing this satisficing path, or moving it a few pixels over to the left, might make something that’s supposedly better than what was there before, but is actually worse, because the learned behaviour of the people trying to use the thing no longer achieves what they want. It may even be that the original thing is really bad: but because we know how to use it, we don’t want it to change.

Consider the File menu. In About Face 3: The Essentials of Interaction Design, written in 2007, Alan Cooper described all of the problems with the File menu and its operations: New, Open, Save, Save As…. Those operations are implementation-focused. They tell you what the computer will do, which is something the computer itself should take care of.

He described a different model, based on what people think about how their documents work. Anything you type in gets saved (that’s true of the computer I’m typing this into, which works a lot like a Canon Cat). You can rename a document if you want to give it a different name, and you can duplicate it if you want to give it a different name while keeping the version at the original name.
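As a sketch of how that model might look in code (Objective-C, with entirely hypothetical names), the point is that there is no Save operation for the person to remember, because every edit writes itself through to storage:

    #import <Foundation/Foundation.h>

    // Hypothetical sketch: a document with no Save command. Editing
    // persists immediately; rename and duplicate are the only
    // file-level operations exposed to the person using it.
    @interface Document : NSObject
    @property (nonatomic, copy) NSString *path;
    - (instancetype)initWithPath:(NSString *)path;
    - (void)appendText:(NSString *)text;
    - (BOOL)renameTo:(NSString *)newPath;
    - (BOOL)duplicateTo:(NSString *)copyPath;
    @end

    @implementation Document
    {
        NSMutableString *contents;
    }

    - (instancetype)initWithPath:(NSString *)path
    {
        if ((self = [super init])) {
            _path = [path copy];
            NSString *existing = [NSString stringWithContentsOfFile:path
                encoding:NSUTF8StringEncoding error:NULL];
            contents = [(existing ?: @"") mutableCopy];
        }
        return self;
    }

    - (void)appendText:(NSString *)text
    {
        [contents appendString:text];
        // The edit itself is the save: no separate Save operation.
        [contents writeToFile:self.path atomically:YES
            encoding:NSUTF8StringEncoding error:NULL];
    }

    - (BOOL)renameTo:(NSString *)newPath
    {
        NSFileManager *fm = [NSFileManager defaultManager];
        if (![fm moveItemAtPath:self.path toPath:newPath error:NULL]) {
            return NO;
        }
        self.path = newPath;
        return YES;
    }

    - (BOOL)duplicateTo:(NSString *)copyPath
    {
        NSFileManager *fm = [NSFileManager defaultManager];
        return [fm copyItemAtPath:self.path toPath:copyPath error:NULL];
    }
    @end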

This should be better, because it makes the computer expose operations that people want to do, not operations that the computer needs to do. It’s like having a helicopter with an “up” control instead of cyclic and collective controls.

Only, replacing the Open/Save/Save As… stuff with the “better” stuff is like taking away the cyclic and collective controls and giving a trained helicopter pilot with years of experience the “up” button. It doesn’t work the way they expect; they have to think about it, which they no longer had to do with the cyclic and collective controls; therefore it’s worse (for them).

Users are more experienced and adaptable now

But let’s look at this a different way. More people have used more computers now than at any earlier point in history, because that’s literally how history works. And while they might not like having their cheese moved, they’re probably OK with learning how a different piece of cheese works because they’ve been doing that over and over each time they visit a new website, play a new game, or use a new app.

Maybe “platform consistency” and “conform with the human interface/platform style guidelines” made sense in 1984, when nobody who bought a computer with a GUI had ever used one before and would have to learn how literally everything worked. But now people are more sophisticated in their use of computers: they regularly flit between desktop applications, mobile apps, and websites across different platforms, and so are more flexible and adaptable in using different software with different interactions than they were in the 1980s, when you first read the Amiga User Interface Style Guide.

We asked users; they don’t care

At first glance, this explanation seems related to the previous one. We’re doing the agile thing, and talking to our customers, and they’ve never mentioned that the UI framework or the slightly inconsistent controls are an issue.

But it’s actually quite different. The reason users don’t mention the extra cognitive load is that these kinds of mental operations are tacit knowledge. If you’re asked “how can we improve your experience of filing taxes?”, you’ll start thinking about tax-related answers, long before you think “I couldn’t press Ctrl-A to get to the beginning of that text field”. I mean, unless you’re a developer who goes out of their way to look for that sort of inconsistency in software.

The trick here is to stop asking and start watching. Users may well care, even if they don’t vocalise that caring. They may well suffer, even if they don’t notice the suffering enough to care.

We didn’t ask users

Yes, that happens. I’ve probably gone into enough depth on why it happens in various places, but here’s the summary: the company has a customer proxy who doesn’t proxy customers.


Episode 37: systemic failures in software

Here we talk about things that can go wrong in a whole software organisation, such that even if everybody does their job to the best of their ability, and their abilities are good, the result is far from optimal.

Identifying these sorts of things relies on being able to see the whole system as, well, a system. A great resource for learning about this is the Donella Meadows Project, and you can start with her book Thinking in Systems: A Primer.
