Episode 21: No code is better than no code

In which we recommend deleting your code.

Steve McConnell’s More Effective Agile

Ward Cunningham introduces the debt metaphor for bad code, and my guess is that even if you think you’re familiar with the term “technical debt”, it won’t be familiar as he presented it.

Leave a comment

Episode 20: what do we know about software engineering?

I explain the gap between episodes 19 and 20, and ask whether any of the practices we follow in software engineering are defensible.

As always, I welcome discussing this topic with you! Comment here, or send email to the address I read out in the episode.

Tagged | Leave a comment

The Silent Network

People say that the internet, or maybe specifically the web, holds the world’s information and makes it accessible. Maybe there was a time when that was true. But currently it’s not: probably not because the information is missing, but because the search engines think they know better than you what you want.

I recently had cause to look up an event that I know happened: at an early point in the iPod’s development, Steve Jobs disparaged MP3 players using NAND Flash storage. What were his exact words?

Jobs also disparaged the Adobe (formerly Macromedia) Flash Player media platform, in a widely-discussed blog post on his company website many years later. I knew that this would be a closely-connected story, so I crafted my search terms to exclude it.

Steve Jobs NAND Flash iPod. Steve Jobs Flash MP3 player. Steve Jobs NAND Flash -Adobe. Did any of these work? No, on multiple search engines. Having to try multiple search engines and getting the wrong results on all of them is a 1990s-era web experience. All of these search terms return lists of “Thoughts on Flash” (the Adobe player), reports on that article, later news about Flash Player linking subsequent outcomes to that article, hot takes on why Jobs was wrong in that article, and so on. None of them show me what I asked for.

Eventually I decided to search the archives of one particular blog, which didn’t make the search engines prefer relevant results but did reduce the quantity of irrelevant ones. Finally, on the second page of articles from Daring Fireball about “Steve Jobs NAND flash storage iPod”, I found Flash Gordon. I still don’t have the quote; I have an article about a later development, citing a now-dead link to a story that was itself interpreting the quote.

That’s the closest modern web searching tools would let me get.

Posted in ipod, meta-interwebs | 3 Comments

Apple Silicon, Xeon Phi, and Amigas

The new M1 chip in the new Macs has 8-16GB of DRAM on the package, just like many mobile phones or single-board computers, but unlike most desktop, laptop, or workstation computers (there are exceptions). In the first tranche of Macs using the chip, that’s all the addressable RAM they have (i.e. ignoring caches), just like many mobile phones or single-board computers. But what happens when Apple moves its Apple Silicon chips up the scale, to computers like the iMac or Mac Pro?

It’s possible that these models would have a few GB of memory on-package and access to memory modules connected via a conventional controller, for example DDR4 RAM. They almost certainly would if you could deploy multiple M1 (or successor) packages in a single system. Such a Mac would have a non-uniform memory access (NUMA) architecture, which (depending on how it’s configured) has implications for how software can be designed to best make use of the memory.

NUMA computing is of course not new. If you have a computer with a CPU and a discrete graphics processor, you have a NUMA computer: the GPU has access to RAM that the CPU doesn’t, and vice versa. Running GPU code involves copying data from CPU-memory to GPU-memory, doing GPU stuff, then copying the result from GPU-memory to CPU-memory.
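As a hedged, concrete illustration of that copy dance, here is a minimal C sketch using the CUDA runtime API (one example of a discrete-GPU programming interface; the buffer size, the names, and the elided kernel are all invented for illustration):

#include <stdlib.h>
#include <string.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t n = 1 << 20;
    float *host = malloc(n * sizeof(float));    /* CPU-memory */
    float *device = NULL;                       /* GPU-memory */

    memset(host, 0, n * sizeof(float));         /* pretend this is real input data */
    cudaMalloc((void **)&device, n * sizeof(float));

    /* copy data from CPU-memory to GPU-memory */
    cudaMemcpy(device, host, n * sizeof(float), cudaMemcpyHostToDevice);

    /* ...launch a kernel here to do the GPU stuff... */

    /* copy the result from GPU-memory back to CPU-memory */
    cudaMemcpy(host, device, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(device);
    free(host);
    return 0;
}

On a unified-memory part like the M1, where CPU and GPU share the same RAM, those copies can go away, which is part of why the distinction matters.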

A hypothetical NUMA-because-Apple-Silicon Mac would not be like that. The GPU shares access to the integrated RAM with the CPU, a little like an Amiga. The situation on Amiga was that there was “chip RAM” (which both the CPU and graphics and other peripheral chips could access), and “fast RAM” (only available to the CPU). The fast RAM was faster because the CPU didn’t have to wait for the coprocessors to use it, whereas they had to take turns accessing the chip RAM. Nonetheless, the CPU had access to all the RAM, and programmers had to tell `AllocMem` whether they wanted to use chip RAM, fast RAM, or didn’t care.
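For readers who never met the Amiga API, here is a rough C sketch of that choice as a programmer saw it (the sizes and variable names are invented, and error handling is minimal):

#include <exec/types.h>
#include <exec/memory.h>
#include <proto/exec.h>

int main(void)
{
    /* Memory the custom chips must be able to see: ask for chip RAM. */
    APTR bitplane = AllocMem(32000, MEMF_CHIP | MEMF_CLEAR);

    /* CPU-only data: prefer fast RAM (fails if none is fitted). */
    APTR lookup = AllocMem(4096, MEMF_FAST);

    /* Don't care: let the allocator decide. */
    APTR scratch = AllocMem(1024, MEMF_ANY);

    if (bitplane) FreeMem(bitplane, 32000);
    if (lookup)   FreeMem(lookup, 4096);
    if (scratch)  FreeMem(scratch, 1024);
    return 0;
}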

A NUMA Mac would not be like that, either. It would share the property that there’s a subset of the RAM available for sharing with the GPU, but this memory would be faster than the off-chip memory because of the closer integration and the lack of a (relatively) long communication bus. Apple has described the integrated RAM as “high bandwidth”, which probably means multiple access channels.

A better and more recent analogy for this setup is Intel’s discontinued supercomputer chip, Knights Landing (marketed as Xeon Phi). Like the M1, this chip has 16GB of on-package high-bandwidth memory. Like my hypothetical Mac Pro, it can also access external memory modules. Unlike the M1, it has 64 or 72 identical cores rather than 4 big and 4 little cores.

There are three ways to configure a Xeon Phi computer. You can use no external memory at all, so the CPU uses only its on-package RAM. You can use a cache mode, where the software only “sees” the external memory and the high-bandwidth RAM is used as a cache. Or you can go full NUMA, where programmers have to explicitly request memory in the high-bandwidth region to access it, like with the Amiga allocator.
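One real route for that explicit request was Intel’s memkind library and its hbwmalloc interface. A hedged C sketch of what “going full NUMA” looks like (the sizes and names are invented; link against libmemkind):

#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>   /* Intel memkind's high-bandwidth-memory allocator */

int main(void)
{
    /* Is any high-bandwidth memory exposed as its own NUMA node? */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "no high-bandwidth memory visible; not in flat/NUMA mode\n");
        return 1;
    }

    /* Explicitly place the hot working set in the high-bandwidth region... */
    double *hot = hbw_malloc(64UL * 1024 * 1024);

    /* ...and leave the bulk of the data in ordinary external DDR4. */
    double *bulk = malloc(512UL * 1024 * 1024);

    /* do the actual work here */

    hbw_free(hot);
    free(bulk);
    return 0;
}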

People rarely go full NUMA. It’s hard to work out what split of allocations between the high-bandwidth and regular RAM yields the best performance, so people tend to just run in cache mode and hope that’s faster than not having any on-package memory at all.

And that makes me think that a Mac would either not go full NUMA, or would not have public API for it. Maybe Apple would let the kernel and some OS processes have exclusive access to the on-package RAM, but even that seems overly complex (particularly where you have more than one M1 in a computer, so you need to specify core affinity for your memory allocations in addition to memory type). My guess is that an early workstation Mac with 16GB of M1 RAM and 64GB of DDR4 RAM would look like it has 64GB of RAM, with the on-package memory used for the GPU and as cache. NUMA APIs, if they come at all, would come later.

Posted in Amiga, arm, HPC, Mac | 7 Comments

Recovering from deleting your login shell on a Mac

In case you ever need it. If you’re searching for something like “deleted login shell Mac can’t open terminal”, this is the post for you.

I just deleted my login shell (because it was installed with homebrew, and I removed homebrew without remembering that I would lose my shell). That stopped me from opening a Terminal window, because it would immediately bomb out as it was unable to open the shell.

Unable to open a normal Terminal window, anyway. In the Shell menu, the “New Command…” item let me run /bin/bash -l, from which I got to a login-like bash shell. Then I could run this command:

chsh -s /bin/zsh

Enter my password, and then I have a normal shell again.

(So I could then install MacPorts, and then change my shell to /opt/local/bin/bash)

Posted in Mac | Leave a comment

Lambda: the ultimate polymath

Thinking back over the last couple of years, I’ve had to know quite a bit about a few different topics to be able to write good software. Those topics:

  • Epidemiology
  • Architecture
  • Plant Sciences
  • Histology
  • Education

Not much knowledge in each field, but definitely expert-level knowledge: enough to have conversations as a peer with experts in those fields, to follow the jargon, and to make reasoned assessments of those experts’ suggestions for software designed for their use. And, where I’ve wanted to propose alternative suggestions, enough expert-level knowledge to identify and justify a different design.

Going back over the rest of my career in software:

  • Pensions, investments, and saving
  • Mobile Telecommunications
  • Terry Pratchett’s Discworld
  • Macromolecular crystallography
  • Synchrotron accelerators
  • Home automation
  • Yoga
  • Soda drinks dispensers
  • Banks
  • Quite a bit of physics

In fact I’d estimate that I’ve spent less than 40% of my “professional career” in jobs where knowing software or even computers in general was the whole thing.

Working in software is about understanding a system to the point where you can use a computer to make a meaningful and beneficial contribution to that system. While the systems thinking community have great tools for mapping out the behaviour of systems, they are only really good for making initial models. In order to get good at it, we have to actually understand the system on its own terms, with the same ideas and biases that the people who interact regularly with the system use.

But of course, because we’re hired for our computering skills, we get to experience and work with all of these different systems. It’s perhaps one of the best domains in which to be a polymath. To navigate it effectively, we need to accept that we are not the experts. We’re visitors, who get to explore other people’s worlds.

We should take them up on that offer, though, if we’re going to be effective. If we maintain the distinction between “technical” and “non-technical” people, or between “engineering” and “the business”, then we deny ourselves the ability to learn about the problem we’re trying to solve, and to make a good solution.

Posted in advancement of the self | Leave a comment

Discipline doesn’t scale

If programmers were just more disciplined, more professional, they’d write better software. All they need is a code of conduct telling them how to work like those of us who’ve worked it out.

The above statement is true, which is a good thing for those of us interested in improving the state of software and in helping our fellow professionals to improve their craft. However, it’s also very difficult and inefficient to apply, in addition to being entirely unnecessary. In the common parlance of our industry, “discipline doesn’t scale”.

Consider the trajectory of object lifecycle management in the Objective-C programming language, particularly the NeXT dialect. Between 1989 and 1995, the dominant way to deal with the lifecycle of objects was to use the +new and -free methods, which work much like malloc/free in C or new/delete in C++. Of course it’s possible to design a complex object graph using this ownership model, it just needs discipline, that’s all. Learn the heuristics that the experts use, and the techniques to ensure correctness, and get it correct.

But you know what’s better? Not having to get that right. So around 1994 people introduced new tools to do it an easier way: reference counting. With the NeXTSTEP Mach Kit’s NXReference protocol and OpenStep’s NSObject, developers no longer need to know when everybody in an app is done with an object before destroying it. They can indicate when a reference is taken and when it’s relinquished, and the object itself will see when it’s no longer used and free itself. Learn the heuristics and techniques around autoreleasing and unretained references, and get it correct.
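As a generic sketch of that shape (plain C, invented names; emphatically not the NXReference or NSObject API): each object carries a count, anyone taking a reference increments it, and the release that brings it to zero frees the object.

#include <stdlib.h>

typedef struct widget {
    int refcount;
    /* ...the object's actual state would live here... */
} widget;

widget *widget_new(void)
{
    widget *w = calloc(1, sizeof *w);
    if (w) w->refcount = 1;        /* the creator holds the first reference */
    return w;
}

widget *widget_retain(widget *w)
{
    w->refcount++;                 /* indicate that a reference is taken */
    return w;
}

void widget_release(widget *w)
{
    if (--w->refcount == 0)        /* the object sees it is no longer used... */
        free(w);                   /* ...and frees itself */
}

The discipline is in making every retain meet exactly one release; the heuristics the author mentions are how you keep that promise across a whole object graph.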

But you know what’s better? Not having to get that right. So a couple of other tools were introduced, so close together that they were probably developed in parallel[*]: Objective-C 2.0 garbage collection (2007) and Automatic Reference Counting (2011). ARC “won” in popular adoption so let’s focus there: developers no longer need to know exactly when to retain, release, or autorelease objects. Instead of describing the edges of the relationships, they describe the meanings of the relationships and the compiler will automatically take care of ownership tracking. Learn the heuristics and techniques around weak references and the “weak self” dance, and get it correct.

[*] I’m ignoring here the significantly earlier integration of the Boehm conservative GC with Objective-C, because so did everybody else. That in itself is an important part of the technology adoption story.

But you know what’s better? You get the idea. You see similar things happen in other contexts: for example C++’s move from new/delete to smart pointers follows a similar trajectory over a similar time. The reliance on an entire programming community getting some difficult rules right, when faced with the alternative of using different technology on the same computer that follows the rules for you, is a tough sell.

It seems so simple: computers exist to automate repetitive information-processing tasks. Requiring programmers who have access to computers to recall and follow repetitive information processes is wasteful, when the computer can do that. So give those tasks to the computers.

And yet, for some people the problem with software isn’t a lack of automation but a lack of discipline. Software would be better if only people knew the rules, honoured them, and slowed themselves down so that instead of cutting corners they just chose to ignore important business milestones. Back in my day, everybody knew “no Markdown around town” and “don’t code in an IDE after Labour Day”, but now the kids do whatever they want. The motivations seem different, and I’d like to sort them out.

Let’s start with hazing. A lot of the software industry suffers from “I had to go through this, you should too”. Look at software engineering interviews, for example. I’m not sure whether anybody actually believes “I had to deal with carefully ensuring NUL-termination to avoid buffer overrun errors so you should too”, but I do occasionally still hear people telling less-experienced developers that they should learn C to learn more about how their computer works. Your computer is not a fast PDP-11; all you will learn is how the C virtual machine works.

Just as Real Men Don’t Eat Quiche, so real programmers don’t use Pascal. Real Programmers use FORTRAN. This motivation for sorting discipline from rabble is based on the idea that if it isn’t at least as hard as it was when I did this, it isn’t hard enough. And that means that the goalposts are movable, based on the orator’s experience.

This is often related to the term of their experience: you don’t need TypeScript to write good React Native code, just JavaScript and some discipline. You don’t need React Native to write good front-end code, just jQuery and some discipline. You don’t need jQuery…

But along with the term of experience goes the breadth. You see, the person who learned reference counting in 1995 and thinks that you can only really understand programming if you manually type out your own reference-changing events, presumably didn’t go on to use garbage collection in Java in 1996. The person who thinks you can only really write correct software if every case is accompanied by a unit test presumably didn’t learn Eiffel. The person who thinks that you can only really design systems if you use the Haskell type system may not have tried OCaml. And so on.

The conclusion is that for this variety of disciplinarian, the appropriate character and quantity of discipline is whatever they had to deal with at some specific point in their career. Probably a high point: after they’d got over the tricky bits and got productive, and after you kids came along and ruined everything.

Sometimes the reason for suggesting the disciplined approach is entomological in nature, as in the case of the eusocial insect the “performant” which, while not a real word, exists in greater quantities in older software than in newer software, apparently. The performant is capable of making software faster, or use less memory, or more concurrent, or less dependent on I/O: the specific characteristics of the performant depend heavily on context.

The performant is often not talked about in the same sentences as its usual companion species, the irrelevant. Yes, there may be opportunities to shave a few percent off the runtime of that algorithm by switching from the automatic tool to the manual, disciplined approach, but does that matter (yet, or at all)? There are software-construction domains where specific performance characteristics are desirable; indeed, that’s true across a lot of software. But it’s typical to focus performance-enhancing techniques on the bits where they enhance performance that needs enhancing, not to adopt them across the whole system on the basis that it was better when everyone worked this way. You might save a few hundred cycles writing native software instead of using a VM for that UI method, but if it’s going to run after a network request completes over EDGE and then trigger a 1/3s animation, nobody will notice the improvement.

Anyway, whatever the source, the problem with calls for discipline is that there’s no strong motivation to become more disciplined. I can use these tools, and my customer is this much satisfied, and my employer pays me this much. Or I can learn from you how I’m supposed to be doing it, which will slow me down, for…your satisfaction? So you know I’m doing it the way it’s supposed to be done? Or so that I can tell everyone else that they’re doing it wrong, too? Sounds like a great deal.

Therefore discipline doesn’t scale. Whenever you ask some people to slow down and think harder about what they’re doing, some fraction of them will. Some will wonder whether there’s some other way to get what you’re peddling, and may find it. Some more will not pay any attention. The dangerous ones are the ones who thought they were paying attention and yet still end up not doing the disciplined thing you asked for: they either torpedo your whole idea or turn it into not doing the thing (see OOP, Agile, Functional Programming). And still more people, by far the vast majority, just weren’t listening at all, and you’ll never reach them.

Let’s flip this around. Let’s look at where we need to be disciplined, and ask if there are gaps in the tool support for software engineers. Some people want us to always write a failing test and make it pass before adding any code (or want us to write a passing test and revert our changes if it accidentally fails): does that mean our tools should not let us write code for which there’s no test? Does the same apply for acceptance tests? Some want us to refactor mercilessly; does that mean our design tools should always propose more parsimonious alternatives for passing the same tests? Some say we should get into the discipline of writing code that always reveals its intent: should the tools make a crack at interpreting the intention of the code-as-prose?

Posted in C++, learning, software-engineering | Tagged | 6 Comments

Episode 19: How many Macs?

In which I admit to having used a very large number of Apple computers, even if I exclude vintage pieces. But actually talk about setting up a second one. The Twitter poll that started it all.

Leave a comment

Reflections on an iBook G4

I had an item in OmniFocus to “write on why I wish I was still using my 2006 iBook”, and then Tim Sneath’s tweet on unboxing a G4 iMac sealed the deal. I wish I was still using my 2006 iBook. I had been using NeXTSTEP for a while, and Mac OS X for a short amount of time, by this point, but on borrowed hardware, mostly spares from the University computing lab.

My “up-to-date” setup was my then-girlfriend’s PowerBook G3 “Wall Street” model, which upon being handed down to me usually ran OpenDarwin, Rhapsody, or Mac OS X 10.2 Jaguar, which was the last release to boot properly on it. When I went to WWDC for the first time in 2005 I set up X Post Facto, a tool that would let me (precariously) install and run 10.3 Panther on it, so that I could ask about Cocoa Bindings in the labs. I didn’t get to run the Tiger developer seed we were given.

When the dizzying salary of my entry-level sysadmin job in the Uni finally made a dent in my graduate-level debts, I scraped together enough money for the entry-level 12” iBook G4 (which did run Tiger, and Leopard). I think it lasted four years until I finally switched to Intel, with an equivalent white acrylic 13” MacBook model. Not because I needed an upgrade, but because Apple forced my hand by making Snow Leopard (OS X 10.6) Intel-only. By this time I was working as a Mac developer so had bought in to the platform lock-in, to some extent.

The treadmill turns: the white MacBook was replaced by a mid-decade MacBook Air (for 64-bit support), which developed a case of “fruit juice on the GPU” so finally got replaced by the 2018 15” MacBook Pro I use to this day. Along the way, a couple of iMacs (both Intel, both aluminium, the second being an opportunistic upgrade: another hand-me-down) came and went, though the second is still used by a friend.

Had it not been for the CPU changes and my need to keep up, could I still use that iBook in 2020? Yes, absolutely. Its replaceable battery could be improved, its browser could be the modern TenFourFox, the hard drive could be replaced with an SSD, and then I’d have a fast, quiet computer that can compile my code and browse the modern Web.

Would that be a great 2020 computer? Not really. As Steven Baker pointed out when we discussed this, computers have got better in incremental ways that eventually add up: hardware AES support for transparent disk encryption. Better memory controllers and more RAM. HiDPI displays. If I replaced the 2018 MBP with the 2006 iBook today, I’d notice those things get worse way before I noticed that the software lacked features I needed.

On the other hand, the hardware lacks a certain emotional playfulness: the backlight shining through the Apple logo. The sighing LED indicating that the laptop is asleep. The reassuring clack of the keys.

Are those the reasons this 2006 computer speaks to me through the decades? They’re charming, but they aren’t the whole reason. Most of it comes down to an impression that that computer was mine and I understood it, whereas the MBP is Apple’s and I get to use it.

A significant input into that is my own mental health. Around 2014 I got into a big burnout, and stopped paying attention to the updates. As a developer, that was a bad time because it was when Apple introduced, and started rapidly iterating on, the Swift programming language. As an Objective-C and Python expert (I’ve published books on both), with limited emotional capacity, I didn’t feel the need to become an expert on yet another language. To this day, I feel like a foreign tourist in Swift and SwiftUI, able to communicate intent but not to fully immerse in the culture and understand its nuances.

A significant part of that is the change in Apple’s stance from “this is how these things work” to “this is how you use these things”. I don’t begrudge them that at all (I did in the Dark Times), because they are selling useful things that people want to use. But there is decidedly a change in tone, from the “Come in it’s open” logo on the front page of the developer website of yore to the limited, late open source drops of today. From the knowledge-oriented programming guides of the “blue and white” documentation archive to the task-oriented articles of today.

Again, I don’t begrudge this. Developers have work to do, and so want to complete their tasks. Task-oriented support is entirely expected and desirable. I might formulate an argument that it hinders “solutions architects” who need to understand the system in depth to design a sympathetic system for their clients’ needs, but modern software teams don’t have solutions architects. They have their choice of UI framework and a race to an MVP.

Of course, Apple’s adoption of machine learning and cloud systems also means that in many cases, the thing isn’t available to learn. What used to be an open source software component is now an XPC service that calls into a black box that makes a network request. If I wanted to understand why the spell checker on modern macOS or iOS is so weird, Apple would wave their figurative hands and say “neural engine”.

And a massive contribution is the increase in scale of Apple’s products in the intervening time. Bear in mind that at the time of the 2006 iBook, I had one of Apple’s four Mac models, access to an Xserve and an AirPort base station, and a friend who had an iPod, and felt like I knew the whole widget. Now, I have the MBP (one of six models), an iPhone (not the latest model), an iPad (not latest, not Pro), the TV doohickey, no watch, no speaker, no home doohickey, no auto-unlock car, and I’m barely treading water.

Understanding a G4-vintage Mac meant understanding PPC, Mach, BSD Unix, launchd, a couple of directory services, Objective-C, Cocoa, I/O Kit, Carbon, AppleScript, the GNU tool chain and Jam, sqlite3, WebKit, and a few ancillary things like the Keychain and HFS+. You could throw in Perl, Python, and the server stuff like XSAN and XGrid, because why not?

Understanding a modern Mac means understanding that, minus PPC, plus x86_64, the LLVM tool chain, sandbox/seatbelt, Scheme, Swift, SwiftUI, UIKit, “modern” AppKit (with its combination of layer-backed, layer-hosting, cell-based and view-based views), APFS, JavaScript and its hellscape of ancillary tools, geocoding, machine learning, the T2, BridgeOS…

I’m trying to trust a computer I can’t mentally lift.

Posted in AAPL, carbon, cocoa, darwin, Mac | 1 Comment

Running Linux GUI apps under MacOS using Docker

I had need to test an application built for Linux, and didn’t want to run a whole desktop in a window using Virtualbox. I found the bits I needed online in various forums, but nowhere was it all in one place. It is now!

Prerequisites: Docker and XQuartz. Both can be installed with Homebrew.

Create a Dockerfile:

FROM debian:latest

# Install a GUI app to test with (iceweasel is Debian's Firefox rebrand)
RUN apt-get update && apt-get install -y iceweasel

# Create a non-root user whose uid/gid match the macOS defaults (501/staff),
# so X11 connections and any mounted folders behave sensibly
RUN export uid=501 gid=20 && \
    mkdir -p /home/user && \
    echo "user:x:${uid}:${gid}:User,,,:/home/user:/bin/bash" >> /etc/passwd && \
    echo "staff:x:${gid}:" >> /etc/group && \
    echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    chmod 0440 /etc/sudoers && \
    chown ${uid}:${gid} -R /home/user

USER user
ENV HOME /home/user
CMD /usr/bin/iceweasel

It’s good to mount the Downloads folder within /home/user, or your Documents, or whatever. On Catalina or later you’ll get warnings asking whether you want to give Docker access to those folders.

The first time through, open XQuartz, go to Preferences > Security, and check the option to allow connections from network clients; then quit XQuartz.

Now open XQuartz again, and in the xterm type (with $YOUR_IP set to your Mac’s IP address):

$ xhost +$YOUR_IP
$ docker build -f Dockerfile -t firefox .
$ docker run -it -e DISPLAY=$YOUR_IP:0 -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/Downloads:/home/user/Downloads firefox

Enjoy firefox (or more likely, your custom app that you’re testing under Linux)!

Iceweasel on Debian on macOS

Posted in cross-platform, Mac | 2 Comments