Episode 32: freeing software from source code

I muse on a concept I’ve been thinking about for a long time: that software engineering is trapped within the confines of the fixed-width text editing paradigm. This was motivated by a discussion with Orta about his Shiki Twoslash project but it’s been rattling around in here for years.

I also talk about Pharo, the Lindisfarne Gospels, Code Bubbles, Scratch, RStudio and more!


A hopefully modern description of Object-Oriented Analysis

Over the years, not least in the book Object-Oriented Programming the Easy Way, I’ve made a lot of my assertion that one reason people are turned off by Object-Oriented Programming is that they weren’t doing Object-Oriented Design. Smalltalk was conceived as a system for letting children explore computers by writing simulations of other problems, and if you haven’t got the bit where you’re creating simulations of other problems, then you’re just writing code with no goal in mind.

Taken on its own, Object-Oriented Programming is a bit of a weird approach to modularity where a package’s code is accessed through references to that package’s record structures. You’ve got polymorphism, but no guidance as to what any of the individual morphs are supposed to be, let alone a strategy for combining them effectively. You’ve got inheritance, but inheritance of what, by what? If you don’t know what the modules are supposed to look like, then deciding which of them is a submodule of other modules is definitely tricky. Similarly with encapsulation: knowing that you treat the “inside” and “outside” of modules differently doesn’t help when you don’t know where that boundary ought to go.

So let’s put OOP back where it belongs: embedded in an object-oriented process for simulating a problem. The process will produce as its output an executable model of the problem that produces solutions desired by…

Well, desired by whom? Extreme Programming and Agile Software Development, approaches to thinking about how to create software that were born out of a time when OOP was ascendant, would answer “by the customer”: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, they say.

Yes, if we’re working for money then we have to satisfy the customer. They’re the ones with the money in the first place, and if they aren’t then we have bigger problems than whether our software is satisfactory. But they aren’t the only people we have to satisfy, and Object-Oriented Design helps us to understand that.

If you’ve ever seen someone who was fully bought into the Unified Modelling Language (this is true for other ways of capturing the results of Object-Oriented Analysis but let’s stick to thinking about the UML for now), then you’ll have seen a Use Case diagram. This shows you “the system” as an abstract square, with stick figures for actors connected to “the system” via arrows annotated with use cases – descriptions of what the people are trying to do with the system.

We’ve already moved past the idea that the software and the customer are the only two things in the world, indeed the customer may not be represented in this use case diagram at all! What would that mean? That the customer is paying for the software so that they can charge somebody else for use of the software: that satisfying the customer is an indirect outcome, achieved by the action of satisfying the identified actors.

We’ve also got a great idea of what the scope will be. We know who the actors are, and what they’re going to try to do; therefore we know what information they bring to the system and what sort of information they will want to get out. We also see that some of the actors are other computer systems, and we get an idea of what we have to build versus what we can outsource to others.

In principle, it doesn’t really matter how this information is gathered: the fashion used to be for prolix use case documents, but these days shorter user stories (designed to act as placeholders for a conversation where the details can be worked out) are preferred. In practice the latter is better, because it takes into account that the act of doing some work changes the understanding of the work to be done. The later a decision can be made, the more you know at the point of making it.

On the other hand, the benefit of that UML approach, where it’s all stuck on one diagram, is that it makes commonalities and patterns clearer. It’s too easy to take six user stories, write them on six GitHub issues, then get six engineers to write six implementations, and hope that some kind of convergent design will assert itself during code review: or worse, that you’ll decide what the common design should have been in the retrospective, and add an “engineering story” to the backlog to be deferred forevermore.

The system, at this level of abstraction, is a black box. It’s the opaque “context” of the context diagram at the highest level of the C4 model. But that needn’t stop us thinking about how to code it! Indeed, Behaviour-Driven Development has us do exactly that. Once we’ve come to agreement over what some user story or use case means, we can encapsulate (there’s that word again) our understanding in executable form.

And now, the system-as-simulation nature of OOP finally becomes important. Because we can use those specifications-as-scripts to talk with the customer about what we need to do (“so you’re saying that given a new customer, when they put a product in their cart, they are shown a subtotal that is equal to the cost of that product?”), refining the understanding by demonstrating that understanding in executable form and inviting them to use a simulation of the final product. But because the output is an executable program that does what they want, that simulation can be the final product itself.

A common step when implementing Behaviour-Driven Development is to introduce a translation layer between “the specs” and “the production code”. So “a new user” turns into User.new.save!, and someone has to come along and write those methods (or at least inherit from ActiveRecord). Or worse, “a new user” turns into PUT /api/v0/internal/users, and someone has to implement both that and the system that backs it.
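To make the shape of that indirection concrete, here is a minimal sketch in Swift. The step registry and persistence types are hypothetical stand-ins, not any real BDD library; the point is the hand-written mapping from phrase to implementation.

    // A sketch of the translation-layer style. StepRegistry, Database and
    // UserRecord are hypothetical stand-ins for whichever tool and backend
    // do the mapping between spec phrases and production calls.
    struct UserRecord {}

    struct Database {
        static let shared = Database()
        func insert(_ record: UserRecord) {
            // Pretend to persist the record somewhere.
        }
    }

    final class StepRegistry {
        private var definitions: [String: () -> Void] = [:]
        func given(_ phrase: String, _ body: @escaping () -> Void) {
            definitions[phrase] = body
        }
        func run(_ phrase: String) {
            definitions[phrase]?()
        }
    }

    let steps = StepRegistry()
    steps.given("a new user") {
        // The English phrase is translated, by hand, into an implementation
        // detail: here a persistence call, elsewhere perhaps a PUT request.
        Database.shared.insert(UserRecord())
    }
    steps.run("a new user")

Somebody has to write every one of those mappings, and keep them in step with the wording of the specs.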

This translation step isn’t strictly necessary. “A new user” can be a statement in your software implementation; your specs can be both a simulation of what the system does and the system itself, and you save yourself a lot of typing and a potentially faulty translation.

There’s still a lot to Object-Oriented Analysis, but it roughly follows the well-worn path of Domain-Driven Design. Where we said “a new user”, everybody on the team should agree on what a “user” is, and there should be a concept (i.e. class) in the software domain that encapsulates (that word again!) this shared understanding and models it in executable form. Where we said the user “puts a product in their cart”, we all understand that products and carts are things both in the problem and in the software, and that a user can do a thing called “putting” which involves products, and carts, in a particular way.
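As a minimal sketch of that, in Swift, with type and method names taken from the conversation’s own vocabulary (they are assumptions for illustration, not lifted from any real project), the model and the spec can be the same handful of declarations:

    import Foundation
    import XCTest

    // The concepts we agree on with the customer become the concepts in the code.
    struct Product {
        let name: String
        let price: Decimal
    }

    struct Cart {
        private(set) var items: [Product] = []

        // “Putting” a product in a cart is a message the cart understands.
        mutating func put(_ product: Product) {
            items.append(product)
        }

        var subtotal: Decimal {
            items.reduce(0) { $0 + $1.price }
        }
    }

    struct User {
        var cart = Cart()
    }

    // The spec is written directly against those objects: no step definitions,
    // no API layer standing between the words and the model.
    final class ShoppingSpecs: XCTestCase {
        func testNewUserSeesSubtotalEqualToTheProductPrice() {
            // Given a new user,
            var user = User()
            // when they put a product in their cart,
            let tea = Product(name: "Tea", price: 3)
            user.cart.put(tea)
            // they are shown a subtotal equal to the cost of that product.
            XCTAssertEqual(user.cart.subtotal, tea.price)
        }
    }

The test reads like the sentence we agreed with the customer, and the objects it exercises are the (sketched) production model rather than a stand-in for it.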

If it’s not clear what all those things are or how they go together, we may wish to roleplay the objects in the program (which, because the program is a simulation of the things in the problem, means roleplaying those things). Someone is a user, someone is a product, someone is a cart. What does it mean for the user to add a product to their cart? What is “their” cart? How do these things communicate, and how are they changed by the experience?

We’re starting to peer inside the black box of the “system”, so in a future post we’ll take a proper look at Object-Oriented Design.

Posted in ooa/d

Episode 31: Apple CPU transitions

I contextualise the x86_64 to arm64 transition by talking about all the other times Apple have switched CPUs in their personal computers; the timeline is roughly 6502-m68k-ppc-ppc+i386-ppc64-i386-x86_64-arm64. Sorry, Newton fans!


Graham Lee Uses This

I’ve never been famous enough in tech circles to warrant a post on uses this, but the joy of running your own blog is that you get to indulge any narcissistic tendencies with no filter. So here we go!

The current setup

[Photo: the desktop setup]

This is the desktop setup. The idea behind this is that it’s a semi-permanent configuration, so I can get really comfortable, using the best components I have access to in order to provide a setup I’ll enjoy using. The main features are a Herman Miller Aeron chair, which I got at a steep discount by going for a second-generation fire sale chair, and an M1 Mac Mini coupled to a 24″ Samsung curved monitor, a Matias Tactile Pro 4 keyboard and a Logitech G502 mouse. There’s a Sandberg USB camera (which isn’t great, but works well enough if I use it via OBS’s virtual camera) and a Blue Yeti mic too. The headphones are Marshall Major III, and the Philips FX-10 is used as a Bluetooth stereo.

I do all my streaming (both Dos Amigans and [objc retain];) from this desk too, so all the other hardware you see is related to that. There are two Intel NUC devices (though one is mounted behind one of the monitors), one running FreeBSD (for GNUstep) and one Windows 10 (with WinUAE/Amiga Forever). The Ducky Shine 6 keyboard and Glorious Model O mouse are used to drive whichever box I’m streaming from, which connects to the other Samsung monitor via an AVerMedia HDMI capture device.

[Photo: the laptop setup]

The laptop setup is on a variable-height desk (Ikea SKARSTA), and this laptop is actually provided by my employer. It’s a 12″ MacBook Pro (Intel). The idea is that it should be possible to work here, and in fact at the moment I spend most of my work time at it; but it should also be very easy to grab the laptop and take it away. To that end, the stuff plugged into the USB hub is mostly charge cables, and the peripheral hardware is mostly wireless: Apple Magic Mouse and Keyboard, and a Corsair headset. A desk-mounted stand and a music-style stand hold the tablets I need for developing a cross-platform app at work.

And it happens that there’s an Amiga CD32 with its own mouse, keyboard, and joypad alongside: that mostly gets used for casual gaming.

The general principle

Believe it or not, the pattern I’m trying to conform to here is “one desktop, one laptop”. All those streaming and gaming things are appliances for specific tasks, they aren’t a part of my regular computering setup. I’ve been lucky to be able to keep to the “one desktop, one laptop” pattern since around 2004, usually using a combination of personal and work-supplied equipment, or purchased and handed-down. For example, the 2004-2006 setup was a “rescued from the trash” PowerMac 9600 and a handed-down G3 Wallstreet; both very old computers at that time, but readily affordable to a fresh graduate on an academic support staff salary.

The concept is that the desktop setup should be the one that is most immediate and comfortable, that if I need to spend a few hours computering I will be able to get on very well with. The laptop setup should make it possible to work, and I should be able to easily pick it up and take it with me when I need to do so.

For a long time, this meant something like “I can put my current Xcode project and a conference presentation on a USB stick, copy it to the laptop, then go to a conference to deliver my talk and hack on a project in the hotel room”. These days, ubiquitous wi-fi and cloud sync products remove some of the friction, and I can usually rely on my projects being available on the laptop at time of use (or being a small number of steps away).

I’ve never been a single-platform person. Sometimes “my desktop” is a Linux PC, sometimes a Mac; it’s even been a NeXTstation Turbo and a Sun Ultra workstation before. Sometimes “my laptop” is a Linux PC, sometimes a Mac; the most outré was that G3, which ran OpenDarwin for a time. The biggest ramification of that is that I’ve never got particularly deep into configuring my tools. It’s better for me to be able to find my way around a new vanilla system than it is to have a deep custom configuration that I understand really well but is difficult to port.

When Mac OS X had the csh shell as default, I used that. Then with 10.3 I switched to bash. Then with 10.15 I switched to zsh. My dotfiles repo has a git config, and a little .emacs that enables some org-mode plugins. But that’s it.

Posted in Mac, meta-waffle

On industry malaise

Robert Atkins linked to his post on industry malaise:

All over the place I see people who got their start programming with “view source” in the 2000s looking around at the state of web application development and thinking, “Hey wait a minute, this is a mess” […] On the native platform side, there’s no joy either.

This is a post from 2019, but shared in a “this is still valid” sense. To be honest, I think it is. I recognise those doldrums myself; Robert shared the post in reply to my own toot:

Honestly jealous of people who are still excited by new developments in software and have gone through wondering why, then through how to get that excitement back, now wondering if it’s possible that I ever will.

I’ve moved from thinking that it’s the industry that’s at fault to thinking that it’s me that’s at fault, and now that I know others feel the same way I can expand that from “me” to “us”.

I recognise the pattern. The idea that “we” used to do good work with computers until “we” somehow lost “our” way with “our” focus on trivialities like functional reactive programming or declarative UI technology, or actively hostile activities like adtech, blockchain, and cloud computing.

Yes, those things are all hostile, but are they unique to the current times? Were the bygone days with their shrink-wrapped “breaking this seal means agreeing to the license printed on the paper inside the sealed box” EULAs and their “can’t contact flexlm, quitting now” really so much better than the today times? Did we not get bogged down in trivialities like object-relational mapping and rewriting the world in PerlPHPPython?

It is true that “the kids today” haven’t learned all the classic lessons of software engineering. We didn’t either, and there will soon be a century’s worth of skipped classes to catch up on. That stuff doesn’t need ramming into every software engineer’s brain, like they’re Alex from A Clockwork Orange. It needs contextualising.

A clear generational difference in today’s software engineering is what we think of Agile. Those of us who lived through—or near—the transition remember the autonomy we gained, and the liberation from heavyweight, management-centric processes that were all about producing collateral for executive sign-off and not at all about producing working software that our customers valued. People today think it’s about having heavyweight processes with daily status meetings that suck the life out of the team. Fine, things change, it’s time to move forward. But contextualise the Agile movement, so that people understand at least what moving backward would look like.

So some of this malaise will be purely generational. Some of us have aged/grown/tired out of being excited about every new technology, and see people being excited about every new technology as irrelevant or immature. Maybe it is irrelevant, but if so it probably was when we were doing it too: nothing about the tools we grew up with was any more timeless than today’s.

Some of it will also be generational, but for very different reasons. Some fraction of us who were junior engineers a decade or two ago will be leads, principals, heads of division or whatever now, and responsible for the big picture, and not willing to get caught up in the minutiae of whether this buggy VC-backed database that some junior heard about at code club will get sunset before that one. We’d rather use postgres, because we knew it back then and know it now. Well, if you’re in that boat, congratulations on the career progression, but it’s now your job to make those big picture decisions, make them compelling, and convince your whole org to side with you. It’s hard, but you’re paid way more than you used to get and that’s how this whole charade works.

Some of it is also frustration. I certainly sense this one. I can pretend I understood my 2006-vintage iBook. I didn’t understand the half of it, but I understood enough to claim some kind of system-level comfort. I had (and read: that was a long flight) the Internals book so I understood the kernel. The Unix stuff is a Unix system, I know this! And if you ignore classic, carbon, and a bunch of programming languages that came out of the box, I knew the frameworks and developer tools too. I understood how to do security on that computer well enough that Apple told you to consider reading my excellent book. But it turns out that they just wouldn’t fucking sit still for a decade, and I no longer understand all of that technology. I don’t understand my M1 Mac Mini. That’s frustrating, and makes me feel stupid.

So yes, there is widespread malaise, and yes, people are doing dumb, irrelevant, or evil things in the name of computering. But mostly it’s just us.

As the kids these days say, please like and subscribe.

Posted in advancement of the self

My current host name scheme at home is characters from the film Tron. So I have:

Laptop: flynn (programmer, formerly at Encom, and arcade owner)

Desktop: yori (programmer at Encom)

TV box: dumont (runs the I/O terminal)

Watch: bit (a bit)

Windows computer: dillinger (the evil corporate suit)

Posted by Graham

Episode 30: Digital Autonomy and Software Freedom

In which I posit that software freedom is a small part of a bigger picture.


On UML

A little context: I got introduced to UML in around 2008, at an employer who had a site licence for Enterprise Architect. I was sent on a training course run by a company that no longer exists called Sun Microsystems: every day for a week I would get on a coach to Marble Arch, then take the central line over to Shoreditch (they were very much ahead of their time, Sun Microsystems) and learn how to decompose systems into objects and represent the static and dynamic properties of these objects on class diagrams, activity diagrams, state diagrams, you name it.

I got a bye on some of the uses of Enterprise Architect at work. Our Unix team was keeping its UML diagrams in configuration management, round-tripping between the diagrams and C++ code to make sure everything was in sync. Because Enterprise Architect didn’t have round-trip support for Objective-C (it still doesn’t), and I was the tech lead for the Mac team, I wasn’t expected to do this.

This freed me from some of the more restrictive constraints imposed on other UML-using teams. My map could be a map, not the territory. Whereas the Unix folks had to show how every single IThing had a ThingImpl so that their diagrams correctly generated the PImpl pattern, my diagrams were free to show only the information relevant to their use as diagrams. Because the diagrams didn’t need to be in configuration management alongside the source, I was free to draw them on whiteboards if I was by a whiteboard and not by the desktop computer that had my installation of Enterprise Architect.

Even though I’d been working with Objective-C for somewhere around six years at this point, this training course along with the experience with EA was the thing that finally made the idea of objects click. I had been fine before with what would now be called the Massive View Controller pattern, but was then the Massive App Delegate pattern (MAD software design, if you’ll excuse the ableism).

Apple’s sample code all puts the outlets and actions on the app delegate, so why can’t I? The Big Nerd Ranch book does it, so why can’t I? Oh, the three of us all editing that file has got unwieldy? OK, well let’s make App Delegate categories for the different features in the app and move the methods there.

Engaging with object-oriented design, as distinct from object-oriented programming, let me move past that, and helped me to understand why I might want to define my own classes, and objects that collaborate with each other. It helped me to understand what those classes and objects could be used for (and re-used for).

Of course, these days UML has fallen out of fashion (I still use it, though I’m more likely to have PlantUML installed than EA). In these threads and the linked posts two extreme opinions—along with quite a few in between—are found.

The first is that UML (well not UML specifically, but OO Analysis and Design) represents some pre-lapsarian school of thought from back when programmers used to think, and weren’t just shitting javascript into containers at ever-higher velocities. In this school, UML is something “we” lost along the way when we stopped doing software engineering properly, in the name of agile.

The second is that UML is necessarily part of the heavyweight waterfall go-for-two-years-then-fail-to-ship project management paradigm that The Blessed Cunningham did away with in the Four Commandments of the agile manifesto. Thou shalt not make unto yourselves craven comprehensive documentation!

Neither is true. “We” had a necessary (due to the dot-com recession) refocus on the idea that this software is probably supposed to be for something, that the person we should ask about what it’s for is probably the person who’s paying for it, and we should probably show them something worth their money sooner rather than later. Many people who weren’t using UML before the fall/revelation still aren’t. Many who were using it at management’s behest are no longer. Many who were because they liked it still are.

But when I last taught object-oriented analysis and design (as distinct, remember, from object-oriented programming), which was in March 2020, the tool to reach for was the UML (we used Visual Paradigm, not PlantUML or EA). It is perhaps not embarrassing that the newest tool for the job is from the 1990s (after all, people still teach Functional Programming which is even older). It is perhaps unfortunate that no design (sorry, “emergent” design) is the best practice that has replaced it.

On the other hand, by many quantitative metrics, software is still doing fine, and the whole UML exercise was only a minority pursuit at its peak.

Posted in agile, architecture of sorts, design, software-engineering, sunw, tool-support

Episode 29: Return to Software Engineering

This episode is about technology, culture, community, and career. No links because it’s mostly a personal history and there’s only so much self-promotion I can fit in one episode.

That said! If you enjoy this podcast and would like to support me in producing material for the software engineering community, please consider becoming a Patron. No obligation, but thank you for considering it!


I don’t use version control when I’m writing

Or rather, I do use version control when I’m writing, and it isn’t helpful.

I’m currently studying for a PhD, and I have around 113k words of notes in a git repository. I also have countless words of notes in a Zotero database and a reMarkable tablet. I don’t particularly miss git when I’m not storing notes in my repository.

A lot of the commit messages in that repository aren’t particularly informative. “Update literature search”, “meeting notes from today”, “meeting notes”, “rewrite introduction”. So unlike in software, where I have changes like “create the ubiquitous documents folder if it doesn’t already exist” and “fix type mismatch in document delegate conformance”, I don’t really have atomic changes in long-form writing.

Indeed, that’s not how I write. I usually set out either to produce an argument, or to improve an existing one. Not to add a specific point that I hadn’t thought of before, not to improve layout or structure in any specific way, not to fix particular problems. So I’m not “adding features” or “fixing bugs” in the same atomic way that I would in software, and don’t end up with a revision history comprising multiple atomic commits.

Some of my articles—this one included—have no checkpoints in their history at all. Others, including posts on De Programmatica Ipsum and journal articles, have a dozen or more checkpoints, but only because I “saved a draft” when I stepped away from the computer, not because there were meaningful atomic increments. I would never revert a change in an article when I’m writing; I’d always fix forward. I’d never introduce a new idea on a branch; there’s always a linear flow.

Posted in writing