Episode 31: Apple CPU transitions

I contextualise the x86_64 to arm64 transition by talking about all the other times Apple have switched CPUs in their personal computers; the timeline is roughly 6502-m68k-ppc-ppc+i386-ppc64-i386-x86_64-arm64. Sorry, Newton fans!


Graham Lee Uses This

I’ve never been famous enough in tech circles to warrant a post on Uses This, but the joy of running your own blog is that you get to indulge any narcissistic tendencies with no filter. So here we go!

The current setup

[Photo: the desktop setup]

This is the desktop setup. The idea behind this is that it’s a semi-permanent configuration so I can get really comfortable, using the best components I have access to in order to provide a setup I’ll enjoy using. The main features are a Herman Miller Aeron chair, which I got at a steep discount by going for a second-generation fire sale chair, and an M1 Mac Mini coupled to a 24″ Samsung curved monitor, a Matias Tactile Pro 4 keyboard and a Logitech G502 mouse. There’s a Sandberg USB camera (which isn’t great, but works well enough if I use it via OBS’s virtual camera) and a Blue Yeti mic too. The headphones are Marshall Major III, and the Philips FX-10 is used as a Bluetooth stereo.

I do all my streaming (both Dos Amigans and [objc retain];) from this desk too, so all the other hardware you see is related to that. There are two Intel NUC devices (though one is mounted behind one of the monitors), one running FreeBSD (for GNUstep) and one Windows 10 (with WinUAE/Amiga Forever). The Ducky Shine 6 keyboard and Glorious Model O mouse are used to drive whichever box I’m streaming from, which connects to the other Samsung monitor via an AVerMedia HDMI capture device.

[Photo: the laptop setup]

The laptop setup is on a variable-height desk (Ikea SKARSTA), and this laptop is actually provided by my employer. It’s a 12″ MacBook Pro (Intel). The idea is that it should be possible to work here, and in fact at the moment I spend most of my work time at it; but it should also be very easy to grab the laptop and take it away. To that end, the stuff plugged into the USB hub is mostly charge cables, and the peripheral hardware is mostly wireless: Apple Magic Mouse and Keyboard, and a Corsair headset. A desk-mounted stand and a music-style stand hold the tablets I need for developing a cross-platform app at work.

And it happens that there’s an Amiga CD32 with its own mouse, keyboard, and joypad alongside: that mostly gets used for casual gaming.

The general principle

Believe it or not, the pattern I’m trying to conform to here is “one desktop, one laptop”. All those streaming and gaming things are appliances for specific tasks, they aren’t a part of my regular computering setup. I’ve been lucky to be able to keep to the “one desktop, one laptop” pattern since around 2004, usually using a combination of personal and work-supplied equipment, or purchased and handed-down. For example, the 2004-2006 setup was a “rescued from the trash” PowerMac 9600 and a handed-down G3 Wallstreet; both very old computers at that time, but readily affordable to a fresh graduate on an academic support staff salary.

The concept is that the desktop setup should be the one that is most immediate and comfortable, that if I need to spend a few hours computering I will be able to get on very well with. The laptop setup should make it possible to work, and I should be able to easily pick it up and take it with me when I need to do so.

For a long time, this meant something like “I can put my current Xcode project and a conference presentation on a USB stick, copy it to the laptop, then go to a conference to deliver my talk and hack on a project in the hotel room”. These days, ubiquitous wi-fi and cloud sync products remove some of the friction, and I can usually rely on my projects being available on the laptop at time of use (or being a small number of steps away).

I’ve never been a single-platform person. Sometimes “my desktop” is a Linux PC, sometimes a Mac; it’s even been a NeXTstation Turbo and a Sun Ultra workstation before. Sometimes “my laptop” is a Linux PC, sometimes a Mac; the most outré was that G3, which ran OpenDarwin for a time. The biggest ramification of that is that I’ve never got particularly deep into configuring my tools. It’s better for me to be able to find my way around a new vanilla system than it is to have a deep custom configuration that I understand really well but is difficult to port.

When Mac OS X had tcsh as the default shell, I used that. Then with 10.3 I switched to bash. Then with 10.15 I switched to zsh. My dotfiles repo has a git config, and a little .emacs that enables some org-mode plugins. But that’s it.

Posted in Mac, meta-waffle

On industry malaise

Robert Atkins linked to his post on industry malaise:

All over the place I see people who got their start programming with “view source” in the 2000s looking around at the state of web application development and thinking, “Hey wait a minute, this is a mess” […] On the native platform side, there’s no joy either.

This is a post from 2019, but shared in a “this is still valid” sense. To be honest, I think it is. I recognise those doldrums myself; Robert shared the post in reply to my own toot:

Honestly jealous of people who are still excited by new developments in software and have gone through wondering why, then through how to get that excitement back, now wondering if it’s possible that I ever will.

I’ve spent long enough moving from thinking that it’s the industry that’s at fault to thinking it’s me that’s at fault, and now I know others feel the same way I can expand that from “me” to “us”.

I recognise the pattern. The idea that “we” used to do good work with computers until “we” somehow lost “our” way with “our” focus on trivialities like functional reactive programming or declarative UI technology, or actively hostile activities like adtech, blockchain, and cloud computing.

Yes, those things are all hostile, but are they unique to the current times? Were the bygone days with their shrink-wrapped “breaking this seal means agreeing to the license printed on the paper inside the sealed box” EULAs and their “can’t contact flexlm, quitting now” really so much better than the today times? Did we not get bogged down in trivialities like object-relational mapping and rewriting the world in PerlPHPPython?

It is true that “the kids today” haven’t learned all the classic lessons of software engineering. We didn’t either, and there will soon be a century’s worth of skipped classes to catch up on. That stuff doesn’t need ramming into every software engineer’s brain, like they’re Alex from A Clockwork Orange. It needs contextualising.

A clear generational difference in today’s software engineering is what we think of Agile. Those of us who lived through—or near—the transition remember the autonomy we gained, and the liberation from heavyweight, management-centric processes that were all about producing collateral for executive sign-off and not at all about producing working software that our customers valued. People today think it’s about having heavyweight processes with daily status meetings that suck the life out of the team. Fine, things change, it’s time to move forward. But contextualise the Agile movement, so that people understand at least what moving backward would look like.

So some of this malaise will be purely generational. Some of us have aged/grown/tired out of being excited about every new technology, and see people being excited about every new technology as irrelevant or immature. Maybe it is irrelevant, but if so it probably was when we were doing it too: nothing about the tools we grew up with was any more timeless than today’s.

Some of it will also be generational, but for very different reasons. Some fraction of us who were junior engineers a decade or two ago will be leads, principals, heads of division or whatever now, and responsible for the big picture, and not willing to get caught up in the minutiae of whether this buggy VC-backed database that some junior heard about at code club will get sunset before that one. We’d rather use postgres, because we knew it back then and know it now. Well, if you’re in that boat, congratulations on the career progression, but it’s now your job to make those big picture decisions, make them compelling, and convince your whole org to side with you. It’s hard, but you’re paid way more than you used to get and that’s how this whole charade works.

Some of it is also frustration. I certainly sense this one. I can pretend I understood my 2006-vintage iBook. I didn’t understand the half of it, but I understood enough to claim some kind of system-level comfort. I had (and read: that was a long flight) the Internals book so I understood the kernel. The Unix stuff is a Unix system, I know this! And if you ignore classic, carbon, and a bunch of programming languages that came out of the box, I knew the frameworks and developer tools too. I understood how to do security on that computer well enough that Apple told you to consider reading my excellent book. But it turns out that they just wouldn’t fucking sit still for a decade, and I no longer understand all of that technology. I don’t understand my M1 Mac Mini. That’s frustrating, and makes me feel stupid.

So yes, there is widespread malaise, and yes, people are doing dumb, irrelevant, or evil things in the name of computering. But mostly it’s just us.

As the kids these days say, please like and subscribe.

Posted in advancement of the self

My current host name scheme at home is characters from the film Tron. So I have:

Laptop: flynn (programmer, formerly at Encom, and arcade owner)

Desktop: yori (programmer at Encom)

TV box: dumont (runs the I/O terminal)

Watch: bit (a bit)

Windows computer: dillinger (the evil corporate suit)

Posted by Graham

Episode 30: Digital Autonomy and Software Freedom

In which I posit that software freedom is a small part of a bigger picture.


On UML

A little context: I got introduced to UML in around 2008, at an employer who had a site licence for Enterprise Architect. I was sent on a training course run by a company that no longer exists called Sun Microsystems: every day for a week I would get on a coach to Marble Arch, then take the Central line over to Shoreditch (they were very much ahead of their time, Sun Microsystems) and learn how to decompose systems into objects and represent the static and dynamic properties of these objects on class diagrams, activity diagrams, state diagrams, you name it.

I got a bye on some of the uses of Enterprise Architect at work. Our Unix team was keeping its UML diagrams in configuration management, round-tripping between the diagrams and C++ code to make sure everything was in sync. Because Enterprise Architect didn’t have round-trip support for Objective-C (it still doesn’t), and I was the tech lead for the Mac team, I wasn’t expected to do this.

This freed me from some of the more restrictive constraints imposed on other UML-using teams. My map could be a map, not the territory. Whereas the Unix folks had to show how every single IThing had a ThingImpl so that their diagrams correctly generated the PImpl pattern, my diagrams were free to show only the information relevant to their use as diagrams. Because the diagrams didn’t need to be in configuration management alongside the source, I was free to draw them on whiteboards if I was by a whiteboard and not by the desktop computer that had my installation of Enterprise Architect.

Even though I’d been working with Objective-C for somewhere around six years at this point, this training course along with the experience with EA was the thing that finally made the idea of objects click. I had been fine before with what would now be called the Massive View Controller pattern but then was the Massive App Delegate pattern (MAD software design, if you’ll excuse the ableism).

Apple’s sample code all puts the outlets and actions on the app delegate, so why can’t I? The Big Nerd Ranch book does it, so why can’t I? Oh, the three of us all editing that file has got unwieldy? OK, well let’s make App Delegate categories for the different features in the app and move the methods there.

Engaging with object-oriented design, as distinct from object-oriented programming, let me move past that, and helped me to understand why I might want to define my own classes, and objects that collaborate with each other. It helped me to understand what those classes and objects could be used for (and re-used for).

Of course, these days UML has fallen out of fashion (I still use it, though I’m more likely to have PlantUML installed than EA). In these threads and the linked posts two extreme opinions—along with quite a few in between—are found.

The first is that UML (well not UML specifically, but OO Analysis and Design) represents some pre-lapsarian school of thought from back when programmers used to think, and weren’t just shitting javascript into containers at ever-higher velocities. In this school, UML is something “we” lost along the way when we stopped doing software engineering properly, in the name of agile.

The second is that UML is necessarily part of the heavyweight waterfall go-for-two-years-then-fail-to-ship project management paradigm that The Blessed Cunningham did away with in the Four Commandments of the agile manifesto. Thou shalt not make unto yourselves craven comprehensive documentation!

Neither is true. “We” had a necessary (due to the dot-com recession) refocus on the idea that this software is probably supposed to be for something, that the person we should ask about what it’s for is probably the person who’s paying for it, and we should probably show them something worth their money sooner rather than later. Many people who weren’t using UML before the fall/revelation still aren’t. Many who were doing it at management’s behest are no longer. Many who were because they liked it still are.

But when I last taught object-oriented analysis and design (as distinct, remember, from object-oriented programming), which was in March 2020, the tool to reach for was the UML (we used Visual Paradigm, not plantuml or EA). It is perhaps not embarrassing that the newest tool for the job is from the 1990s (after all, people still teach Functional Programming which is even older). It is perhaps unfortunate that no design (sorry, “emergent” design) is the best practice that has replaced it.

On the other hand, by many quantitative metrics, software is still doing fine, and the whole UML exercise was only a minority pursuit at its peak.

Posted in agile, architecture of sorts, design, software-engineering, sunw, tool-support

Episode 29: Return to Software Engineering

This episode is about technology, culture, community, and career. No links because it’s mostly a personal history and there’s only so much self-promotion I can fit in one episode.

That said! If you enjoy this podcast and would like to support me in producing material for the software engineering community, please consider becoming a Patron. No obligation, but thank you for considering it!


I don’t use version control when I’m writing

Or rather, I do use version control when I’m writing, and it isn’t helpful.

I’m currently studying for a PhD, and I have around 113k words of notes in a git repository. I also have countless words of notes in a Zotero database and a reMarkable tablet. I don’t particularly miss git when I’m not storing notes in my repository.

A lot of the commit messages in that repository aren’t particularly informative. “Update literature search”, “meeting notes from today”, “meeting notes”, “rewrite introduction”. So unlike in software, where I have changes like “create the ubiquitous documents folder if it doesn’t already exist” and “fix type mismatch in document delegate conformance”, I don’t really have atomic changes in long-form writing.

Indeed, that’s not how I write. I usually set out either to produce an argument, or to improve an existing one. Not to add a specific point that I hadn’t thought of before, not to improve layout or structure in any specific way, not to fix particular problems. So I’m not “adding features” or “fixing bugs” in the same atomic way that I would in software, and don’t end up with a revision history comprising multiple atomic commits.

Some of my articles—this one included—have no checkpoints in their history at all. Others, including posts on De Programmatica Ipsum and journal articles, have a dozen or more checkpoints, but only because I “saved a draft” when I stepped away from the computer, not because there were meaningful atomic increments. I would never revert a change in an article when I’m writing, I’d always fix forward. I’d never introduce a new idea on a branch, there’s always a linear flow.

Posted in writing

By doing it and helping others do it

We are uncovering better ways of developing
software by doing it and helping others do it.

It’s been 20 years since those words were published in the manifesto for agile software development, and capital-A Agile methods haven’t really been supplanted. Despite another two decades of doing it and helping others do it.

That seems problematic.

Posted in agile

What Xamarin.Forms taught me about data bindings

I’ve spent about a year working on an app for a group in the University where I work that needed to be available on both Android and iOS. I’ve got a bit of experience working with the Apple-supplied SDKs on iOS, and a teensy amount of experience working with the Google-supplied SDKs on Android. Writing two apps is obviously an option, but not one I took very seriously. The other thing I’ve reached for before in this situation is React Native, where I’ve got a little experience but quite a bit of understanding, having worked with React some.

Anyway, this project was a mobile companion for a desktop app written in C# and Windows Forms, and the client was going to have to pick up development at the end of my engagement. So I decided that the best approach for the client was to learn how to do it in Xamarin.Forms, and give them a C# project they could understand at the end. I also hoped there’d be an opportunity to share some code from the desktop software in the mobile project, though this didn’t pan out in the end.

It took a while to understand the centrality of the Model-View-ViewModel idea and how to get it to work with the code I was writing, rather than bludgeoning it into what I was trying to do. Ultimately lots of X.F works with data bindings, where you say “this thing and that thing are connected” and so your view needs a that thing so it can display this thing. If the that thing isn’t in the right shape, is derived somehow, or shouldn’t be committed to the model until some other things are done, the ViewModel sits in the middle and separates the two.
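
As a sketch of that “sits in the middle” role (Person and PersonEditorViewModel are made-up names for illustration, not types from the app in question):

// The model object, as stored and committed.
public class Person
{
  public string Name { get; set; }
  public int AgeInYears { get; set; }
}

// The ViewModel exposes values in the shape the view wants, and only
// pushes edits back to the model when the user confirms them.
public class PersonEditorViewModel
{
  private readonly Person _person;

  public PersonEditorViewModel(Person person)
  {
    _person = person;
    Name = person.Name;
    AgeInYears = person.AgeInYears;
  }

  // Pending edits live on the ViewModel until Commit() is called.
  public string Name { get; set; }
  public int AgeInYears { get; set; }

  // Derived: the view binds to a display string rather than the raw fields.
  public string Summary => $"{Name} ({AgeInYears})";

  public void Commit()
  {
    _person.Name = Name;
    _person.AgeInYears = AgeInYears;
  }
}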

I’m used to this model in a couple of contexts, and I’ll give Objective-C examples of each because that’s how old I am. In web applications, you can use data bindings to fill in parts of an HTML document based on values from a server-side object. WebObjects works this way (Rails doesn’t, it uses code to fill in parts of its templates). The difference between web app data bindings and mobile app data bindings is one of lifecycle. Your value needs to be read once when the page (or XHR) is rendered, and stored once when the user posts the changes. This is so simple that you can use straightforward accessor methods for it. It also happens at a time when loading new content is happening, so any timing constraints are likely to come from elsewhere.

You can also do it in what, because I’m that old, I’ll call rich client applications, like Xamarin.Forms mobile apps or Cocoa Bindings desktop apps. Here, anything could be happening at any time: a worker thread could be updating the model, and the user could interact with the UI, all at the same time, potentially multiple times while a UI element is live. So you can’t just wait until the Submit button is pressed to update everything, you need to track and reflect updates when they happen.

Given a dynamic language like Objective-C, you can say “bind this thing to that thing with these options” and the binding library can rewrite your accessors for this thing and that thing to update the other when changes happen, and avoid circular updates. You can’t do that in C# because apparently more typing is easier to reason about, so you end up replicating the below pattern rather a lot.

// INotifyPropertyChanged lives in System.ComponentModel
using System.ComponentModel;

public class MyThingViewModel : INotifyPropertyChanged
{
  // Raised whenever a bound property changes, so the binding engine can refresh the view
  public event PropertyChangedEventHandler PropertyChanged;
  // ...
  private string _value;
  public string Value
  {
    get => _value;
    set
    {
      _value = value;
      // Tell listeners (usually the binding engine) which property just changed
      PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Value)));
    }
  }
}

And when I say “rather a lot”, I mean that in this one app the boilerplate appears at least 126 times. That undercounts, because despite being public, the PropertyChanged event can only be invoked by instances of the declaring class: if a subclass adds any properties or any change points, you’re going to write protected helper methods to be able to invoke the event from the subclass.
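
The shape of that helper, as a minimal sketch (BaseViewModel and OnPropertyChanged are my names here, not the app’s):

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class BaseViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  // Subclasses can't invoke PropertyChanged directly, so expose a protected raiser.
  // [CallerMemberName] fills in the property name automatically when called from a setter.
  protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
  {
    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
  }
}

public class DerivedViewModel : BaseViewModel
{
  private string _name;
  public string Name
  {
    get => _name;
    set { _name = value; OnPropertyChanged(); }  // raises the event via the base class helper
  }
}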

Let’s pivot to investigating another question: why is Cocoa Bindings a desktop-only thing from Apple? I’ve encountered two problems in using it on Xamarin: thread confinement and performance. Thread confinement is a problem anywhere, but the performance issues are more sensitive on mobile, particularly on 2007-era mobile when this decision was made, and I can imagine a choice was made between “give developers the tools to identify and fix this issue” and “don’t give developers the chance to encounter this issue” back when UIKit was designed. Neither X.F nor UIKit is wrong in its particular choice; they’ve just chosen differently.

UI updates have to happen on the UI thread, probably because UIKit is Cocoa, Cocoa is AppKit, and AppKit ran on an OS that didn’t give you an easy way to do multiple threads per task. But this has to happen on Android too. And also performance. Anyway, theoretically any of those 126 invocations of PropertyChanged that could be bound to a view (so all of them, because separation of concerns) should be MainThread.BeginInvokeOnMainThread(() => {PropertyChanged?.Invoke(...)});, because what if the value is updated in an async method or a Task? Otherwise, something between a crash and unexpected behaviour will happen.
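
A sketch of what that looks like in practice, assuming the Xamarin.Essentials MainThread helper is available (the CounterViewModel and Count names are illustrative, not from the app):

using System.ComponentModel;
using Xamarin.Essentials;

public class CounterViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  private int _count;
  public int Count
  {
    get => _count;
    set
    {
      _count = value;
      // The setter might run on a worker thread (an async method, a Task, ...),
      // but anything bound to this property will update UI, so marshal the
      // notification onto the UI thread.
      MainThread.BeginInvokeOnMainThread(() =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Count))));
    }
  }
}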

The performance problem is this: any change to a property can easily cause an unknown amount of code to run, quite often on the UI thread. For example, my app has a data grid (i.e. spreadsheet-like table view) with a “selection” column containing switches. There’s a “select all” button, and a report of the number of selected objects, outside the grid. Pressing “select all” selects all of the objects. Each one notifies observers that its IsSelected property has changed, which is watched by the list view model to update the selection count, and by the data grid to update the switches. So if there’s one row in the grid, selecting all causes two main-thread UI updates. If there are 500 rows, then 1000 updates need to run on the main thread in response to that one button action.

That can get slow :). Before I understood how to fix this, some UI actions would block the UI for tens of seconds while they computed the update. I asked about this in some forums and was told the answer is “your users shouldn’t have that much data in a mobile app, design an app with less data”, which is not that helpful. But luckily the folks over at Syncfusion were much more empathetic, and told me the real solution is to design your views and view models such that you can turn off updates while you’re doing some big change, then turn them back on and recalculate the state at the end.
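
Roughly, that fix looks like this (GridViewModel, RowViewModel and their members are hypothetical stand-ins for the real view models, and BaseViewModel is the helper class sketched earlier):

using System.Collections.Generic;
using System.Linq;

public class RowViewModel : BaseViewModel
{
  private bool _isSelected;
  public bool IsSelected
  {
    get => _isSelected;
    set { _isSelected = value; OnPropertyChanged(); }
  }
}

public class GridViewModel : BaseViewModel
{
  public List<RowViewModel> Rows { get; } = new List<RowViewModel>();
  private bool _suppressNotifications;

  // Derived value shown outside the grid; the selection-count label binds to this.
  public int SelectedCount => Rows.Count(r => r.IsSelected);

  public void AddRow(RowViewModel row)
  {
    // Normally every row change triggers a recount, and therefore a UI update.
    row.PropertyChanged += (sender, args) =>
    {
      if (!_suppressNotifications && args.PropertyName == nameof(RowViewModel.IsSelected))
        OnPropertyChanged(nameof(SelectedCount));
    };
    Rows.Add(row);
  }

  public void SelectAll()
  {
    _suppressNotifications = true;            // turn updates off during the bulk change
    foreach (var row in Rows)
      row.IsSelected = true;
    _suppressNotifications = false;           // turn them back on...

    OnPropertyChanged(nameof(SelectedCount)); // ...and recalculate once at the end
  }
}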

Like I say, it’s likely that someone at Apple already knew this from the Cocoa Bindings times and decided “here’s a great technology, and here’s how to turn it off because it will get in your way” wasn’t a cool story.

Posted in mono, msft