Another non-year of Desktop Linux

Let’s look at other software on the desktop to understand why there isn’t Linux on the desktop (as a broad, popular platform), and then how there could be.

Over on De Programmatica Ipsum I discussed the difference between the platform business model and the technology platform. In the platform model, the business acts as a matchmaking agent, connecting customers to vendors. An agricultural market is a platform, where stud farmers can meet dairy farmers to sell cattle, for example.

Meanwhile, when a technology platform is created by a business, it enables a two-sided business model. The (technology) platform vendor sells their APIs to developers as a way of making applications. They sell their technology to consumers with the fringe benefit that these third-party applications are available to augment the experience. The part of the business that truly follows the platform model is the App Store, but app stores came late, as an effort to capture a share of the (existing) developer-consumer sales revenue, and they don’t really make the vendors platform businesses.

In fact, I’m going to drop the word platform now, as it has these two different meanings. I’ll say “store” or “App Store” when I’m talking about a platform business in software, and “stack” or “software stack” when I’m talking about a platform technology model.

Stack vendors have previously been very protective of their stack, trying to fend off alternative technologies that allow consumers to take their business elsewhere. Microsoft famously “poisoned” Java, an early and capable cross-platform application API, by bundling their own runtime that deliberately made Java applications run poorly. Apple famously added a clause to their store rules that forbade any applications made using off-stack technology.

Both of these situations are now in the past: Microsoft have even embraced some cross-platform technology options, making heavy use of Electron in their own applications and even integrating the Chromium rendering engine into their own browser, to increase compatibility with cross-platform technology and reduce the cost of supporting those websites and applications made with JavaScript. Apple have abandoned that “only” clause in their rules, replacing it with a collection of “but also” rules: yes, you can make your applications out of whatever you want, but they have to support sign-in and payment mechanisms unique to their stack. So a cross-stack app is, de jure, better integrated into Apple’s sandpit.

These actions show us how these stack vendors expect people to switch stacks: they find a compelling application, they use it, they discover that this application works better or is better integrated on another stack, and so they change to it. If you’re worried about that, then you block those applications so that your customers can’t discover them. If you’re not worried about that, then you allow the technologies, and rely on the fact that applications are commodities and nobody is going to find a “killer app” that makes them switch.

Allowing third-party software on your own stack (cross-stack or otherwise) comes with the risk that people are only buying your technology as an incidental choice to run something else, and that if it disappears from your stack, those customers might go away to somewhere it is available. Microsoft have pulled that threat out of their briefcase before, settling a legal suit with Apple after suggesting that they would remove Word and Excel from the Mac stack.

That model of switching explains why companies that are otherwise competitors seem willing to support one another by releasing their own applications on each other’s stacks. Apple and Microsoft are in competition, and we’ve already seen that Microsoft’s applications give them leverage over Apple: they also allow Apple customers to be fringe players in the Microsoft sandpit, which may lead them to switch (for example when they see how much easier it is for their Windows-using colleagues to use all of the Microsoft collaboration tools their employers use). But Apple’s applications also give them leverage over Microsoft: the famed “halo effect” of Mac sales being driven by the iPod fits this model. You buy an iPod because it’s cool, and you use iTunes for Windows. Then you see how much better iTunes for Mac works, and your next computer is a Mac. The application is a gateway to the stack.

What has all of this got to do with desktop Linux? Absolutely nothing, and that’s my point. There’s never been a “halo effect” for the Free Software world because there’s never been a nucleus around which that halo can form. The bazaar model does a lot to ensure that. Let’s take a specific example: for many people, Thunderbird is the best email client you can possibly get. It also exists on multiple stacks, so it has the potential to be a “gateway” to desktop Linux.

But it won’t be. The particular bazaar hawkers working on Thunderbird don’t have any particular commitment to the rest of the desktop Linux stack: they’re not necessarily against it, but they’re not necessarily for it either. If there’s an opportunity to make Thunderbird better on Windows, anybody can contribute to exploit that opportunity. At best, Thunderbird on desktop Linux will be as good as Thunderbird anywhere else. Similarly, the people in the Nautilus file manager area of the bazaar have no particular commitment to tighter integration with Thunderbird, because their users might be using GNUMail or Evolution.

At one extreme, the licences of software in the bazaar dissuade switching, too. Let’s say that CUPS, the common UNIX printing subsystem, is the best way to do printing on any platform. Does that mean that, say, Mac users with paper-centric workflows or lifestyles will be motivated to switch to desktop Linux, to get access to CUPS? No, it means Apple will take advantage of the CUPS licence to integrate it into their stack, giving them access to the technology.

The only thing the three big stack vendors seem to agree on when it comes to free software licensing is that the GPL version 3 family of licences is incompatible with their risk appetites, particularly their weaponised patent portfolios. So that points to a way to avoid the second of these problems blocking a desktop Linux “halo effect”. Were there a GPL3 killer app, the stack vendors probably wouldn’t pick it up and integrate it. Of course, with no software patent protection, they’d be able to reimplement it without problem.

But even with that dissuasion, we still find that the app likely wouldn’t be a better experience on a desktop Linux stack than on Mac, or on Windows. There would be no halo, and there would be no switchers. Well, not no switchers, but probably no more switchers.

Am I minimising the efforts at consistency and integration made by the big free software desktop projects, KDE and GNOME? I don’t think so. I’ve used both over the years, and I’ve used other desktop environments for UNIX-like systems (please may we all remember CDE so that we never repeat it). They are good, they are tightly integrated, and thanks to the collaboration on specifications in the Free Desktop Project they’re also largely interoperable. What they aren’t is breakout. Where Thunderbird is a nucleus without a halo, Evolution is a halo without a nucleus: it works well with the other GNOME tools, but it isn’t a lighthouse attracting users from, say, Windows, to ditch the rest of their stack for GNOME on Linux.

Desktop Linux is a really good desktop stack. So is, say, the Mac. You could get on well with either, but unless you’ve got a particular interest in free software, or a particular frustration with Apple, there’s no reason to switch. Many people do not have that interest or frustration.


SICPers podcast episode 10

This episode is all about build systems! Mostly about the problems associated with the venerable ./configure; make; make install process. This expands on a section I wrote in APPropriate Behaviour.
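
If you haven’t met it, that’s the classic way of building and installing a package from a source release. A generic sketch, with a placeholder package name:

  # The venerable three-step dance: each step is a separate tool with its
  # own failure modes, which is much of what the episode digs into.
  # "example-1.0" is a placeholder, not a real package.
  tar xzf example-1.0.tar.gz   # unpack the source release
  cd example-1.0
  ./configure                  # probe the system and generate Makefiles
  make                         # compile
  sudo make install            # copy the built products into place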

Some meta-links: SICPers Podcast on Apple Podcasts, Direct link to RSS feed.


Episode 10: Build systems

This episode is all about build systems! Full show notes.


Where We Ditched Chipzilla

WWDC2020 was the first WWDC I’ve been to in, what, five years? Whenever I last went, it was in San Francisco. There’s no way I could’ve got my employer to expense it this year had I needed to go to San Jose, nor would I have personally been able to cover the costs of physically going. So I wouldn’t even have entered the ticket lottery.

Lots of people are saying that it’s “not the same” as physically being there, and that’s true. It’s much more accessible than physically being there.

For the last couple of WWDCs at least, Apple have done a great job of putting the presentations on the developer site with very short lag. But remotely attending has still felt like being the remote worker on an office-based team: you know you’re missing most of the conversations and decisions.

This time, everything is remote-first: conversations happen on social media, or in the watch party sites, or wherever your community is. The bundling of sessions released once per day means there’s less of a time zone penalty to being in the UK, NZ, or India than in California or Washington state. Any of us who participated is as much a WWDC attendee as those within a few blocks of the McEnery or Moscone convention centres.


SICPers podcast episode 9

In this episode I talk about Design by Contract. Episode RSS feed – also available in Apple and Google Podcasts.


Episode 9: Design by Contract

I talk about my experience with design by contract and my two implementations, in ObjC/Swift and Java. Full show notes.


It protects. It also promotes and prevents.

I sometimes get asked to review, or “comment on”, the architecture for an app. Often the app already exists, and the architecture documentation consists of nothing more than the source code and the folder structure. Sometimes the app doesn’t exist, and the architecture is a collection of hopes and dreams expressed on a whiteboard. Very, very rarely, both exist.

To effectively review an architecture and make recommendations for improving it, we need much more information than that. We need to know what we’re aiming for, so that we can tell whether the architecture is going to support or hinder those goals.

We start by asking about the functional requirements of the application. Who is using this, what are they using it for, how do they do that? Does the architecture make it easy for the programmers to implement those things, for the testers to validate those things, for whoever deploys and maintains the software to provide those things?

If you see an “architecture” that promotes the choice of technical implementation pattern over the functionality of the system, it’s getting in the way. I don’t need to know that you have three folders of Models, Views and Controllers, or of Actions, Components, and Containers. I need to know that you let people book children’s weather forecasters for wild atmospheric physics parties.

We can say the same about non-functional requirements. When I ask what the architecture is supposed to be for, a frequent response is “we need it to scale”. How? Do you want to scale the development team? By doing more things in parallel, or by doing the same things faster, or by requiring more people to achieve the same results? Hold on, did you want to scale the team up or down?

Or did you want to scale the number of concurrent users? Have you tried… y’know, selling the product to people? Many startups in particular need to learn that a CRM is a better tool for scaling their web app than Kubernetes. But anyway, I digress. If you’ve got a plan for getting to a million users, and it’s a realistic plan, does your architecture allow you to do that? Does it show us how to keep that property as we make changes?

Those important things that you want your system to do: the architecture should protect and promote them. It should make it easy to do the right thing, and difficult to regress. It should prevent going off into the weeds, or doing work that counters those goals.

That means that the system’s architecture isn’t really about the technology, it’s about the goals. If you show me a list of npm packages in response to questions about your architecture, you’re not showing me your architecture. Yes, I could build your system using those technologies. But I could probably build anything else, too.


Episode 8: Message in a bottle

In this episode, I investigate how messaging works in Smalltalk-80 and other languages. I don’t talk about how OOP is realised in Lisp using generic functions, but do set further reading for those interested: The Art of the Metaobject Protocol.


Forearmed

In researching my piece for the upcoming de Programmatica Ipsum issue on cloud computing, I had thoughts about Apple, arm, and any upcoming transition that didn’t fit in the context of that article. So here’s a different post, about that. I’ve worked at both companies so don’t have a neutral point of view, but I’ve also been in bits of the companies far enough from their missions that I don’t have any insider insight into this transition.

So, let’s dismiss the Mac transition part of this thread straight away: it probably will happen, for the same reasons that the PowerPC->Intel transition happened (the things Apple needed from the parts – mostly lower power consumption for similar performance – weren’t the same things that the suppliers needed, and the business Apple brought wasn’t big enough to make the suppliers change their mind), and it probably will be easier, because Apple put the groundwork in during the Intel transition to make third-party devs aware of porting issues, and encouraged devs to use high-level frameworks and languages.

Whether you think the point is convergence (now your Catalyst apps are literally iPad IPAs that run on a Mac), or cost (Apple buy arm chipset licences, but have to buy whole chips from Intel, and don’t get the discount everybody else does for sticking the Intel Inside holographic sticker on the case), or just “betterer”, the arm CPUs can certainly provide. On the “betterer” argument, I don’t predict that will be a straightforward case of tomorrow’s arm Mac being faster than today’s Intel Mac. Partly because compilers: gcc certainly has better optimisations on Intel and I wouldn’t be surprised to find that llvm does too. Partly because workload, as iOS/watchOS/tvOS all keep the platform on guard rails that make the energy use/computing need expectations more predictable, and those guard rails are only slowly being added to macOS now.

On the other hand, it’s long been the case that computers have controller chips in them for interfacing with the hardware, and that those chips are often things that could be considered CPUs for systems in their own right. Your Mac certainly already has arm chips in it if you bought it recently: you know what’s running the OS for the Touch Bar? Or the T2 security chip? (Actually, if you have an off-brand PC with an Intel-compatible-but-not-Intel chip, that’s probably an arm core running the x86-64 instructions in microcode.) If you beef one of those up so that it runs the OS too, then take a whole bunch of other chips and circuits off the board, you both reduce the power consumption and put more space in for batteries. And Apple do love talking battery life when they sell you a computer.

OK, so that’s the Apple transition done. But now back to arm. They’re a great business, and they’ve only been expanding of late, but that expansion is currently coming at a cost. We don’t have up-to-date financial information on Arm Holdings themselves since they went private, but that year they lost ¥31bn (I think about $300M). Since then, their corporate parent SoftBank Group has been doing well, but massive losses from their Vision Fund have led to questions about their direction, and particularly about Masayoshi Son’s judgement and vision.

arm (that’s how they style it) have, mostly through their partner network, fingers in many computing pies. From the server and supercomputer chips from manufacturers like Marvell to smart lightbulbs powered by Nordic Semiconductor, arm have tentacles everywhere. But their current interest is squarely on the IoT side. When I worked in their HPC group in 2017, Simon Segars described their traditional chip IP business as the “legacy engine” that would fund the “disruptive unit” he was really interested in, the new Internet of Things Business Unit. Now arm’s mission is to “enable a trillion connected devices”, and you can bet there isn’t a world market for a trillion Macs or Mac-like computers.

If some random software engineer on the internet can work this out, you can bet Apple’s exec team have worked it out, too. It seems apparent that (assuming it happens) Apple are transitioning the Mac platform to arm at the start of arm’s (long, slow) exit from the traditional computing market, and chose to do it anyway. This suggests they have something else in mind (after all, Apple already designs its chips in-house, so why not have them design RISC-V or MIPS chips, or something entirely different?). A quick timetable of Mac CPU instruction sets:

  • m68k 1984 – 1996, 12 years (I exclude the Lisa)
  • ppc 1994 – 2006, 12 years
  • x86 and x86-64 2006 – 2021?, 15 years?
  • arm 2020? – 203x?, 1x years?

I think it likely that the Mac will wind down along with arm’s interest in traditional computing, and that arm will therefore be the last ever CPU/SoC architecture for computers called Macs. I also think that the plan for the next decade is that Apple is still at the centre of a services-based, privacy-focused consumer electronics experience, but that what they sell you is not a computer.


Continuous Integration for Amiga

Amiga-Smalltalk now has continuous integration. I don’t know whether it’s the first Amiga program ever to have CI, but it’s definitely the first I know of. Let me tell you about it.

I’ve long been using AROS, the AROS Research Operating System (formerly the A stood for Amiga), as a convenient place to (manually) test Amiga-Smalltalk. AROS will boot natively on PC but can also be “hosted” as a user-space process on Linux, Windows or macOS. So it’s handy to build a program like Amiga-Smalltalk in the AROS source tree, then launch AROS and check that my program works properly. Because AROS is source compatible with Amiga OS (and binary compatible too, on m68k), I can be confident that things work on real Amigas.

My original plan for Amiga-Smalltalk was to build a Docker image containing AROS, add my test program to S:User-startup (the script on Amiga that runs at the end of the OS boot sequence), then look to see how it fared. But when I discussed it on the aros-exec forums, AROS developer deadwood had a better idea.

He’s created AxRuntime, a library that lets Linux processes access the AROS APIs directly, without having to be hosted in AROS as a sub-OS. So that’s what I’m using. You can look at my GitHub workflow to see how it works, but in a nutshell:

  1. check out source.
  2. install libaxrt. I’ve checked the packages into ./vendor (along with a patched library, which fixes clean termination of the Amiga process) to avoid making network calls in my CI. The upstream source is deadwood’s repo.
  3. launch Xvfb. This lets the process run “headless” on the CI box.
  4. build and run ast_tests, my test runner. The Makefile shows how it’s compiled.
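
For illustration, here’s roughly what those steps amount to as a plain shell script. The package file names and the make target are my guesses (the real details are in the repository’s workflow and Makefile):

  # Hypothetical sketch of the CI job. Assumes the vendored AxRuntime
  # packages are Debian packages and that the Makefile has an ast_tests target.
  # Step 1, checking out the source, is handled by the CI system itself.
  set -e

  # 2. Install the vendored packages, so the job makes no network calls.
  sudo dpkg -i vendor/*.deb

  # 3. Start a virtual framebuffer so the Amiga process can run headless.
  Xvfb :99 &
  export DISPLAY=:99

  # 4. Build and run the test runner.
  make ast_tests
  ./ast_tests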

That’s it! All there is to running your Amiga binaries in CI.
