On Apple’s swings and misses

There’s a trope in the Apple-using technologist world that when an Apple innovation doesn’t immediately succeed, they abandon it. It’s not entirely true; let’s see what actually happens.

The quote in the above-linked item that supports the claim: “Apple has a tendency to either hit home runs out of the box (iPod, iPhone, AirPods) or come out with a dud and just sweep it under the rug, like iMessage apps and stickers.” iMessage apps and stickers are features added to iMessage. These are incremental additions to an existing technology. Granted, neither of them has revolutionised the way that everybody uses iMessage, and neither has received much (or any) further user-facing development, but both are attempts to improve a product that Apple actually ships and has not swept under the rug.

We can make a similar argument about the TouchBar. The TouchBar is the touchscreen strip on some models of MacBook Pro that replaces the function key row on the keyboard with an adaptive UI. It appeared, it…stayed around for a bit, and now it seems to have disappeared. Perhaps importantly, it was never replicated on Apple’s other keyboards, like the one that comes with the iMac or the one you can buy separately. We could say that the TouchBar was a dud that got swept under the rug, or we could say that it was an incremental change to the MacBook Pro and that Apple have since tried other changes to this long-running product, like the move to the M1 processors.

There are two other categories of non-home-run developments to take into account. The first is the duds that do get incremental development. iTV/Apple TV was such a bad business for the first several years of its history that execs would refer to it as a hobby, right up until it made them a billion dollars and was no longer a hobby.

Mac OS X’s first release was a lightly sparkling OpenStep, incompatible with any Mac software (it came with a virtual machine to run the classic Mac OS) and incompatible with most Unix software too. It was sold as a server-only product, which, given the long wait involved in doing something as simple as opening the text editor (a Java application), was a sensible move. Yet here we are, 23 years later, and macOS/iOS/iPadOS/tvOS/watchOS/bridgeOS is the same technology, incrementally improved.

Then the next category is things that go away, get rethought, and come back. The thing we see might look like a dud, but it’s actually an idea that Apple stick with. Again, two examples: remember Dashboard widgets in Tiger? There was an overlay view that let you organise a screen of little JavaScript widgets for world time, stocks, weather, and other things, including widgets supplied by third-party developers. It was there, it looked the same for a while (as long as you don’t mention the Dashcode tool introduced along the way), then it wasn’t there. But later, Control Center came along, and we got a new version of the same idea.

In between that fizzy NeXT version of Mac OS X Server and the first public release of Mac OS X 10.0 (which was also a dud, with many users sticking with Mac OS 9 and Apple even giving away 10.1 for free to ensure as many people as possible got the fixes), the Aqua interface was born. Significantly more “lickable” than its modern look, it was nonetheless recognisable to a Monterey user, with its familiar traffic-light window controls: red for close, yellow for minimise, green for zoom, and purple for…wait, purple? Yes, purple. It activated single-window mode, in which only the active window was shown and all others were minimised to the Dock. Switch windows, and the previous one disappeared. This mode wasn’t in the public release, but now we have Mission Control and fullscreen mode, so did it truly go away?

Posted in AAPL, UI | 2 Comments

Licenses aren’t sufficient

Another recent issue in the world of “centralised open source dependency repositories were a bad idea” was initiated by the central contradiction of free software. People want both to give everything away without limitation on who uses it or how, and to have “Big Program” pay for the work to be done.

As long as the license is the only tool used by free software authors, there is no way that this is going to be resolved in favour of the Robin Hood model. There’s nothing of additional value on offer to Big Program in paying for the software: they want the right to use it for their nefarious purposes, and for free they can already get the right to use it for any purpose. Why would they pay more?

They wouldn’t. And no amount of whataboutism is going to change that. What about if nobody can afford to work on free software any more, and Big Program loses access to updates? Doesn’t happen. The current set of incentives – part financial, mostly reputational, and part itch-scratching – observably causes an increasing amount of free software to be created over time.

That gap needs to be resolved in other ways. There are things that companies will pay for even when they have the freedom to use the software for any purpose, at no charge. They will pay for support, bug bounties, indemnification, training, documentation, consultancy, integration, operations…

If the free software community hadn’t completely withdrawn from the patents discussion, companies might pay to license a patent whether or not they take the (free) copyright licence. But that has yet to happen.

Plenty of organisations understand this: Red Hat became a forty-odd-billion dollar company giving away the software for free and selling other things. Canonical, Cygnus, ActiveState, O’Reilly, Mozilla, Musescore, Nextcloud…all of them make software, none of them is a software company. All make money in the free software world, none is a free software company.

Please continue giving us all the freedom to use the software for any purpose. Also the other freedoms, to study, improve, and share the software. But remember that freedom is not for sale.

Posted in FLOSS | Leave a comment

Episode 46: popularity

This episode is all about the TIOBE Index of programming language popularity: when to use it, what its limitations are, why certain things are or aren’t popular, and why the hell Excel isn’t on the list.


On the glorification of ignorance

When I wrote “I have some small idea of what I’m doing”, it was on the basis that DHH was engaging in some exaggeration. Surely software engineers, whose job depends on what they know and what they can learn, would not really revel in their lack of knowledge?

Then it happened. A technology forum I’m a member of had a discussion in which participants expressed that they did not understand the topic, that they did not intend to understand the topic, and that they still wished to dunk on the people in a video about said topic.

The topic, by the way, is cryptocurrency. It happens that I don’t have a lot of time for cryptocurrency, and I think most other blockchain applications are not particularly beneficial, but this comes after taking a course on blockchain, reading a textbook, talking to some startups about their products, and generally engaging with the topic. I haven’t flipped the bozo bit, but I have decided that I do not currently see any use for that technology and see a lot of downside to its application. If you’d asked me before all of that study, and people did, I would have told you that I didn’t know anything about the topic.

I feel a bit bad for, and about, that technology forum. It contains people I respect, and I’ve had valuable conversations there, so I don’t want to disengage completely. I would then be flipping bozo bits at scale, which is exactly the problem we have with many current attempts to converse. I also don’t want it to degenerate into a bubble for the one approved mindset, and I particularly don’t want the software engineering mindset to be one where making your mind up before learning about a topic, and valorising that decision to engage before learning, is the preferred form of contribution.

Suggestions welcome.

Posted in whatevs | Leave a comment

Episode 45: Information Security

This episode is all about the various reasons information security isn’t taken more seriously by developers.


Explicitly considering subtyping in inheritance

By far the post on this blog that gains the most long-term interest and attention is “why inheritance never made any sense”. In that post, I explain that there are three different ways to think about inheritance—ontological inheritance (this sort of thing is a special type of that sort of thing), subtype inheritance (this program that expects these things will work with these things too), and implementation inheritance (the code in this thing is also in that thing)—and that trying to use all three at the same time is a recipe for disaster.

People interpret the message behind this as they will: that you should only ever compose objects, that you should only use pure functions, whatever. The message I tried to send was that you need to avoid using all of these different forms of inheritance at once, but OK. In this paper from the very early days of industry OOP, the late Bill Cook and colleagues reconcile implementation inheritance and subtype inheritance by treating them differently (as I argued for).
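To make the distinction concrete, here is a minimal sketch in Swift (my own hypothetical types, Shape, Circle, RingBuffer and Stack, not taken from the post above or from Cook’s paper): protocol conformance expresses the subtype relation, while implementation reuse is kept separate from any subtyping claim.

```swift
// Hypothetical sketch: keeping subtype inheritance and implementation reuse apart.

// Subtype inheritance: code written against Shape works with any conformer.
protocol Shape {
    var area: Double { get }
}

// Ontologically a circle is a kind of shape; here it is also a well-behaved subtype.
struct Circle: Shape {
    let radius: Double
    var area: Double { Double.pi * radius * radius }
}

func describe(_ shape: Shape) {
    print("area:", shape.area)   // works for Circle, or any conformer added later
}

// Implementation reuse without a subtype claim: Stack borrows RingBuffer's code by
// composition, because handing a Stack to code that expects the full RingBuffer
// interface (removeFirst, for example) would break the stack's contract.
struct RingBuffer<Element> {
    private var storage: [Element] = []
    mutating func append(_ element: Element) { storage.append(element) }
    mutating func removeLast() -> Element? { storage.popLast() }
    mutating func removeFirst() -> Element? { storage.isEmpty ? nil : storage.removeFirst() }
}

struct Stack<Element> {
    private var buffer = RingBuffer<Element>()
    mutating func push(_ element: Element) { buffer.append(element) }
    mutating func pop() -> Element? { buffer.removeLast() }  // removeFirst is deliberately not exposed
}

describe(Circle(radius: 2))   // area: 12.566370614359172
var stack = Stack<Int>()
stack.push(1)
stack.push(2)
print(stack.pop() ?? -1)      // 2
```

Swift offers no way to inherit an implementation without also creating a subtype (subclassing always does both at once), which is why composition stands in for pure implementation inheritance in this sketch.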

Posted in OOP | Tagged | Leave a comment

I have some small idea of what I’m doing

I feel partly to blame for the current minor internet shitstorm.

But first, some scene-setting. There have long been associations between the programmer community and particular subcultures, some of which have become, if not monocultures, at least dominant cultures within the world of computering. When I entered the field in the early 2000s, it was the tail end of the cyberpunk subculture: electronic and rock music, long hair on men, short hair on women, often dyed, black band or logo t-shirts, combat trousers or jeans, Doctor Martens 1460 boots. Antisocial work hours, caffeine-fuelled weekend-long hacks, “all your base are belong to us” memes. Obtuse, but workhorse, C and Perl code. Maybe some Scheme if you were in the Free Software Foundation.

Then toward the end of the decade the hipster subculture rose to dominance. Mac laptops. Nice clothes, worn ironically. Especially the bow tie. Dishevelled hair. Fixed-gear bicycles. Turned-up trouser cuffs and no socks. Looking to be the technical cofounder, looking down on those who asked them to be the technical cofounder. Coffee, now daytime only, had to be incredibly fussy. The evening drink of choice was Pabst Blue Ribbon. If your software wasn’t in the tech stack of choice—a Ruby on Rails app, deployed to Heroku, edited in TextMate, hosted on GitHub—then were you crushing any code? Bro, do you even lift?

After a few years of this I noticed that one difference between these two cultures was their approach to knowledge, or more specifically to its lack. It was easy to nerdsnipe a cyberpunk: if they didn’t know something, they would go and find out. Usenet groups had multiple FAQ lists because many people would all try to find the answers to the questions, and wikis weren’t yet popular. In the hipster craze that followed, confidence in one’s own knowledge reigned supreme. You showed that you knew everything you knew, and you showed that everything you didn’t know wasn’t worth knowing.

This came to a head in my little Apple-centric niche of the computering field in 2015, when that whole community had chased monad tutorials and half-digested chapters on category theory into every corner of the conference and mailing list ecosystem. People gave talks not to share their knowledge, but to share that they were the people who knew the knowledge. Attendees turned up to product development conferences expecting to learn how a new programming language made it easier to develop software, and came away confused about endofunctors.

I should be clear here that not everybody in the field was like that, and there are plenty of people who can make difficult maths accessible. There are plenty of people who can make computering accessible without difficult maths. Those people were still present.

But still I determined that something we didn’t have enough of, something that had been present in the cyberpunk-esque culture that came before (for all its other faults), was a willingness to say “I have no idea what I’m doing”. Not in a “har har look at me get this wrong” way, but in a “this is interesting, let’s find out more about it” way. An “I’m not the right person to ask, let’s bring in an expert” way. A “to the library!” way.

So after a bit of writing about learning things I didn’t know, I took my (then) decade of experience and position of incredibly minor celebritydom in that niche little bit of computering, and submitted a talk called “I have no idea what I’m doing” to AltConf 2015. I think it may even have been a very last-minute submission, with another speaker pulling out sick. The talk was a collection of anecdotes about things I didn’t understand when the problem came my way, and how I dealt with that. Particularly, given that Swift was a year old at the time, I admitted I had less than a year of Swift experience and knew less about it than I did about Objective-C. I even used the dog picture. The talk was recorded, but unfortunately no longer seems to be available.

My hope in delivering this talk was partly that the people in the room would learn a little about problem solving, but also that they’d learn a lot about how an experienced person can say “here are the limits of my knowledge, I can’t help you with that problem. At least, I can’t yet, but it might be fun/interesting/remunerative to discover more about it.” How it’s OK to not know what you’re doing, if you have a plan or can make one.

In retrospect, I think that what happened was simply that 2015 was too many generations in the software industry after all of the great forcing functions that led to the way computering was by then done. The Agile folks had worked out that we don’t know what the customer will want at the end of the project, so we should optimise our work for not knowing, but they’d done that at the turn of the millennium. The dot bomb had exploded at the same time, so the Lean Startup folks had worked out that there’s no money in the things the customer doesn’t want and that you have to discard all of those very quickly.

Everything had shifted left, but it had done so at least a decade earlier. Now those things, Agile and Lean Startup, were the way you did computering, and you could be expert in them. There was certification. They were no longer “because that thing before wasn’t great”, they were “because this is how we do it”. There was another round of venture capitalists in town, and the money taps were starting to turn back on. There was no great need to find out that you were wrong, so it became a cultural taboo to admit it.

Anyway, if we believe DHH, I overshot. Apparently we went from “it’s professional to own up to the limits of your knowledge” to “it’s a badge of honour to not know programming as a programmer.” To be honest I find that the weak part of the post, mostly because I don’t recognise it and he doesn’t supply evidence. The rest—that we are beings capable of learning and growth and we should not revel in ignorance—is the same as what I was trying to say with my dog-meme talk in 2015.

But now the dominant non-monoculture is the “if you’re not with us you’re against us” variant. The “come on internet, you know what to do” quote tweet. Saying that you may have things to learn, and that you should not still be at the same level of copy-and-paste code years into your job, is now treated as the same as saying you must memorise all algorithms and programming language quirks to call yourself a real programmer. And how very DARE he say that, what does he know about programmers anyway?

Discussions of the DHH post seem to be predicated on the idea that it’s a personal attack on people who haven’t had DHH-level success, when, if it’s an attack at all, it’s attacking a straw-man identity, and in fact it’s worded more like this non-attack: you have more potential to live up to; find it in yourself to surpass your current limits. But scrape the surface (by showing that DHH didn’t say the things that are claimed to be “ruining it for everyone”), and it seems there’s a certain amount of hating the messenger, not the message, going on.

I’m not sure of the cause of this, but I suspect it may be that, having learned not to punch down, folks are looking up for targets. DHH is successful and has said things that people didn’t like in the past, so it’ll be OK not to like what he says this time. And the headline is something not to like, therefore the article must just expand on why I was correct not to read it.

I’ve certainly disagreed with DHH before. When he did the “TDD is dead” thing, I went into it from a position of disagreement. But I also knew that he has experience of being successful as a programmer, and will have reflections and knowledge that are beyond my understanding. So I listened to the discussions, and I learned what each of the people involved thought. It was an interesting and educational experience. I gained a bit more of an idea of what I’m doing.

Posted in advancement of the self, edjercashun | Tagged | Leave a comment

An Imagined History of Agile Software Development

Having benefited from the imagined history of Object-Oriented Programming, we can now turn our flawed retelling toolset to Agile. This history is as inaccurate and biased as it is illuminating.

In the beginning, there was no software. This was considered to be a crisis, because there had been computers since at least the 1940s but nearly half a century later nobody had written any software that could run on them. It had all been cancelled due to cost overruns, schedule overruns, poor quality, or all three.

This was embarrassing for Emperor Carhoare I, whose mighty imperial domain was known as Software Engineering. What was most embarrassing was that every time the crisis came to a head, a hologram of Dijkstra would appear and repeat some pre-recorded variation of “I told you so, nobody is clever enough to write any software”.

In frustration, software managers marched their developers off waterfalls to their doom. If you ever see a software product with a copyright date before, say, 2001, it is a work of fiction, placed by The Great Cunningham to test our faith. I know this because a very expensive consultant told me it is so.

Eventually the situation got so bad that some people decided to do something about it. They went on a skiing holiday, which was preferable to the original suggestion that they do something about this software problem. But eventually they ended up talking about software anyway, and it turned out that two of them actually had managed to get a software project going. Their approach was extreme: they wrote the software instead of producing interim reports about how little software had been written.

With nothing to show other than a few photographs of mountains, the rest of the skiing group wrote up a little document saying that maybe everybody else writing software might want to try just writing the software, instead of writing reports about how little software had been written. This was explosive. People just couldn’t believe that the solution to writing software was this easy. It must be a trick. They turned to the Dijkstra projection for guidance, but it never manifested again. Maybe he had failed to foresee this crisis? Maybe The Blessed Cunningham was a Mule who existed outside psychohistory?

There were two major problems with this “just fucking do it” approach to writing software. The first problem was that it left no breathing room for managers to commission, schedule, and ignore reports on how little software was getting written. Some people got together and wrote the Project Managers’ Declaration of Interdependence. This document uses a lot of words to say “we are totally cool and we get this whole Agile thing you’re talking about, and if you let us onto your projects to commission status reports and track deliverables we’ll definitely be able to pay our bills”.

The second problem, related to the first, was that there wasn’t anything to sell. How can you Agile up your software tools if the point is that tools aren’t as important as you thought? How can you sell books on how important this little nuance at the edge of Agile is, if the whole idea fits on a postcard?

Enter certification. We care more about the people than the process, and if you pay for our training and our CPDs you can prove to everybody that you’ve understood the process for not caring about process. Obviously this is training and certification for the aforementioned co-dependent—sorry, interdependent—project managers. There is certification for developers, but this stuff works best if they’re not actually organised, so you won’t find many developers with the certification. Way better to let them divide themselves over which language is best, or which text editor, or which whitespace character.

And…that’s it. We’re up to date. There has been no particularly fresh theoretical insight in two decades; we just have a lot of developers treated as fungible velocity sources on projects managed top-down to indirect metrics and reports. Seems like we could benefit from some agility.

Posted in agile | Tagged | 1 Comment

Episode 44: We Would Know What They Thought When They Did It

We would know what they thought when they did it: a call for a history of ideas in computing.


Second Brain

The idea of a second brain really hit home. Steven and I were doing some refactoring of code on our Amiga podcast last night, and every time we moved something between files we had to remember which header files needed including. Neither of us was familiar enough with the libraries to know this, so people in the chat had to keep helping us.

But these are things we’ve already done, so we ought to be able to recall that stuff, with or without support. And when I say with support, I mean with what that post is calling a “second brain”, i.e. with an external, indexed cache of my brain. I shouldn’t need to reconstruct from scratch information I’ve already come across, but neither should I need to remember it all.

There are three problems I can see on the path to second brain adoption. The first, and the one that immediately made itself felt, is the lack of a single interface for all my notes. When I read this article I thought that blogging about it would be a good way to crystallise my thoughts on the topic (it’s working!). So I saved the URL to Pinboard, I wrote a task in OmniFocus to write a blog post, and then, when I was ready, I fired up MarsEdit to write the post that would end up on my WordPress blog.

Remembering that all these bits are in all these systems is itself a brain overload task. And there’s more: I have information in Zotero about academic papers I’ve read, I have two paper notebooks on the go (one for computing projects and one for political projects), I have a Freewrite and a Remarkable, which each have their own sync services, I have abandoned collections of notes in Evernote and Zim…

I have a history of productivity porn addiction that doesn’t translate into productivity. I get excited by new tools, adopt them a bit, but don’t internalise the practice. So then I have another disjoint collection of notes, maybe in a bullet journal, a Filofax, Livescribe notes, Apple Notes, wherever…making the second brain problem even harder to manage, because now there are more places.

So step one is to centralise these things. That’s not a great task to try to do in a Big Bang, so I’ll do it piecemeal. Evernote is the closest thing I already use to the second brain concept, so today I’ve stopped using my paper notebooks, writing those notes in Evernote instead. I also moved this draft to Evernote and worked on it there.

The second problem is implicit in that last paragraph: migration. Do I sit and scan all my notebooks, with my shocking and OCR-resistant handwriting, into Evernote? Do I paste all my summaries of research articles out of Zotero and into Evernote? No, doing so will take a long time and be very boring. What I’ll do instead is to move toward integration from this moment on. If I need something and I think I already have it, I’ll move it into Evernote. If I don’t have it, I’ll make it in Evernote. It will take a while to reap the benefit, but eventually I’ll have a single place to search when I want to look for things I already know.

And that’s the third of my three problems: being diligent about searching the second brain. You have to change your approach to solving knowledge problems to “do I already know this?” The usual approach, for me at least, is “what do I do to know this?” Now, I’m good at that, with lots of experience at finding, appraising, and synthesising information, so doing it from scratch every time is mostly a waste of time rather than a fool’s errand. But it’s time I don’t need to waste.

I think that the fact I haven’t internalised these three aspects of the second brain is down to the generation of computing in which I really invested in computers. Most of the computers I learned to computer on could only do one thing at a time, practically if not absolutely. They didn’t have much storage, and that storage was slow. So that meant having different tools for different purposes. You would switch from the place where you recorded dance moves to the place where you captured information on Intuition data types, and rely on first brain for indexing. You wouldn’t even have all of it in the computer: I was 23 when I got my first digital camera, and 25 before I had an MP3 player. I did my whole undergraduate degree using paper notes and books from libraries and book stores. First brain needed to track where any information was physically, in addition to where it was logically.

What I’m saying is I’m a dinosaur.

Posted in advancement of the self, tool-support, whatevs | Leave a comment