Mature Optimization

This comment on why NetNewsWire is fast brings up one of the famous tropes of computer science:

The line between [performance considerations pervading software design] and premature optimization isn’t clearly defined.

If only someone had written a whole paper about premature optimization, we’d have a bit more information. …wait, they did! The idea that premature optimization is the root of all evil comes from Donald Knuth’s Structured Programming with go to Statements. In The Errors of TeX, Knuth attributed the phrase to C.A.R. Hoare, though Hoare denied having coined it.

Anyway, the pithy phrase “premature optimization is the root of all evil”, which has been interpreted as “optimization before the thing is already running too slow is to be avoided”, actually appears in this context:

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, [they] will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgements about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

Indeed this whole subsection on efficiency opens with Knuth explaining that he does put a lot of effort into optimizing the critical parts of his code.

I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure […] so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn’t take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged. Tools are being developed to make this critical-loop identification job easy (see for example [Dan Ingalls, The execution time profile as a programming tool] and [E. H. Satterthwaite, Debugging tools for high level languages]).
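As an illustration (my sketch, not Knuth’s own example) of the kind of operation elimination he describes, here’s a C inner loop before and after hoisting an invariant multiplication out of it:

```c
#include <stddef.h>

/* Naive version: recomputes r * cols on every inner-loop iteration. */
double sum_naive(const double *m, size_t rows, size_t cols) {
    double total = 0.0;
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            total += m[r * cols + c];
    return total;
}

/* Hoisted version: the row base pointer is computed once per row,
   eliminating the multiply from the inner loop entirely. */
double sum_hoisted(const double *m, size_t rows, size_t cols) {
    double total = 0.0;
    for (size_t r = 0; r < rows; r++) {
        const double *row = m + r * cols;
        for (size_t c = 0; c < cols; c++)
            total += row[c];
    }
    return total;
}
```

Modern optimizing compilers will often do this transformation themselves, which is one more reason to measure before rewriting by hand.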

So yes, optimize your code, but optimize the bits that benefit from optimization. NetNewsWire is a Mac application, and Apple’s own documentation on improving your app’s performance describes an iterative approach: find underperforming characteristics (note: not “what is next to optimize”, but “what are users encountering that needs improvement”), make changes, and verify that the changes led to an improvement:

Plan and implement performance improvements by approaching the problem scientifically:

  1. Gather information about the problems your users are seeing.
  2. Measure your app’s behavior to find the causes of the problems.
  3. Plan one change to improve the situation.
  4. Implement the change.
  5. Observe whether the app’s performance improves.
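Steps 2 and 5 both come down to measurement rather than guesswork. A minimal sketch in C of timing a candidate hot spot with the standard clock() function (busy_work is a hypothetical stand-in for the code under investigation):

```c
#include <time.h>

/* Hypothetical stand-in for the code identified as a problem in step 2. */
static long busy_work(long n) {
    long acc = 0;
    for (long i = 0; i < n; i++)
        acc += i % 7;
    return acc;
}

/* Returns the CPU time, in seconds, spent running busy_work(n).
   Run this before the change (step 2) and again after it (step 5)
   to confirm the change actually helped. */
static double measure_seconds(long n) {
    clock_t start = clock();
    volatile long result = busy_work(n); /* volatile: keep the call live */
    (void)result;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```

A real profiler (Instruments on the Mac, in NetNewsWire’s case) gives much better data than hand-rolled timers, but the discipline is the same: one change at a time, measured before and after.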

I doubt that this post will change the “any optimization is the root of all evil” narrative, because there isn’t a similarly trite epithet for the “optimize the parts that need it” school of thought, but at least I’ve tried.

Posted in performance | Tagged | 1 Comment

Episode 7: A touch of class

I’m building support for classes in the Amiga-Smalltalk Virtual Machine today, so that’s what is on my mind.

If you missed episode 6, it’s over on YouTube.

The podcast is now in the iTunes Store!


Video podcast: Hisoft C for the ZX Spectrum

Episode 6 of the SICPers podcast is over on YouTube. I introduce a C compiler for the Sinclair ZX Spectrum. For American readers, that’s the Timex Sinclair TS2068.


SICPers podcast episode 5

It lives! Kinda. Amiga-Smalltalk now runs on an Amiga. Along the way I review the K&R book as a tutorial for C programming, mentioning my previous comparison to the Brad Cox and Bjarne Stroustrup books. I also find out how little C I know: it turns out I’ve been using GNU C for the last 20 years.

Thanks to Alan Francis for his part in my downfall.


It lives! (Kinda)

Amiga-Smalltalk now works on an Amiga! I describe the journey to a working port. Full show notes


SICPers podcast episode 4

We’re back to Amiga-Smalltalk today, as the moment when it runs on a real Amiga inches closer. Listen here.

I think I’ve isolated all extraneous sound except the nearby motorway, which I can’t do much about. I hope the experience is better!


Episode 4: do you C what I C?

We’re getting closer to running Amiga-Smalltalk actually on an Amiga. Full show notes


SICPers Podcast Episode 3

The latest episode of SICPers, in which I muse on what programming 1980s microcomputers taught me about reading code, is now live. Here’s the podcast RSS feed.

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration. Edsger Dijkstra, “How do we tell truths that might hurt?”

As always, your feedback and suggestions are most welcome.


Episode 3: Graham’s so basic

What did programming on a microcomputer with a 6809 CPU and 32k of RAM teach me about reading code? Full show notes


SICPers Podcast episode two

Episode 2 is live!

The only link I promised this week is to the BCHS web application stack. Short for BSD, C, httpd, sqlite, it’s a minimalist approach to making web applications.

As ever, your feedback is welcome, here or wherever you find me.
