Literate Programming with LibreOffice

This post comes in the form of an OpenDocument Format document containing a program that can extract programs from ODF documents, including the program contained in this document.

Posted in code-level, software-engineering, tool-support | 1 Comment

Prototypical object-oriented programming

Some people think that the notion of classes is intrinsic to object-oriented programming. Bertrand Meyer even wrote a textbook about OOP called A Touch of Class. But back in the 1980s, Alan Borning and others were trying to teach object-oriented programming using the Smalltalk system, ostensibly designed to make simulation in computer programs accessible to children. What they found was that classes are hard.

You’re not allowed to think about how your thing works before you’ve gone a level of abstraction up and told the computer all about the essence of thing-ness, what it is that’s common to all things and sets them apart from other ideas. And while you’re at it, you could well need to think about the metaclass, the essence of essence-of-thing-ness.

So Borning asked the reasonable question: why not just get rid of classes? Rather than say what all things are like, let me describe the thing I want to think about.

But what happens when I need a different thing? Two options present themselves: both represent the idea that this thing is like that thing, except for some specific properties. One option is that I just create a clone of the first object. I now have two identical things, I make the changes that distinguish the second from the first, and now I can use my two, distinct things.

The disadvantage of that is that there’s no link between those two objects, so I have nowhere to put any shared behaviour. Imagine that I’m writing the HR software for a Silicon Valley startup. Initially there’s just one employee, the founder, and rather than think about the concept of Employee-ness and create the class of all employees, I just represent the founder as an object and get on with writing the application. Now the company hires a second employee, and being a Silicon Valley startup they hire someone who’s almost identical to the founder with just a couple of differences. Rather than duplicating the founder and changing the relevant properties, I create a new object that just contains the specific attributes that make this employee different, and link it to the founder object by saying that the founder is the prototype of the other employee.

Any message received by employee #2, if not understood, is delegated to the original employee, the founder. Later, I add a new feature to the Silicon Valley HR application: an employee can issue a statement apologising if anybody got offended. By putting this feature on the first employee, the other employee(s) also get that behaviour.
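
Here’s a minimal sketch of that delegation mechanism in C (every name in it is invented for the example; a prototype-based language does all of this for you): an object is just a table of named slots plus a pointer to its prototype, and any lookup that misses locally is delegated up the chain.

    #include <stdio.h>
    #include <string.h>

    #define MAX_SLOTS 8

    /* An object is nothing but named slots plus a link to its prototype. */
    typedef struct Object {
        struct Object *prototype;        /* where misses get delegated */
        const char *names[MAX_SLOTS];
        const char *values[MAX_SLOTS];
        int count;
    } Object;

    static void set_slot(Object *o, const char *name, const char *value) {
        for (int i = 0; i < o->count; i++) {
            if (strcmp(o->names[i], name) == 0) {
                o->values[i] = value;
                return;
            }
        }
        if (o->count < MAX_SLOTS) {
            o->names[o->count] = name;
            o->values[o->count] = value;
            o->count++;
        }
    }

    /* Look up locally; a message not understood is delegated to the
       prototype, just as employee #2 delegates to the founder. */
    static const char *lookup(const Object *o, const char *name) {
        for (; o != NULL; o = o->prototype) {
            for (int i = 0; i < o->count; i++) {
                if (strcmp(o->names[i], name) == 0) {
                    return o->values[i];
                }
            }
        }
        return "(message not understood)";
    }

    int main(void) {
        Object founder = { .prototype = NULL };
        set_slot(&founder, "name", "Alice");
        set_slot(&founder, "office", "Palo Alto");

        /* The second employee stores only the differences. */
        Object employee2 = { .prototype = &founder };
        set_slot(&employee2, "name", "Bob");

        printf("%s\n", lookup(&employee2, "name"));   /* Bob: own slot */
        printf("%s\n", lookup(&employee2, "office")); /* delegated */

        /* Shared behaviour added to the prototype appears everywhere. */
        set_slot(&founder, "apology", "Sorry if anybody got offended.");
        printf("%s\n", lookup(&employee2, "apology"));
        return 0;
    }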

This simplified approach to behavioural inheritance in object-oriented programming has been implemented a few times, notably in Self and in JavaScript. It’s worth exploring, if you haven’t already.

Posted in javascript, OOP | 1 Comment

In defence of assertions

The year is 2017 and people are still recommending that assertions be processed out of release builds.

  1. many assertions are short tests (whether or not that’s a good thing): this variable now has a value, this number is now greater than zero. These won’t cost a lot in production (see the sketch after this list). Or at least, let me phrase this another way: many assertions are too cheap to affect performance metrics in many apps. Or, let me phrase that another way: most production software probably doesn’t have good enough performance monitoring to see a result, or constrained enough performance goals to care about the result.

  2. The program counter has absolutely no business executing the instruction that follows a failed assertion, because the programmer wrote the subsequent instructions with the assumption that this would never happen. Yes, your program will terminate, leading to a 500 error/unfortunate stop dialog/guru meditation screen/other thing, but the alternative is to run…something that apparently shouldn’t ever occur. Far better to stop at the point of problem detection, than to try to re-detect it based on a surprising and unsupportive problem report later.

  3. assertions are things that programmers believe to always hold, and it’s sensible to take issue with the words always and believe. There’s an argument that goes:

    1. I have never seen this situation happen in development or staging.
    2. I got this job by reversing a linked list on a whiteboard.
    3. Therefore, this situation cannot happen in production.

    but unfortunately, there’s a flaw between the axioms and the conclusion. For example, I have seen the argument “items are added to this list as they are received, therefore these items are in chronological order” multiple times, and have seen items in another order just as often. Assertions that never fire on programmer input give false assurance.
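
Here is that kind of belief written down as a checked invariant, in a minimal C sketch (newest_timestamp and its inputs are invented for the example): cheap assertions documenting what the programmer believes always holds, failing at the point of detection rather than later.

    #include <assert.h>
    #include <stddef.h>

    /* "Items are added as they are received, therefore they are in
       chronological order" -- the kind of belief worth checking. */
    static long newest_timestamp(const long *timestamps, size_t count) {
        assert(timestamps != NULL);  /* cheap: one pointer comparison */
        assert(count > 0);           /* cheap: one integer comparison */
        for (size_t i = 1; i < count; i++) {
            /* If this fires, stop here, at the point of detection,
               rather than return a wrong answer to puzzle over later. */
            assert(timestamps[i - 1] <= timestamps[i]);
        }
        return timestamps[count - 1];
    }

    int main(void) {
        long received[] = { 1500000000L, 1500000060L, 1500000120L };
        return newest_timestamp(received, 3) == 1500000120L ? 0 : 1;
    }

Build that with cc -DNDEBUG and every one of those checks is processed out, which is exactly the practice this post takes issue with.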

Posted in code-level | 1 Comment

In defence of large teams

Seen on the twitters:

1) Bad reasons why tech startups have incredibly large mobile teams even though from an engineering perspective they don’t need it.
This is the No True Scotsman fallacy, as no true software department needs more than, say, 20 people.

I’m not going to get into the details of what you do with hundreds of mobile engineers. Suffice it to say that the larger-team apps I’ve worked on have been very feature rich, for better or worse. And that’s not just in terms of things you can do, but in terms of how well you can do them. When you in-source the small details that are important to your experience, they become as much work to solve as the overall picture.

Make a list of the companies that you think have “too big” a mobile software development team. Now review that list: all of those companies are pretty big and successful, aren’t they? Maybe big enough to hire a few hundred developers to work on how their customers access their products or services? No true software department needs to be that successful.

And that’s what I think of as the underlying problem with the “your team’s too big, you’re doing it wrong” fallacy: it’s part of the ongoing narrative to devalue all software. It says that your application can’t possibly be worth enough to spend all that developer time on. After all, mine isn’t, and I’m a true software developer.

Posted in Uncategorized | 1 Comment

Your build needs to be better

I’ve said it before: build systems are a huge annoyance. If your build is anything other than seemingly instantaneous, it’s costing you serious money.

Your developers are probably off reading HN, or writing blog posts about how slow builds cost them, while the build is going. When they finish doing that, which may be some time after the build completes, they’ll have forgotten some of what they were doing and need to spend some time getting back up to speed.

Your developers are probably suspicious of any build failure, thinking that “the build is flaky” rather than “I made a mistake”. They’ll press the button again and go back to HN. When the same error occurs twice, they might look into it.

Your developers probably know that the build is slow, but not which bit of the build is slow. And they don’t have time to investigate that, where it takes so long to get any work done anyway. So everyone will agree that “there is a problem”, but nothing will get done. Or maybe cargo-cult things will get done, things that speed up “builds” but are not the problem with your build.

The Joel test asks whether you can make a build in one step. Insufficient. If you notice when you’re making a build, you’re slowing your developers down.

…which is not always the worst thing, of course. Sometimes a lengthy translation step from some source language to some optimised form of a machine language program yields better results for your customers, because they get to use a faster program and don’t need to care about the time taken to prepare that program. But let’s be clear: that’s part of the release, and your developers don’t always need to be working from the released product (in fact, they’re usually not). Releases should be asynchronous, and the latency between having something ready to be released and having released it can be fairly high, compared with the latency between having created some source and being able to investigate its utility.

Nonetheless, that should all go off in the background. So really, builds and releases should both be non-events to the developers.

Posted in architecture of sorts, code-level, tool-support | Leave a comment

I just want to point out that even the best of us aren’t doing what we expect the makers of acne creams to do.

What we actually know about software development, and why we believe it’s true by Greg Wilson.

Posted on by Graham | Leave a comment

Why your app is not massively parallel software

That trash can Mac Pro that hasn’t been updated in years? It’s too hard to write software for.

Now, let’s be clear, there are any number of abstractions that have been created to help programmers parallelise their thing, from the process onward. If you’ve got a loop and can add the words #pragma omp parallel for to your code, then your loop can be run in parallel over as many threads as you like. It’s not hard.

Making sure that the loop body can run concurrently with itself is hard, but there are some rules to follow that either make it easy or tell you when to avoid trying. But you’re still only using the CPU, and there’s that whole dedicated GPU to look after as well.
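
Here is the pragma from above in a minimal, compilable C sketch (the arrays and the arithmetic are invented for the example); each iteration writes only its own element, so this loop body can safely run concurrently with itself:

    #include <stdio.h>

    /* Build with, e.g., cc -fopenmp scale.c; without OpenMP support the
       pragma is ignored and the loop simply runs serially. */
    int main(void) {
        enum { N = 1000000 };
        static double a[N], b[N];

        for (int i = 0; i < N; i++) {
            b[i] = (double)i;
        }

        /* No iteration reads or writes another iteration's elements. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            a[i] = 2.0 * b[i] + 1.0;
        }

        printf("%f\n", a[N - 1]);
        return 0;
    }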

Even with interfaces like OpenCL, it’s difficult to get this business right. If you’ve been thinking about your problem as objects, then each object has its own little part of the data – but now you need to get that information into a layout that’ll be efficient for doing the GPU work, then actually do the copy, then copy the results back from the GPU memory…is doing all of that worth it?
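
Before answering that, here is roughly what the repacking step alone looks like, as a C sketch with a made-up Particle type (the device allocations and the copies in each direction would come on top of this):

    #include <stddef.h>

    /* Thinking in objects gives you an array of structures... */
    typedef struct {
        float x, y, z;
    } Particle;

    /* ...but the device would much rather see a structure of arrays. */
    typedef struct {
        float *x, *y, *z;
    } ParticleBuffers;

    static void repack(const Particle *in, ParticleBuffers *out, size_t n) {
        for (size_t i = 0; i < n; i++) {
            out->x[i] = in[i].x;
            out->y[i] = in[i].y;
            out->z[i] = in[i].z;
        }
    }

    int main(void) {
        enum { N = 2 };
        Particle objects[N] = { { 1, 2, 3 }, { 4, 5, 6 } };
        float xs[N], ys[N], zs[N];
        ParticleBuffers device_side = { xs, ys, zs };
        repack(objects, &device_side, N);
        return device_side.x[1] == 4.0f ? 0 : 1;
    }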

For almost all applications, the answer is no. For almost no applications, the answer is occasionally. For a tiny number of applications, the answer is most of the time, but if you’re writing one of those then you’re a scientist or a data “scientist” and probably not going to get much value out of a deskside workstation anyway.

What’s needed for that middle tier of applications is the tools – by which I mostly mean the libraries – to deal with this problem when it makes sense. You don’t need visualisations that say “hey, if you learned a different programming language and technique and then applied it to this little inner loop you could get a little speed boost for the couple of seconds that one percent of users will use this feature every week” – you need implementations that notice that and get on with it anyway.

The Mac Pro is, in that sense, the exact opposite of the Macintosh. Back in the 1980s, the Smalltalk software was ready well before there was any hardware that could run it well, and the Macintosh was a thing that took this environment that could be seen to have value, and made it kind of work on real hardware. Conversely, the Mac Pro was ready well before there was any software that could make use of it, and that’s a harder sell. The fact that, four years later, this is still true, makes it evident that it’s either difficult or not worth the effort to try to push the kind of tools and techniques necessary to efficiently use Mac Pro-style hardware into “the developer ecosystem”. Yes, there are niches that make very good use of them, but everybody else doesn’t and probably can’t.

Posted in whatevs | Leave a comment

Working Effectively with Legacy Code

I gave a talk to my team at ARM today on Working Effectively with Legacy Code by Michael Feathers. Here are some notes I made in preparation, which are somewhat related to the talk I gave.

This may be the most important book a software developer can
read. Why? Because if you don’t, then you’re part of the problem.

It’s obviously a lot easier and a lot more enjoyable to work on
greenfield projects all the time. You get to choose this week’s
favourite technologies and tools, put things together in the ways that
suit you now, and make progress because, well anything is progress
when there’s nothing there already. But throwing away an existing
system and starting from scratch makes it easy to throw away the
lessons learned in developing that system. It may be ugly, and patched
up all over the place, but that’s because each of those patches was
needed. They each represent something we learned about the product
after we thought we were done.

The new system is much more likely to look good from the developer’s
perspective, but what about the users’? Do they want to pay again
for development of a new system when they already have one that mostly
works? Do they want to learn again how to use the software? We have
this strange introspective notion that professionalism in software
development means things that make code look good to other coders:
Clean Code, “well-crafted” code. But we should also have some
responsibility to those people who depend on us and who pay our way,
and that might mean taking the decision to fix the mostly-working
thing.

A digression: Lehman’s Laws

Manny Lehman identified three different categories of software system:
those that are exactly specified, those that implement
well-understood procedures, and those that are influenced by the
environment in which they run. Most software (including ours) comes
into that last category, and as the environment changes so must the
software, even if there were no (known) problems with it at an earlier
point in its evolution.

He expressed laws governing the evolution of software systems, which
describe how the requirements for new development are in conflict
with the forces that slow down maintenance of existing systems. I’ll
not reproduce the full list here, but for example on the one hand the
functionality of the system must grow over time to provide user
satisfaction, while at the same time the complexity will increase and
perceived quality will decline unless it is actively maintained.

Legacy Code

Michael Feathers’s definition of legacy code is code without tests.
I’m going to be a bit picky here: rather than saying that legacy code
is code with no tests, I’m going to say that it’s code with
insufficient tests. If I make a change, can I be confident that I’ll
discover the ramifications of that change?

If not, then it’ll slow me down. I even sometimes discard changes
entirely, because I decide the cost of working out whether my change
has broken anything outweighs the interest I have in seeing the change
make it into the codebase.

Feathers refers to the tests as a “software vice”. They clamp the
software into place, so that you can have more control when you’re
working on it. Tests aren’t the only tools that do this: assertions
(and particularly Design by Contract) also help pin down the software.

How do I test untested code?

The apparent way forward then when dealing with legacy code is to
understand its behaviour and encapsulate that in a collection of unit
tests. Unfortunately, it’s likely to be difficult to write unit tests
for legacy code, because it’s all tightly coupled, has weird and
unexpected dependencies, and is hard to understand. So there’s a
catch-22: I need to make tests before I make changes, but I need to
make changes before I can make tests.

Seams

Almost the entire book is about resolving that dilemma, and contains a
collection of patterns and techniques to help you make low-risk
changes to make the code more testable, so you can introduce the tests
that will help you make the high-risk changes. His algorithm is:

  1. identify the “change points”, the things that need modifying to
    make the change you have to make.
  2. find the “test points”, the places around the change points where
    you need to add tests.
  3. break dependencies.
  4. write the tests.
  5. make the changes.

The overarching model for breaking dependencies is the “seam”. It’s a
place where you can change the behaviour of some code you want to
test, without having to change the code under test itself. Some examples:

  • you could introduce a constructor argument to inject an object
    rather than using a global variable
  • you could add a layer of indirection between a method and a
    framework class it uses, to replace that framework class with a
    test double
  • you could use the C preprocessor to redefine a function call to use
    a different function (see the sketch after this list)
  • you can break an uncohesive class into two classes that collaborate
    over an interface, to replace one of the classes in your tests
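
Here’s a minimal sketch of that preprocessor seam in C (the function
names are invented for the example, and in practice the redefinition
would live in a test header rather than in the same file):

    #include <stdio.h>

    /* The code under test depends on a real collaborator... */
    long fetch_latest_price(const char *symbol);

    #ifdef TESTING
    /* ...but a test build redirects every later call to a double. */
    static long fake_fetch_latest_price(const char *symbol) {
        (void)symbol;
        return 4200;
    }
    #define fetch_latest_price(symbol) fake_fetch_latest_price(symbol)
    #endif

    long portfolio_value(const char *symbol, long quantity) {
        /* This call site is the seam: its behaviour changes between
           builds, but its text never does. */
        return quantity * fetch_latest_price(symbol);
    }

    #ifdef TESTING
    int main(void) {
        printf("%ld\n", portfolio_value("ACME", 10)); /* 42000 */
        return 0;
    }
    #endif

Building with cc -DTESTING exercises portfolio_value against the test
double, with no real dependency anywhere in sight; the production
build never sees the fake.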

Understanding the code

The important point is that whatever you, or someone else, thinks the
behaviour of the code should be, your customers have paid for the
behaviour that’s actually there, and so that (modulo bugs) is the
thing you should preserve.

The book contains techniques to help you understand the existing code
so that you can get those tests written in the first place, and even
find the change points. Scratch refactoring is one technique: look
at the code, change it, move bits out that you think represent
cohesive functions, delete code that’s probably unused, make notes in
comments…then just discard all of those changes. This is like Fred
Brooks’s recommendation to “plan to throw one away”: you can take what
you learned from those notes and refactorings and go in again with a
more structured approach.

Sketching is another technique recommended in the book. You can draw
diagrams of how different modules or objects collaborate, and
particularly draw networks of what parts of the system will be
affected by changes in the part you’re looking at.

Posted in advancement of the self, books, code-level, learning, software-engineering, TDD | Leave a comment

Build systems are a huge annoyance

Take Smalltalk. Do I have an object in my image? Yes? Well I can use it. Does it need to do some compilation or something? I have no idea, it just runs my Smalltalk.

Take Python. Do I have the python code? Yes? Well I can use it. Does it need to do some compilation or something? I have no idea, it just runs my Python.

Take C.

Oh my God.

C is portable, and there are portable operating system interface specifications for the system behaviour accessible from C, but in practice you still need C sources that are specific to the platform you’re building for. So you have a tool like autoconf or cmake that tests how your sources need to be edited to actually work on this platform, and performs those edits. The outputs from them are then fed into a thing that takes C sources and constructs the software.
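
In miniature, that’s what those configuration tools do. Here’s a C sketch with invented names: the configure step probes the platform and records what it found as preprocessor definitions (stubbed here as a single #define), and the “portable” source then selects between per-platform code accordingly.

    #include <stdint.h>
    #include <stdlib.h>

    /* Stand-in for a generated config.h: a real configure step would
       probe the platform and write this definition itself. */
    #define HAVE_ARC4RANDOM 0

    /* The "portable" source edits itself around the probe results. */
    static uint32_t random_u32(void) {
    #if HAVE_ARC4RANDOM
        return arc4random();          /* BSD and macOS provide this */
    #else
        return (uint32_t)rand();      /* the lowest common denominator */
    #endif
    }

    int main(void) {
        return random_u32() == 0 ? 1 : 0;
    }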

What you want is the ability to take some C and use it on a computer. What C programmers think you want is a graph of the actions that must be taken to get from something that’s nearly C source to a program you can use on a computer. What they’re likely to give you is a collection of things, each of which encapsulates part of the graph, and not necessarily all that well. Like autoconf and cmake, mentioned above, which do some of the transformations, but not all of them, and leave it to some other tool (in the case of cmake, your choice of some other tool) to do the rest.

Or look at make, which is actually entirely capable of doing the graph thing well, but frequently not used as such, so that make all works but whether making any particular target works depends on whether you’ve already done other things.

Now take every other programming language. Thanks to the ubiquity of the C run time and the command shell, every programming language needs its own build system named [a-z]+ake that is written in that language, and supplies a subset of make’s capabilities but makes it easier to do whatever it is that needs to be done by that language’s tools.

When all you want is to use the software.

Posted in code-level | Leave a comment

Tsundoku

I only have the word of the internet to tell me that Tsundoku is the condition of acquiring new books without reading them. My metric for this condition is my list of books I own but have yet to read:

  • the last three parts of Christopher Tolkien’s Histories of Middle-Earth
  • Strategic Information Management: Challenges and Strategies in Managing Information Systems
  • Hume’s Enquiries Concerning the Human Understanding
  • Europe in the Central Middle Ages, 962-1154
  • England in the Later Middle Ages
  • Bertrand Russell’s The Problems of Philosophy
  • John Stuart Mill’s Utilitarianism and On Liberty (two copies, different editions, because I buy and read books at different rates)
  • A Song of Stone by Iain Banks
  • Digital Typography by Knuth
  • Merchant and Craft Guilds: A History of the Aberdeen Incorporated Trades
  • The Indisputable Existence of Santa Claus
  • Margaret Atwood’s The Handmaid’s Tale

And those are only the ones I want to read and own (and I think that list is incomplete – I bought a book on online communities a few weeks ago and currently can’t find it). Never mind the ones I don’t own.

And this is only about books. What about those side projects, businesses, hobbies, blog posts and other interests I “would do if I got around to it” and never do? Thinking clearly about what to do next and keeping expectations consistent with what I can do is an important skill, and one I seem to lack.

Posted in advancement of the self | Leave a comment