Or: Not everyone works the way you work
Currently doing the rounds on Twitter is a paper from arXiv called Best Practices for Scientific Computing: a paper with 13 authors and 6 pages, including a page-long collection of references. Here’s the abstract:
Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists’ productivity and the reliability of their software.
Let me start with an anecdote. It’s 2004, and I’ve just started working as a systems manager in a university computing lab. My job is partly to maintain the computers in the lab, partly to teach programming and numerical computing to Physics undergraduates, and partly to write software that will assist in said teaching. As part of this work I started using version control, both for my source code and for some of the configuration files in /etc
on the servers. A more experienced colleague saw me doing this and told me that I was just generating work for myself, that this wasn’t necessary for the small things I was maintaining.
Move on now to 2010, and I’m working at a big scientific facility in the UK. Using software and a lot of computers, we’ve got something that used to take an entire PhD down to somewhere between one and eight hours. I’m on the software team, and yes, we’re using version control to track changes to the software and to understand what version is released. Well, kind of, anyway. The “core” is in version control, but one of its main features is to provide a scripting environment and DSL in which scientists at the “lab benches”, if you will, can write scripts that automate their experiments. These scripts are not (necessarily) version-controlled. Worse, the source code is deployed to the experimental stations, so someone who discovers a bug in the core can fix it locally without the change ever being tracked in version control.
So, a group does an experiment at this facility, and produces some interesting results. You try to replicate this later, and you get different results. Could be software-related, right? All you need to do is to use the same software that the original group used…unfortunately, you can’t. It’s gone.
That’s an example of how scientists’ failure to use tools from software development can compromise their science. There’s a lot of snake oil in the software field, both from people who want you to use their tools and methodologies because you’ll pay them for it, and from people who have decided that “their” way of working is correct and any other way is incorrect. You need to be able to cut through all of that nonsense to find out how particular tools or techniques affect the actual work you’re trying to do. Current philosophy of science places a high value on reproducibility and auditing. Version control supports both, so it would be beneficial for programmers working in science to use it.
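To make that concrete (this is my sketch, not something the paper prescribes, and the names and file paths are invented), here’s roughly what supporting reproducibility and auditing can look like in practice: an analysis script records the exact commit of the code that produced a result, so the result can later be traced back to the software that made it.

```python
import datetime
import json
import subprocess

def current_commit():
    """Return the git commit hash of the code being run, or None if unavailable."""
    try:
        return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return None

def save_result(result, path="result.json"):
    """Write a result together with enough provenance to find the software later."""
    record = {
        "result": result,
        "code_commit": current_commit(),  # which version of the analysis produced this
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

# Hypothetical usage: save_result({"peak_energy_keV": 6.4})
```

None of this is hard; the point is simply that “which software produced this number?” becomes a question with an answer.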
But version control is only one of the 10 recommendation sections in this paper (another is about using the computer to record history, something that I’ll assume is covered well by the above discussion). That leaves eight other sections, which each contain numbered pronouncements about how scientists should write software.
Were you surprised?
I expect that, if you write software in the commercial sector, you wouldn’t find any of their suggestions contentious: examples include naming things meaningfully, using a consistent convention for names and layout, not repeating yourself, and so on. I included this paper here to start a discussion of an important point.
What goes on in commercial software engineering is not the be-all and end-all of software development. Scientific software has been around for as long as there have been computers to run software on, and indeed not only is some really old software still in use but the people who wrote it are still around and maintaining it. In the aforementioned university lab, one of my tasks was to help a professor who’d been using his home-grown FORTRAN FITS manipulation routines for at least two decades. Every system he’d used it on—most recently PowerPC, MIPS and Alpha workstations—had been big-endian and he didn’t know why it gave the wrong results when used on our new Intel Mac. His postdocs and PhD students were using the same code—in the same FORTRAN language, which he’d either taught them or given them a book on. And then of course when they moved to a different institution they’d take that code and that understanding of code with them.
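For anyone who hasn’t been bitten by this particular problem: FITS stores its binary data big-endian, and code that reads the raw bytes straight into native integers silently gets nonsense on a little-endian machine like an Intel Mac. The professor’s code was FORTRAN, but a few lines of Python are enough to show the effect:

```python
import struct

# Four bytes encoding the 32-bit integer 1 in big-endian order,
# which is how a FITS file stores it.
raw = b"\x00\x00\x00\x01"

big_endian = struct.unpack(">i", raw)[0]  # interpret explicitly as big-endian
native = struct.unpack("=i", raw)[0]      # interpret in the machine's native byte order

print(big_endian)  # 1 on every machine
print(native)      # 1 on a big-endian machine, 16777216 on a little-endian one
```

On the PowerPC, MIPS and Alpha workstations the two interpretations happened to agree, which is why the code had worked for two decades.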
I imagine that many professional programmers are surprised not by the validity of (m)any of the statements made in this paper, but by the necessity of stating them. No, not everyone uses version control, or thinks that agile is the best thing ever, or uses consistent naming conventions throughout a source file. Indeed, in my experience of scientific programming, even the use of a symbolic debugger wasn’t a given. If you consider all of these problems to be “solved” then you’re really only looking at a limited part of the world of software development. It’s not just scientific computing that doesn’t match that worldview; what about all the people out there for whom programming is a bunch of Excel formulae and maybe the odd VBA macro pasted from a website?
In both commercial and scientific software development, understanding and behaviour is spread by sharing knowledge from masters to apprentices. I think that the reason there’s such a big difference in practice could be due to the longer generations in scientific software. That 20+-year-old FITS code still works, why change it? And those 20+-year-old practices that created the FITS code, well they still work too, don’t they?
Which of these things actually matters?
Based on my own experience I’d assert that all of them are important things for scientific programmers to know about. I’ve argued, hopefully convincingly, that version control has an important part to play in the scientific process: numerical analysis is a key part of many experiments and like the rest of the method it should be available for inspection and repetition. Science is also a collaborative activity, so it makes sense that some of the recommendations would be about collaboration: document the purpose of the code instead of its mechanics, write programs for people.
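To show what I mean by documenting purpose rather than mechanics, here’s a toy example of my own (the names and numbers are invented, not taken from the paper):

```python
# Hypothetical calibration values, purely for illustration.
gain = 0.0312              # keV per raw detector count, from a calibration run
samples = [120, 87, 210]   # raw detector counts

# A comment about mechanics restates what the code already says:
# loop over the samples, multiply each by gain, and add them up.
total = 0.0
for count in samples:
    total += gain * count

# A comment about purpose tells the reader why the code exists:
# convert raw detector counts into calibrated energies (keV) so that
# runs taken with different detector settings can be compared directly.
calibrated_total_keV = sum(gain * count for count in samples)

print(total, calibrated_total_keV)
```

The second version is the one a collaborator (or you, two years later) can pick up without having to reverse-engineer the intent.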
Could I justify those assertions with figures? Probably not. Is that important? Well, actually, it probably is. Of the researchers I’ve worked with (bear in mind this has always been in Physics), even many who are heavily invested in computational methods see programming (rightly) as a means to an end, and aren’t likely to try techniques that are new to them just for the sake of programming. Whatever the rational economic benefit, they’d rather stick with what they know and focus on getting new results without any surprises.
If you want to say “it’s better to work this way” or “you’ll get results quicker like this”, to a bunch of physicists, you have to show them data to prove it. A paper like the one I’m discussing here will likely be read, if it:
- gets published
- said publication happens in a relevant journal
- said publication is picked up and circulated in enough news sources that researchers who don’t read the publishing journal get wind of it
On the other hand, it’s likely that the article’s tone will ensure that it only preaches to the converted. Nothing in the paper says “this is actually better”, just “professional programmers do these things”. Exercises like Software Carpentry are likely to only appeal to people who already have an interest in bettering their own programming abilities. As I said, most researchers I know don’t: they want to publish, and programming is a necessary—albeit complex—tool helping them to achieve that.
Why is this suddenly an issue?
It isn’t. A very quick search for errors in scientific computing yielded papers published across the last two decades, and I could probably find more. The abstracts for these (I did say it was a very quick search) include some pining for the use of skills from software engineering, or a closer focus by software engineering researchers on scientific computing projects.
What can be done about it?
That’s a very good question. If we knew what to do to improve the quality of any software production effort, there’d be a lot more good software in the world :). If the techniques from commercial software really would help make scientific software better, why wait for the scientists to apply them? Plenty of scientific software is open source, so in the case of things like analysis tools and automatic tests, sufficiently motivated individuals could just apply those things and then demonstrate to the project maintainers how much of a difference they’ve made. Sure, there will be problems: I once worked on some software that could only be run successfully if there was a particle accelerator connected to your workstation. But the first thing I did was to make a virtual particle accelerator, demonstrating how much easier it was to make progress if you could do it away from the experiment.
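In testing terms, that virtual accelerator was a stand-in for the hardware: work out the narrow interface the analysis code actually needs, then supply a fake implementation of it so the code can run and be tested away from the beamline. A rough sketch (the names are my own invention, not the real project’s):

```python
from abc import ABC, abstractmethod
import random

class BeamSource(ABC):
    """The narrow interface the analysis code actually depends on."""

    @abstractmethod
    def read_intensity(self) -> float:
        """Return the current beam intensity reading."""

class RealAccelerator(BeamSource):
    """Talks to the physical machine; only usable at the facility."""

    def read_intensity(self) -> float:
        raise RuntimeError("no accelerator connected to this workstation")

class VirtualAccelerator(BeamSource):
    """A stand-in that produces plausible readings for development and tests."""

    def __init__(self, mean: float = 100.0, jitter: float = 5.0):
        self.mean = mean
        self.jitter = jitter

    def read_intensity(self) -> float:
        return random.gauss(self.mean, self.jitter)

def average_intensity(source: BeamSource, readings: int = 10) -> float:
    """Analysis code written against the interface, not against the hardware."""
    return sum(source.read_intensity() for _ in range(readings)) / readings

if __name__ == "__main__":
    # Away from the experiment, develop and test against the virtual machine.
    print(average_intensity(VirtualAccelerator()))
```

Once the analysis code depends only on BeamSource, swapping the real machine in at the facility is a small, local change.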
This brings me on to another option: scientific computing teams can employ commercial developers. I’ve seen it happen, I’ve seen it work and I’ve seen it fail. The ways in which it works include sharing knowledge from both disciplines, and discussing and improving practices. The ways in which it fails come down to frustration on both sides: scientific programmers feel that refactoring is change for change’s sake, perhaps, and software engineers think that not using their favourite practices is the realm of cowboys. That means that for a cross-discipline software team to work, it needs good leadership: the team needs to be designed to appreciate the different skills and viewpoints brought by the different members. And now we’ve gone fully out of the realm of science into management techniques.