Week Two

Where week one featured an observation of how post-work life was similar to working life, this week’s post is a catalogue of differences. Not all of them are huge.

No watch

I own five watches (three wrist, two pocket) but have only worn one once in the last two weeks. Most of the time, I don’t need to know what the time is and don’t need to mark its passing.

No beard

When you get up at 5:30 every morning for the commuter train, putting time into your appearance takes a back seat to getting out of the door quickly. I now shave every day; not particularly important except that it indicates I have time to do so.

No computer

Or at least no sitting at a computer. I’m writing this on a laptop which I’ve just been using to write up the dissertation, but when I’m not doing that, or editing music scores in MuseScore, I’m not “at” a computer. Tempus Fugit was written on my phone: the best computer is the one you have on you.

No tech news

My morning ritual used to involve a lot of RSS tech news feeds, as well as browsing aggregation sites like lobste.rs to find articles to read. Now that’s all been pared back to a few people who write well. I usually have only two or three unread articles a day, which are more often than not web comics.

Posted in advancement of the self | Leave a comment

Tempus Fugit

A common concern programmers have when I talk about my year off is that I’ll be unemployable at the end of it. After all, the industry moves really quickly, and if I’m off thinking about things that aren’t programming, I’ll fall off the treadmill. Programmers are like the Red Queen, constantly running in order to stand still. Aren’t they?

Well, no, not really. Looking at the current TIOBE programming language index, I see that I’ve written software in nine of the top ten languages (all of which are at least a decade old). The likelihood that all of these will become obsolete in a year is minuscule, and the likelihood that the underlying principles of organisation of thought will perish is smaller still.

What about the platforms? Will big screens, small screens, touch screens, pointing devices, keyboards, web clients or network servers disappear within the next year? How would a freeze-dried programmer from 2014 or even 2005 cope with today’s near-identical world?

Maybe, should I come back to professional programming next year, I’ll find that I’ve grown my ability to understand things that aren’t programming; a skill that could stand programmers in great stead. I doubt, however, that I’ll have lost my ability to use a text editor and a compiler, tools that remain obstinately similar to their 1950s forebears.

Posted in advancement of the self | 2 Comments

Week One

Nearly eight days ago I stopped working to have a break. I’ve been describing it as a “gap year”, because I’ve arranged my finances to last at least that long with some contingency. Also, I want to set a year as the anchor in my mind, so I don’t do what normally happens and take the first interesting-looking job that comes along. There’s a danger that I’ll be bored around a month from now and start interviewing again.

Honestly, after one week I feel better rested but not like I’ve made some fundamental life change. That’s partly because one of my first projects for this year is to complete an MSc in software engineering, so I’m still “a programmer” by trade to some extent. One goal for this year is to experience more of humanity than just programming.

I’ve taken some time out to do that which can be described (with capital letters, no less) as The Arts: visiting the Birmingham Museum and Art Gallery and the Library of Birmingham. And I turned my hand to graphic design to lay out a new “business” card, which I hope I’ll have to hand next week. I’m speaking at #pragmaconf in October and people there might want to know who I am.

In literature news, Goodreads tells me I read Snow Crash, Emotional Design and a Philip K. Dick anthology, and started on The Salmon of Doubt. Add to that this month’s Linux Voice, Linux Format and CACM. A lot of reading, but things that programmer-me would have got around to anyway.

In home economics news, I did bake two loaves of plum bread (plums “sourced locally”, by which I mean they were scrumped from a tree up the road), which is something I haven’t had time to do in nearly a year.

Posted in advancement of the self | 2 Comments

Criticising the Four Freedoms

The core principle of Free Software is that people who use software retain certain freedoms, unlike the situation with proprietary software in which all of the freedom associated with the software remains with the vendor. Those are the Four Freedoms:

A program is free software if the program’s users have the four essential freedoms:

  • The freedom to run the program as you wish, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

Without other resources, these freedoms are pretty academic. Let’s take access to a computer as a given for the purpose of this argument: you’re one of “the program’s users”, so presumably you have the material needed to use the program.

But what about the resources needed to exercise the other freedoms?

I can study and modify the program. Access to the source code is indeed a prerequisite; comprehensible source code is also a prerequisite. So are the study materials I need to comprehend the source code, and the time it’ll take me to do that study.

So that’s me on the receiving end of free software; what about the producing end? Nothing in the world of free software compels me to choose the simplest language, to design my software for comprehensibility, or to make available the tools and information needed to understand the source code that enables the other freedoms. But unless I do those things, the four freedoms are only hypothetical.

Posted in freesoftware, Responsibility | 2 Comments

Improving a presentation with slides

Take a look at your slides. For each slide, think how you would present the same information if you didn’t have the slide. Practise that, so that you can give the information on the slide without using the slide as an aide-mémoire. Practise that, until you can introduce that topic, discuss it, and move on to the next without a single reference to the slide. Do the same for each slide.

How will that improve my slides?

It won’t. It will improve your presentation with slides, by turning it into a presentation without slides.

As an optional extra, you could make new slides that support the presentation, but it shouldn’t be necessary.

Posted in performance, Talk | 2 Comments

“When I had that problem”

A common lie in programming is that every project is new, that no problem has been seen before. This is the reason given for estimates being bad, for plans being bad, for design being bad…for anything other than diving in uninformed being bad.

But I’ve noticed that more and more frequently my discussions about problems, whether technical, organisational or personal, involve the phrase “when I had that problem”. That somebody (and, as time goes on, that’s more frequently me) has seen this problem, or one with many similarities, before.

It’s time to stop pretending that your UI fronting a database table is up there among the Hilbert problems as one of the big research questions of the 21st century. We have seen that before, or something like it; we tried things, and some of them worked. They probably weren’t the best possible solutions, but they were solutions.

Posted in advancement of the self, edjercashun | Leave a comment

PL personality theory

An analysis of programmer personality traits inferred from their answer to the question “which is your favourite programming language?”

Algol About to re-enact that scene in Jumanji where Robin Williams has a huge beard.

Basic Remembers a time when you could code a whole platform game with twenty levels in 6k of RAM. Probably works on some trading platform that needs the JVM heap size bumped to 4GB to add two numbers.

C Learnt programming once, what more could there be to it?

C++ Learnt programming once, it was horrible.

C# Wears Microsoft shoes, Microsoft trousers, and a Microsoft t-shirt. C# also goes by the name “Washington State Swift”.

D Probably best to ask again next week.

Elixir I used to be a Ruby programmer until I realised I hate Ruby programmers.

F# Wears the same clothes as the C# programmer but does so ironically.

Go Uses Google+ earnestly.

Java Has hobbies that aren’t programming.

JavaScript Look, even Yersinia pestis was popular once.

Lisp Mostly calm with sudden outbursts of zen.

Objective-C At the intersection of technology and liberal cash. In danger of progressing to Smalltalk.

Perl Stoic in the face of abuse. Ignores it and carries on getting loads of work done.

Python Had an argument with a Perl programmer in 2004. They each think they won; neither is correct.

Ruby Used to use Java but then learned object-oriented programming and had to move on.

Ruby on Rails Like that kid in that movie. No, not War Games, the other one. Home Alone.

Scheme Pretentious. Probably has a blog named for a pun on a classic computing textbook.

Self Slightly further along in their hatred of computing than a Smalltalker.

Smalltalk About to re-enact that scene in Planet of the Apes where Charlton Heston finds the statue.

Swift Like the person who goes into the specialist metal record store and conspiratorially asks whether they’ve got anything by Metallica.

Tcl Submits write-in answers to multiple choice questions.

Posted in whatevs | Leave a comment

The paradox of scripting

But how can scripting be dead? There’s Bash, and PowerShell, and Ruby, and… even Perl is still popular among sysadmins. There’s never been a better time to be a programmer or other IT professional trying to automate a task.

True, but there’s never been a worse time for someone who doesn’t care about computers to use a computer to automate a task. Apps are in-your-face “experiences” to be “used”, and for the most part can’t be glued together.

The message given off by the state of scripting is that scripting is programming, that programming is a specialist pursuit, and that regular folk should therefore not be shown scripting nor given access to its power. They should rely on the technomages to magnanimously grant them the benefits of computing.

If I were to build an automation technology for today’s mobile platforms, I’d probably call it Prometheus.

Posted in script | 8 Comments

The death of scripting

Back in the day, when programmers knew that they couldn’t possibly think of everything somebody might want to do with a computer, there were scripts. If somebody could find enough of the pieces of the task they wanted to accomplish, they might be able to put them together themselves in furtherance of that task.

Many times, constructing these scripts was a lot like programming the software being glued together. On the Amiga there was ARexx, on the PC there were batch files, on the Mac there was AppleScript: all programming in its own right, making new applications out of the ones you’d bought.
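
To make that concrete, here is a minimal sketch of the kind of glue script being described, written in Python purely for illustration rather than in ARexx or AppleScript. The CSV export, its column names and the mail-merge output format are hypothetical stand-ins for whatever two applications you happen to own.

    #!/usr/bin/env python3
    # A hypothetical glue script: take data exported by one application
    # (a contacts manager that writes CSV) and turn it into input for
    # another (a mail client that accepts a plain-text mail-merge file).
    # Neither format belongs to a real product; they are illustrative.

    import csv
    import sys

    def main(contacts_csv: str, merge_out: str) -> None:
        # Read the first application's export: one contact per row,
        # with "name" and "email" columns.
        with open(contacts_csv, newline="") as infile:
            contacts = list(csv.DictReader(infile))

        # Write the second application's input: one "To:" line per contact.
        with open(merge_out, "w") as outfile:
            for contact in contacts:
                outfile.write(f"To: {contact['name']} <{contact['email']}>\n")

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])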

Applications. Here’s the dichotomy. Think of two axes on a chart: one axis records the things you want to do with a computer: the tasks you want to complete. The other records the things you can do with the computer: the applications to which it can be put.

These axes are not perpendicular, as if your applications made no progress at all toward your tasks. But they are not parallel either. And where the directions taken by the applications do not progress your tasks, in comes scripting to provide bridges between those applications and take you on your way.

Not all of these bridges are esoteric programming languages on top of other programming languages. NeXT had services, in which applications could publish menu items that became available in other applications when the two were using the same data. Apple took a bit from each column to make Automator, a UI in which you could snap together bits of applications to accomplish your task.

All of this represented a helpfulness and humility on the part of the applications makers: we do not know everything you want to do. We do know some things you might want to do: we’ll let you combine them and mash them up – “rip, mix and burn” as they used to say – making you more satisfied and our stuff more useful.

Sadly, all of this utility plays merry hell with branding. Applications aren’t just utilities: they’re icons in the launcher, splash screens, names in menu bars, reminders that I also make other applications, and by the way have you rated this one five stars yet? Scripts stop people seeing that; they’re too busy using their computers productively to see the marketing.

And so it’s sad to see scripting die out as the popular platforms for application development fail to support it. Instead of the personal control of the script – I will take this information from that app, and put this part of it in that app – we have the corporate control of the API. This app maker and that app maker are BFFs: sign in here to let them share everything. After all, they know best.

Ultimately the death of scripting is hubristic. We know how you want to use a computer. If you’re trying to do something that we didn’t sell to you, you must be holding it wrong.

Posted in architecture of sorts, script | 8 Comments

On having things to say

I enjoyed Jaimee’s discussion of preparing her public talks, and realised that my approach has developed in a different direction. I’ve probably talked about this before, but I’ve also changed how I go about it since then. This is my technique, particularly where it diverges from Jaimee’s; synthesis can come later (and will undoubtedly help me!).

I start by thinking up some pithy title: previous talks including “Object-Oriented Programming in Objective-C”, “By your _cmd”, “The Principled Programmer” and “I have no idea what I’m doing” all began there. I often commit—even if only privately—to using a particular title before I have any idea what the talk will be about. I enjoy the creative exercise of fitting the rest of the talk into that constraint!

With a title in place, I brainstorm all of the things I can think of that could potentially fit into that topic. Usually I look back at that brainstorm and discover that it’s rambling, disconnected and mostly boring. Looking through, I search for two or three things that are interesting, particularly if they suggest conflicting ideas or techniques that can be explored, challenged and resolved.

Then it’s time for another outline :). This one explores the selected areas in depth, and it’s from this that I pick the main headlines for the talk, which also shape the introduction and conclusion. With those in mind I write the talk out as an essay, making sure it is consistent, complete and (to the extent I can judge this myself) interesting. If it looks OK, then by this point I’ve prepared so much that I can remember the flow of the talk and give it without aids, though I still look for opportunities to support the presentation visually in the slides. In the case of my Principled Programmer talk, I realised the slides weren’t helping at all, so I did without them.

There are plenty of better presenters than me in the world; Jaimee is one of them. I have merely trial-and-errored my way into a situation where sometimes the same people who see me talk ask me back. I hope that by comparing my method with Jaimee’s and those of other people I can find out how to prepare a better talk.

Posted in Talk | Leave a comment