On entering programming in 2022

I recently taught an introduction to Python course to final-year undergraduate students. These students had little to no programming experience, and were all expected to get set up with Python (using the Anaconda environment, which we had determined to be the easiest way to get a reasonable baseline configuration) on laptops they had brought themselves.

What follows is not a slight on these people, who were all motivated, intelligent, and capable. It is a slight on the world of programming in the current age, if you are seeking to get started with putting a general-purpose computer to your own purposes and merely own a general-purpose computer.

One person had a laptop that, being a mere six (6) years old, was too old to run the current version of Anaconda Distribution. We had to crawl through the archives, guessing what older version might work (i.e. might both run on their computer and still permit them to follow the course).

Another had a modern laptop and the same version of Python/tools that everyone else was using, except that their IDE would crash if they tried to plot a graph in dark mode.

Another had, seemingly without ever having launched the shell in the time they had owned their computer, got their profile into a state where none of the system binary folders were on their PATH. Hmm, python3 doesn’t work; let’s use which python to find out why not. Hmm, which doesn’t work; let’s use ls to find out why not. Hmmm…

Many, through not having used terminal emulators before, did not yet know that terminal emulators are modal. There are shell commands, which you must type when you see a $ (or a % or a >) and which will not work when you can see a >>>. There are Python commands, which work the other way around. If you type a command that launches nano/pico, there are other rules.

By the way, conda and pip (and poetry, if you try to read anything online about setting up Python) are Python things, but you cannot use them as Python commands. They are shell commands.

By the other way, everyone writes those shell commands with a $ at the front. You do not write the $. Oh, and by the other other way: they don’t necessarily tell you to open the Terminal to do it.
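
Pulling those warnings together, a first session looks something like the sketch below. The $ and >>> prompts are printed for you: you type only what follows them, and pandas stands in for whichever package the course happens to need.

    $ pip install pandas      # a shell command: type it at the $ (or % or >) prompt
    $ python3                 # also a shell command; it switches you into Python
    >>> import pandas         # a Python command: only works at the >>> prompt
    >>> exit()                # leaves Python and returns you to the shell
    $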

Different environments—the shell, Visual Studio Code, Spyder, PyCharm—will do different things with respect to your “current working directory” when you run a script. They will not tell you that they have done this, nor that it is important, nor that it is why your script can’t find a data file that’s RIGHT THERE.
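
One defensive habit sidesteps the whole problem: locate data files relative to the script itself, rather than relative to whatever the current working directory happens to be. A minimal sketch, with data.csv standing in for whichever file the script needs:

    from pathlib import Path

    # __file__ is the path to this script; resolve() makes it absolute.
    # This finds data.csv next to the script, no matter which directory
    # the shell, Spyder, or any other environment ran it from.
    here = Path(__file__).resolve().parent

    with open(here / "data.csv") as f:
        print(f.readline())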

This is all way before we get to the dark art of comprehending exception traces.
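
For what it’s worth, the one habit that demystifies most traces is reading them from the bottom up: the last line names the error, and the lines above it trace the calls that led there. A contrived example:

    def mean(values):
        return sum(values) / len(values)  # fails when values is empty

    mean([])

    # Traceback (most recent call last):
    #   File "example.py", line 4, in <module>
    #     mean([])
    #   File "example.py", line 2, in mean
    #     return sum(values) / len(values)
    # ZeroDivisionError: division by zero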

When I were a lad and Silicon Valley were all fields, you turned a computer on and it was ready for some programming. I’m not suggesting returning to that time: computers were useless then. But I do think it is needlessly difficult to get started with “a programming language that lets you work quickly” in this time of ubiquitous programs.

Posted in edjercashun | 1 Comment

Episode 54: professionalism and responsibility

The idea that increased autonomy and privilege for software engineers can only come when we have better confidence that software engineers are working in the best interests of society.

Thanks for listening! Don’t forget to check out my mailing list for fortnightly info on how programmers can become software engineers.


More detail on software requirements

My talk at AppDevCon discussed the Requirements Trifecta but turned it into a Quadrinella: you need leadership vision, market feedback, and technical reality to all line up as listed in the trifecta, but I’ve since added a fourth component. You also need to be able to tell the people who might be interested in paying for this thing that you have it, and that it might be worth paying for. If you don’t have that then, if anybody has heard of you at all, it will be as a company that went out of business with a product “five years ahead of its time”: you were able to build it, it did something people could benefit from in an innovative way, but nobody realised that they needed it.

A 45 minute presentation was enough to introduce that framework and describe it, but not to go into any detail. For example we say that we need “market feedback”, i.e. to know that the thing we are going to build is something that some customer somewhere will actually want. But how do we find out what they want? The simple answer, “we ask them”, turns out to uncover some surprising complexity and nuance.

At one end, you have the problem of mass-market scale: how do you ask a billion people what they want? It’s very expensive to do, and even more expensive to collate and analyse those billion answers. We can take some simplifying steps that reduce the cost and complexity, in return for finding less out. We can sample the population: instead of asking a billion people what they think, we can ask ten thousand people what they think and apply what we learn to all billion people.
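
As a toy sketch of that simplification (the numbers are the ones from the paragraph above; a uniform random sample is assumed, and that assumption is doing a lot of work):

    import random

    population = range(1_000_000_000)           # the billion people
    sample = random.sample(population, 10_000)  # ask these ten thousand instead
    # Whatever we measure about the sample we extrapolate to the billion,
    # which is only valid if the sample really is unbiased.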

We have to know that the way in which we select those ten thousand people is unbiased; otherwise we’re building for an exclusive portion of the target billion. Send a survey to people’s work email addresses on a Friday, and some will not pick it up until Sunday as their weekend is Fri-Sat. Others will be on holiday, or not checking their email that day, or feeling grumpy and inclined to answer with the opposite of their real thoughts, or getting everything done quickly before the weekend and disinclined to think about your questions at all.

Another technique we use is to simplify the questions—or at least the answers we’ll accept to those questions—to make it easier to combine and aggregate those answers. Now we have not asked “what do you think about this” at all; we have asked “which of these ways in which you might think about this do you agree with?” Because people are inclined to avoid conflict, they tend to agree with us. Ask “to what extent do you agree that spline reticulation is the biggest automation opportunity in widget frobnication?” and you’ll learn something different from the person who asked “to what extent do you agree that spline reticulation is the least important automation opportunity in widget frobnication?”

We’ll get richer information from deeper, qualitative interactions with people, and that tends to mean working with fewer people. At the extreme small end we have one person: an agent talks to their client about what that client would like to see. This is quite an easy case to deal with, because you have exactly one viewpoint to interpret.

Of course, that viewpoint could well be inconsistent. Someone can tell you that they get a lot of independence in how they work, then in describing their tasks list all the approvals and sign-offs they have to get. It can also be incomplete. A manager might not fully know all of the details of the work their reports do; someone may know their own work very well but not the full context of the process in which that work occurs. Additionally, someone may not think to tell you everything about their situation: many activities rely on tacit knowledge that’s hard to teach and hard to explain. So maybe we watch them work, rather than asking them how they work. Now, are they doing what they’re doing because that’s how they work, or because that’s how they behave when they’re being watched?

Their viewpoint could also be less than completely relevant: maybe the client is the person paying for the software, but are they the person who is going to use it? Or the person impacted by the software’s outputs and decisions? I used the example in the talk of expenses software: very few people, when asked “what’s the best software you’ve ever used?”, come up with the tool they use to submit receipts for expense claims. That’s because it’s written for the accounting department, not for the workers spending their own money.

So, we think to involve more people. Maybe we add people’s managers, or reports, or colleagues, from their own and from other departments. Or their customers, or suppliers. Now, how do we deal with all of these people? If we interview them each individually, then how do we resolve contradiction in the information they tell us? If we bring them together in a workshop or focus group, we potentially allow those contradictions to be explored and resolved by the group. But potentially they cause conflict. Or don’t get brought up at all, because the politics of the situation lead to one person becoming the “spokesperson” for their whole team, or the whole group.

People often think of the productiveness of a software team as the flow from a story being identified as “to do” to working software being released to production. I contend that many of the interesting and important decisions relating to the value and success of the software were made before that limited part of the process.

Posted in software-engineering | Leave a comment

Why mock objects aren’t popular this week

The field of software engineering doesn’t change particularly quickly. Tastes in software engineering change all the time: keeping up with them can quickly result in seasickness or even whiplash. For example, at the moment it’s popular to want to do server-side rendering of front end applications, and unpopular to do single-page web apps. Those of us who learned the LAMP stack or WebObjects are back in fashion without having to lift a finger!

Currently it’s fashionable to restate “don’t mock an interface you don’t own” as the more prescriptive, taste-driven statement “mocks are bad”. Rather than change my practice (I use mocks and I’m happy with that, from 2014, is still OK), I’ll ask why this particular taste has arisen.

Mock objects let you focus on the ma, the interstices between objects. You can say “when my case controller receives this filter query, it asks the case store for cases satisfying this predicate”. You’re designing a conversation between independent programs, placing restrictions on the messages they use to communicate.
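
Here’s a sketch of that style using Python’s unittest.mock. CaseController is hypothetical, defined only so the example is self-contained:

    from unittest.mock import Mock

    class CaseController:
        """Hypothetical controller; defined here only so the sketch runs."""
        def __init__(self, store):
            self.store = store

        def filter(self, **query):
            return self.store.cases_matching(**query)

    def test_filter_query_is_delegated_to_the_store():
        store = Mock()
        controller = CaseController(store)

        controller.filter(status="open")

        # The assertion is about the conversation between the two objects,
        # not about whatever data comes back.
        store.cases_matching.assert_called_once_with(status="open")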

But many people don’t think about software that way, and so don’t design software that way either. They think about software as a system that holistically implements a goal. They want to say “when my case controller receives this filter query, it returns a 200 status and the JSON representation of cases matching that query”. Now the mocks disappear, because you don’t design how the controller talks to the store; you design the outcome of the request, which may well include whatever behaviour the store implements.
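
The same hypothetical controller tested in that holistic style: the mock disappears in favour of a small but real collaborator, and the assertion is about the outcome. (A web version would assert on the 200 and the JSON; this sketch keeps to plain objects and reuses the CaseController from above.)

    class InMemoryCaseStore:
        """A real, if tiny, store: no mock involved."""
        def __init__(self, cases):
            self.cases = cases

        def cases_matching(self, **query):
            return [case for case in self.cases
                    if all(case.get(k) == v for k, v in query.items())]

    def test_filter_returns_the_matching_cases():
        store = InMemoryCaseStore([{"id": 1, "status": "open"},
                                   {"id": 2, "status": "closed"}])
        controller = CaseController(store)

        # However the controller and the store arrived at this result
        # between themselves is their business; we assert on the outcome.
        assert controller.filter(status="open") == [{"id": 1, "status": "open"}]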

Of course, tests that depend on the specific behaviour of collaborators are more fragile, and the more specific prescription “don’t mock what you don’t control” trades on that fragility: if the behaviour of the thing you don’t control changes, you won’t notice, because your mock carries on working the way it always did.

That problem is only a problem if you don’t have any other method of auditing your dependencies for fitness for purpose. If you’re relying on some other interface working in a particular way then you should probably also have contract tests, acceptance tests, or some other mechanism to verify that it does indeed work in that way. That would be independent of whether your reliance is captured in tests that use mock objects or some other design.
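
A sketch of one such mechanism, the contract test: one set of assertions exercised against every implementation of the interface, so a fake cannot silently drift away from the real thing. Names are again hypothetical, reusing the InMemoryCaseStore from above:

    import unittest

    class CaseStoreContract:
        """Assertions that any case store, real or fake, must satisfy."""

        def make_store(self, cases):
            raise NotImplementedError

        def test_matching_is_by_field_equality(self):
            store = self.make_store([{"id": 1, "status": "open"},
                                     {"id": 2, "status": "closed"}])
            matches = store.cases_matching(status="open")
            self.assertEqual([{"id": 1, "status": "open"}], matches)

    class InMemoryStoreMeetsContract(CaseStoreContract, unittest.TestCase):
        def make_store(self, cases):
            return InMemoryCaseStore(cases)

    # A second subclass wrapping the real, externally-owned store would run
    # the same assertions against it, catching the drift that a mock hides.

    if __name__ == "__main__":
        unittest.main()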

It’ll only be a short while before mock objects are cool again. Until then, this was an interesting diversion.

Posted in TDD | Tagged | 1 Comment

Bizarrely, the Guinness Book of World Records lists the “first microcomputer” as 1980’s Xenix. This doesn’t seem right to me:

  1. Xenix is an operating system, not a microcomputer.
  2. Xenix was announced in 1980 but not shipped until 1981.
  3. The first computer to be designed around a microprocessor is also the first computer to be described in patents and marketing materials as a “microcomputer”—the Micral N.
Posted on by Graham | 1 Comment

On self-taught coders

When a programmer says that they are ‘self-taught’ or that they “taught themselves to code”, what do they mean by it?

Did they sit down at a computer, with reference to no other materials, and press buttons and click things until working programs started to emerge?

It’s unlikely that they learned to program this way. More probable is that our “self-taught” programmer had some instruction. But what? Did they use tutorials or reference content? Was the material online, printed, or hand written? Did it include audio or visual components? Was it static or dynamic?

What feedback did they get? Did their teaching material encourage reflection, assessment, or other checkpoints? Did they have access to a mentor or community of peers, experts, or teachers? How did they interact with that community? Could they ask questions, and if so what did they do with the answers?

What was it that they taught themselves? Text-processing routines in Commodore BASIC, or the Software Engineering Body of Knowledge?

What were the gaps in their learning? Do they recognise those gaps? Do they acknowledge the gaps? Do they see value in the knowledge that they skipped?

And finally, why do they describe themselves as ‘self-taught’? Is it a badge of honour, or of shame? Does it act as a signal for some other quality? Why is that quality desirable?

Posted in advancement of the self, edjercashun | Leave a comment

On the locations of the bullet holes on bombers that land successfully

Ken Kocienda (unwrapped twitter thread, link to first tweet):

I see so many tweets about agile, epics, scrums, story points, etc. and none of it matters. We didn’t use any of that to ship the best products years ago at Apple.

Exactly none of the most common approaches I see tweeted about all the time helped us to make the original iPhone. Oh, and we didn’t even have product managers.

Do you know what worked?

A clear vision. Design-led development. Weekly demos to deciders who always made the call on what to do next. Clear communication between cross functional teams. Honest feedback. Managing risk as a function of the rate of actual progress toward goals.

I guess it’s tempting to lard on all sorts of processes on top of these simple ideas. My advice: don’t. Simple processes can work. The goal is to ship great products, not build the most complex processes. /end

We can talk about the good and the bad advice in this thread, and what we do or don’t want to take away, but it’s fundamentally not backed up by strong argument. Apple did not do this thing that is talked about now back in the day, and Apple is by-our-lady Apple, so you don’t need to do this thing that is talked about now.

There is lots that I can say here, but my secondary thing is to ask how much your organisation and problem look like Apple’s organisation and problem before adopting their solutions, technological or organisational.

My primary thing is that pets.com didn’t use epics, scrums, story points, etc. either. Pick your case studies carefully.

Posted in agile | Tagged | Leave a comment

Even more on generalist software engineering

There is a difference between a generalist software engineer, and a polyglot programmer. What is that difference, and why did I smoosh the two together in yesterday’s post?

A polyglot programmer is a programmer who can use, or maybe has used, multiple programming languages. That doesn’t mean that they have relevant experience or skills in any other part of software engineering. They might, and so might a monoglot programmer. But (in a work context) they’re working as a programmer and are paid for doing the programming.

A generalist software engineer has knowledge and experience relevant to the rest of the software engineering endeavour. Not just the things that programmers need to know beyond the programming, but the whole practice.

In my experience, a generalist software engineer is usually also a polyglot programmer, for a couple of reasons.

  • to get enough experience at all the bits of software engineering, the generalist’s career has probably survived multiple programming language hype cycles.
  • in getting that experience, they have probably had to work with different languages in different contexts: whatever their software products are made out of; scripts for deployment and operations; plugins for various tools; changes to existing components.

More importantly, the programming part of the process is a small (crucial, admittedly, but small nonetheless) part of the generalist’s work. Knowing all of the gotchas of any one language is something they can ask the specialists about. Fred Brooks would have had a Chief Programmer who knew what the team was trying to do, and a language lawyer who specialised in the tricks of the programming language.

Probably the closest thing to the generalist software engineer that many organisations know how to pay for in modern times is the “agile consultant”. I don’t think it’s strictly a fair comparison. Strictly speaking, an agile consultant is something like a process expert, systems analyst, or cybernetician. They understand how the various parts of the software engineering endeavour affect each other, and can help a team to improve its overall effectiveness by knowing which part needs to change and how.

And, because they use the word agile in their title, they do this with an air of team empowerment and focus on delivery.

Knowing how the threat model impacts on automated testing enables process improvement, but does not necessarily make a software engineering generalist. An actually-general generalist could perhaps also do the threat modelling and create the automated tests. To avoid drawing distinctions between genuine and inauthentic inhabitants of Scotland let’s say that a generalist can work in ‘several’ parts of the software engineering field.

We come back to the problem described in yesterday’s post, but without yesterday’s programming-centric perspective. A manager knows that they need a security person and a test automation person, but does not necessarily know how to deal with a security-and-test-automation-and-maybe-other-things person.

This topic of moving from a narrow focus on programming to a broader understanding of software engineering is the subject of my newsletter: please consider signing up if this topic interests you.

Posted in software-engineering | 1 Comment

On interviewing and generalist software engineers

After publishing podcast Episode 53: Specialism versus generality, Alan Francis raised a good point:

This could be very timely as I ponder my life as a generalist who has struggled when asked to fit in a neat box career wise.

https://twitter.com/PossiblyAlan/status/1523755064879632384

I had a note about hiring in my outline for the episode, and for some reason didn’t record a segment on it. As it came up, I’ll revisit that point.

It’s much easier to get a job as a specialist software engineer than as a generalist. I don’t think that’s because more people need specialists than need generalists, though I do think that people need more specialists than they need generalists.

For a start, it’s a lot easier to construct an interview for a specialist. Have you got any experience with this specialism? What have you done with it? Do you know the answers to these in-group trick questions?

Generalists won’t do well at that kind of question. Why bother remembering the answer to a trick question about some specific technology, when you know how to research trick answers about many technologies? But the interviewer hears “doesn’t even know the first trick answer” and wonders “how do I know you can deliver a single pint of software on our stack if you can’t answer a question set by a junior our-stack-ist?”

If you want to hire a generalist software engineer…ah. Yes. I think that maybe some people don’t want to, whether or not they know what the generalist would do. They seem to think it’s a “plural specialist”, and that a generalist would know all the trick questions from multiple specialisms.

This is the same thinking that yields “a senior developer is like a junior developer but faster”; it is born of trying to apply Taylorian management science to knowledge work: a junior TypeScript programmer can sling a bushel of TypeScript in a day. Therefore a senior TypeScript programmer can sling ten gallons, and a generalist programmer can sling one peck of TypeScript, two of Swift, and one of Ruby in the same time.

I think that the hiring managers who count contributions by the bushel are the ones who see software engineering as a solitary activity. “The frontend folks aren’t keeping pace with the backend folks, so let’s add another frontend dev and we’ll have four issues in progress instead of three.” I have written about the flawed logic behind one person per task before.

A generalist may be of the “I have solved problems using computers before, and can use computers to solve your problem” kind, in which case I might set aside an hour to pair with them on solving a problem. It would be interesting to learn both how they solve it and how they communicate and collaborate while doing so.

Or they may be of the “I will take everybody on the team out to lunch, then the team will become better” kind. In which case I would invite them to guest-facilitate a team ceremony, be it an architecture discussion, a retrospective, or something else, and see how they uncover problems and enable solutions.

In each case the key is the collaboration. A software engineering generalist might understand and get involved with the whole process using multiple technologies, but that does not mean that they do all of the work in isolation themselves. You don’t replace your whole team, but you do (hopefully) improve the cohesiveness of the whole team’s activity.

Of course, a generalist shouldn’t worry about trying to sneak past the flawed reasoning of the bushel-of-trick-questions interview to get hired. They should recognise that the flawed reasoning means that they won’t work well with that particular manager, and look elsewhere.

Posted in software-engineering | 3 Comments

Episode 53: Specialism versus generality

I look at the difference between being a deep specialist as a software engineer working on a particular “stack” and a generalist who builds software using a wide variety of tools, from the perspective of someone who has done both.
