On programmer behaviours that make Scrum so bad

Respectable persons of this parish of Internet have been, shall we say, critical of Scrum and its ability to help makers (particularly software developers) to make things (particularly software). Ron Jeffries and GeePaw Hill have both deployed the bullshit word.

My own view, which I have described before, is that Scrum is a baseline product delivery process and a process improvement framework. Unfortunately, not many of us are trained or experienced in process improvement, and not many of us know what we should be measuring to get a better process, so the process never improves.

At best, then, many Scrum organisations stick with the baseline process. That is still scadloads better than what many software organisations were trying before they had heard of Agile and Scrum, or after they had heard of it and before their CIO’s in-flight magazine made it acceptable. As a quick recap for those who were recruited into the post-Agile software industry, we used to spend two months working out all the tasks that would go into our six month project. Then we’d spend 15 months trying to implement them, then we’d all point at everybody else to explain why it hadn’t worked. Then, if there was any money left, we’d finally remember to ship some software to the customer.

Scrum has improved on that, by distributing both the shipping and the blaming over the whole life span of the project. This was sort of a necessary condition for software development to survive the dot-com crash, when “I have an idea” no longer instantly led to office space in SF, a foosball table, and thirty Herman Miller chairs. You have to deliver something of value early because you haven’t got enough money any more to put it off.

So when these respectable Internet parishioners say that Scrum is bad, this is how bad it has to be to reach that mark. Both of the authors I cite have been drinking from this trough way longer than I have, so have much more experience of the before times.

In this article I wanted to look at some of the things I, and software engineering peers around me, have done to make Scrum this bad. Because, as Ron says, Scrum is systemically bad, and we are part of the system.

Focus on Velocity

One thing it’s possible to measure is how much stuff gets shovelled in per iteration. This can be used to guide a couple of process-improvement ideas. First up is whether our ability to foresee problems arising in implementation is improving: this is “do estimates match actuals” but acknowledging that they never do, and the reason they never do is that we’re too optimistic all the time.

Second is whether the amount of stuff you’re shovelling is sustainable. This has to come second, because you have to have a stable idea of how much stuff there is in “some stuff” before you can determine whether it’s the same stuff you shovelled last time. If you occasionally shovel tons of stuff, then everybody goes off sick and you shovel no stuff, that’s not sustainable pace. If you occasionally shovel tons of stuff, then have to fix a load of production bugs and outages, and do a load of refactoring before you can shovel any more stuff, that’s not a sustainable pace, even if the amount of work in the shovelling phase and the fixing phase is equivalent.

This all makes working out what you can do, and what you can tell other people about what you can do, much easier. If you’re good at working out where the problems lie, and you’re good at knowing how many problems you can encounter and resolve per time, then it’s easier to make plans. Yes, we value responding to change over following a plan, but we also value planning our response to change, and we value communicating the impact of that response.

Even all of that relies on believing that a lot of things that could change, won’t change. Prediction and forecasting aren’t so chaotic that nobody should ever attempt them, but they certainly are sensitive enough to “all things being equal” that it’s important to bear in mind what all the things are and whether they have indeed remained equal.

And so it’s a terrible mistake to assume that this sprint’s velocity should be the same as, or (worse) greater than, the last one. You’re taking a descriptive statistic that should be handled with lots of caveats and using it as a goal. You end up doing the wrong thing.
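
If you want to use velocity at all, use it the way you’d use any descriptive statistic. Here’s a minimal sketch in Swift, with invented numbers, of what that looks like: summarise the history, then ask whether a proposed commitment falls inside it, rather than writing the commitment down first and working backwards.

```swift
// A minimal sketch with invented numbers: velocity as a descriptive
// statistic. The history tells you what range to expect; it is not a target.
let pastVelocities: [Double] = [95, 60, 102, 40, 98] // bushels per sprint

let mean = pastVelocities.reduce(0, +) / Double(pastVelocities.count)
let variance = pastVelocities
    .map { ($0 - mean) * ($0 - mean) }
    .reduce(0, +) / Double(pastVelocities.count)
let spread = variance.squareRoot()

let proposedCommitment = 110.0
if proposedCommitment > mean + spread {
    print("A commitment of \(proposedCommitment) bushels is outside what")
    print("history supports (mean \(mean), typical spread ±\(spread)).")
    print("Expect corners to be cut to hit it.")
}
```

Note that the spread is as informative as the mean: a team that shovels 40 bushels one sprint and 102 the next doesn’t have a velocity of 79, it has a forecasting problem.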

Let’s say you decide you want 100 bushels of software over the next two weeks, because you shovelled 95 bushels of software in the last sprint. The easiest way to achieve that is to take things that you think add up to 100 bushels of software, do less work so that they actually contain only, say, 78 bushels, and try to get those 78 bushels done in the time.

Which bits do you cut off? The buttons that the customer presses? No, they’ll notice that. The actions that happen when the customer presses the buttons? They’ll probably notice that, too. How about all the refinement and improvement that will make it possible to add another 100 bushels next time round? Nobody needs that to shovel this 100 bushels in!

But now, next time, shovelling software is harder, and you need to get 110 bushels in to “build on the momentum”. So more corners get cut, and more evenings get worked. And now it’s even harder to get the 120 bushels in that are needed the next time. Actual rate of software is going down, claimed rate of software is going up: soon everything will explode.

Separating “technical” and “non-technical”

Sometimes also “engineering” and “the business”. Particularly in a software company, this is weird, because “the business” is software engineering, but it’s a problem in other lines of business enabled by software too.

Often, domain experts in companies have quite a lot of technical knowledge and experience. In one fintech where I used to work, the “non-technical” people had a wealth (pun intended) of technical know-how when it came to financial planning and advice. That knowledge is as important to the purpose of making financial technology software as is software knowledge. Well, at least as important.

So why do so many teams delineate “engineering” and “the business”, or “techies” and “non-technical” people? In other words, why do software teams decide that only the software typists get a say in what software gets made using what practices? Why do people divide the world into pigs and chickens (though, in fairness to the Scrum folks, they abandoned that metaphor)?

I think the answer may be defensiveness. We’ve taken our ability to shovel 50 bushels of software and our commitment (it used to be an estimate, but now it’s a commitment) to shovel 120 bushels, and it’s evident that we can’t realistically do that. Why not? Oh, it must be those pesky product owners, who keep bringing demands from the VP of marketing when all they ever say is “build more”, “shovel more software”, while not actually doing any of it themselves. If the customer rep didn’t keep repping the customer, we’d have time to actually do the software properly, and that would fix everything.

Note that while the Scrum guide no longer mentions chickens and pigs, it “still makes a distinction between members of the Scrum team and those individuals who are part of the process, but not responsible for delivering products”. This is an important distinction in places, but irrelevant and harmful in others, and it’s at its most harmful when it’s drawn too narrowly, when people who are part of the process are excluded from the delivering-products cabal. You still need to hear from them and use their expertise, even if you pretend that you don’t.

The related problem I have seen, and been part of, is the software-expertise people not even gaining passing knowledge of the problem domain. I’ve even seen it while working on a developer tools team, where the engineers building the tools didn’t have cause to use, or even particularly understand, the tool during our day-to-day work. All of those good technical practices – automated acceptance tests, ubiquitous language, domain-driven design – only mean shit if they’re helping to make the software compatible with the things people want the software for.
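
To illustrate (this is a hypothetical sketch, not code from that fintech; the PensionForecast type and its numbers are invented), an acceptance test only carries the domain experts’ knowledge if it’s phrased in their language, so that a financial planner could read the test and say whether it asks the right question:

```swift
import XCTest

// Hypothetical domain type, named in the language the planners use.
struct PensionForecast {
    let currentAge: Int
    let retirementAge: Int
    let annualContribution: Double
    let assumedGrowthRate: Double

    // Deliberately simplified future-value calculation: each year's
    // contribution is added, then the whole pot grows by the assumed rate.
    var projectedPot: Double {
        (currentAge..<retirementAge).reduce(0.0) { pot, _ in
            (pot + annualContribution) * (1 + assumedGrowthRate)
        }
    }
}

final class RetirementPlanningTests: XCTestCase {
    // Named after the planning question, not after the class under test.
    func testThirtyYearsOfContributionsOutgrowTheSumContributed() {
        let forecast = PensionForecast(currentAge: 35,
                                       retirementAge: 65,
                                       annualContribution: 5_000,
                                       assumedGrowthRate: 0.04)
        // 30 × £5,000 = £150,000 paid in; growth should beat that.
        XCTAssertGreaterThan(forecast.projectedPot, 150_000)
    }
}
```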

And that means a little bit of give and take on the old job boundaries. Let the rest of the company in on some info about how the software development is going, and learn a little about what the other folks you work with do. Be ready to adopt a more nuanced position on a new feature request than “Jen from sales with another crazy demand”.

Undriven process optimisation

As I said up top, the biggest problem Scrum often encounters in the wild is that it’s a process improvement framework run by people who don’t have any expertise in process improvement, but have read a book on Scrum (if that). This is where the agile consultants and retrospective facilitators usually come in: they do know something about process improvement, and even if they know nothing about your process, it’s probably failing in similar enough ways to similar processes on similar teams that they can quickly identify the patterns and dysfunctions that apply in your context.

Without that guidance, or without that expertise on the team, retrospectives are the regular talking shop in which everybody knows that something is wrong, nobody knows what, and anybody who has a pet thing to try can propose that thing because there are no cogent arguments against giving it a go (nor are there any for it, except that we’ve got to try something!).

Thus we get resume-driven development: I bet that last sprint was bad because we aren’t functionally reactive enough, so we should spend the next sprint rewriting to this library I read about on Medium. Or arbitrary process changes: this sprint we should add a column to the board, because we removed a column last time and look what happened. Or process gaming: I know we didn’t actually finish anything this month, but that’s because we didn’t deploy, so if we call something “done” when it’s been typed into the IDE, we’ll achieve more next month. Or more pigs and chickens: the problem we had was that sales kept coming up with things customers needed, so we should stop the sales people talking to the customers, to us, or to both.

Work out what it is you’re trying to do (not shovelling bushels of software, but the thing you’re trying to do for which software is potentially a solution) and measure that. Do you need to do more of that? Or less of it? Or the same amount for different people? Or for less money? What, about the way you’re shovelling software, could you change that would make that happen?
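
As an invented illustration of what that might look like (the metric names and numbers here are all made up), put the outcome next to the output and judge your process experiments by movement in the former:

```swift
// Invented data: the output everyone reports, next to an outcome
// somebody actually feels. Judge process changes by the second number.
struct Release {
    let version: String
    let bushelsShovelled: Int        // output
    let supportTicketsInWeekOne: Int // outcome (lower is better)
}

let history = [
    Release(version: "1.4", bushelsShovelled: 80, supportTicketsInWeekOne: 12),
    Release(version: "1.5", bushelsShovelled: 95, supportTicketsInWeekOne: 31),
    Release(version: "1.6", bushelsShovelled: 70, supportTicketsInWeekOne: 9),
]

// Velocity went up in 1.5; the thing the customers cared about got worse.
for release in history {
    print("\(release.version): \(release.bushelsShovelled) bushels, \(release.supportTicketsInWeekOne) tickets")
}
```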

We’re back at connecting the technical and non-technical parts of the work, of course. To understand how the software work affects the non-software goals we need to understand both, and their interactions. Always have, always will.


Episode 36: the Isolation Episode

From my anosmic isolation chamber as I’ve got the ‘rona! I talk about how many of the things software engineers think they should or shouldn’t be doing probably don’t have any impact on the success or failure of the software they’re making.

I don’t explicitly mention anything that should be in show notes, but a couple of resources relevant to setting goals for businesses and measuring progress would be beneficial, non? Here are two: Measure What Matters by John Doerr is the book on Objectives and Key Results (OKRs), introduced by Andy Grove at Intel and adopted by Alphabet/Google among others. The 4 Disciplines of Execution from FranklinCovey are how you ensure that the thing you think you ought to be changing, and the thing you can and are changing, are connected.


Sleep on it

In my experience, the best way to get a high-quality software product is to take your time, not crunch to some deadline. On one project I led, after a couple of months we realised that the feature goals (ALL of them) and the performance goals (also ALL of them, in this case supporting ALL of the iPads back to the first-gen, single-core 2010 one) were incompatible.

We talked to the customer, worked out achievable goals with a new timeline, and set to work with a new idea based on what we had found was possible. We went from running Instruments (the Apple performance analysis tool) infrequently to daily, then on every merge, then on every proposed change. If a change led to a regression, we’d find another change. Customers were disappointed that the thing came out later than they had originally expected, but it was still well-received and well-reviewed.
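
The modern equivalent of “run the performance analysis on every proposed change” can be automated. Here’s a sketch using XCTest’s measure API around an invented workload (the real project’s harness looked nothing like this): record a baseline once, and any change that regresses past it fails the test, and therefore the merge.

```swift
import XCTest

final class PerformanceRegressionTests: XCTestCase {
    // measure {} runs its block several times and compares the average
    // against a baseline recorded in Xcode. Once the baseline is set,
    // a change that regresses past it fails the test on every merge,
    // instead of waiting for somebody to remember to launch Instruments.
    func testSortingStaysWithinBaseline() {
        let input = (0..<100_000).map { _ in Int.random(in: 0..<1_000_000) }
        measure {
            _ = input.sorted()
        }
    }
}
```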

At the last release candidate, we had two known problems. Very infrequently, sound would stop playing back, which with Apple’s DTS we isolated to an OS bug. After around 12 hours of uptime (in an iPad app, remember!) there was a chance of a crash, which with DTS we found to be due to calling an animation API in a way that was consistent with the documentation but inconsistent with the API’s actual expectations. We managed to fix that one before going live on the App Store.

On the other hand, projects I’ve worked on that had crunch times, weekend/evening working, and increased pressure from management to deliver ended up in a worse state. They were still late – indeed they tended to be even later – and lower quality, as developers under pressure to fix their bugs introduced other bugs by cutting corners. And everybody got upset and angry with everybody else, each desperate to find a reason why the late project wasn’t their fault alone.

In one death march project, my first software project which was planned as a three month waterfall and took two years to deliver, we spent a lot of time shovelling bugs in at a faster rate than we were fixing them. On another, an angry product manager demanded that I fix bugs live in a handover meeting to another developer, without access to any test devices, then gave the resulting build to a customer…who of course discovered another, worse bug that had been introduced.

If you want good software, you have to sleep on it, and let everybody else sleep on it.


Episode 35: a bored man with a microphone

I explore the theme of community and the difficulty I have with feeling like a member of a community of technologists. This was motivated by Joy, or Not by Ron Jeffries.

The Descent of Man by Grayson Perry

Society of Research Software Engineering

Thinking Clearly about Corporations by John Sullivan

[objc retain];, the video stream for Objective-C programmers on GNUstep


My proposal for scaling open source: don’t

I’ve had a number of conversations about what “we” in the “free software community” “need” to do to combat the growth in proprietary, user-hostile and customer-hostile business models like cloud user-generated content hosts, social media platforms, hosted payment platforms, videoconferencing services etc. Questions can often be summarised as “what can we do to get everyone off of Facebook groups”, “how do we get businesses to adopt Jitsi Meet instead of Teams” or “how do we convince everyone that Mattermost is better for community chat than Slack”.

My answer is “we don’t”, which is very different from “we do nothing about those things”. Scaled software platforms introduce all sorts of problems that are only caused by trying to operate the software at scale, and the reason the big Silicon Valley companies are that big is that they have to spend a load of resources just to tread water because they’ve made everything so complex for themselves.

This scale problem has two related effects: firstly the companies are hyper-concerned about “growth” because when you’ve got a billion users, your shareholders want to know where the next hundred million are coming from, not the next twenty. Secondly the companies are overly-focused on lowest common denominator solutions, because Jennifer Miggins from South Shields is a rounding error and anything that’s good enough for Scott Zablowski from Los Angeles will have to be good enough for her too, and the millions of people on the flight path between them.

Growth hacking and lowest common denominator experiences are their problems, so we should avoid making them our problems, too. We already have various tools for enabling growth: the freedom to use the software for any purpose being one of the most powerful. We can go the other way and provide deeply-specific experiences that solve a small collection of problems incredibly well for a small number of people. Then those people become super-committed fans because no other thing works as well for them as our thing, and they tell their small number of friends, who can not only use this great thing but have the freedom to study how the program works, and change it so it does their computing as they wish—or to get someone to change it for them. Thus the snowball turns into an avalanche.

Each of these massive corporations with their non-free platforms that we’re trying to displace started as a small corporation solving a small problem for a small number of people. Facebook was internal to one university. Apple sold 500 computers to a single reseller. Google was a research project for one supervisor. This is a view of the world that’s been heavily skewed by the seemingly ready access to millions of dollars in venture capital for disruptive platforms, but many endeavours don’t have access to that capital and many that do don’t succeed. It is ludicrous to try and compete on the same terms without the same resources, so throw Marc Andreessen’s rulebook away and write a different one.

We get freedom to a billion people a handful at a time. That reddit-killing distributed self-hosted tool you’re building probably won’t kill reddit, sorry. Design for that one farmer’s cooperative in Skåne, and other farmers and other cooperatives will notice. Design for that one town government in Nordrhein-Westfalen, and other towns and other governments will notice. Design for that one biochemistry research group in Brasilia, and other biochemists and other researchers will notice. Make something personal for a dozen people, because that’s the one thing those massive vendors will never do and never even understand that they could do.


Episode 34: actually it’s about ethics in software engineering

An episode about the philosophy of software engineering.


There is no “us” in team

I’ve talked before about the non-team team dynamic that is “one person per task”, where management and engineers collude to push the organisation beyond a sustainable pace by making sure that, at all times, each individual is kept busy and collaboration is minimised.

I talked about the deleterious effect on collaboration, particularly that code review becomes a burden resolved with a quick “LGTM”. People quickly develop specialisations and fiefdoms: oh there’s a CUDA story, better give it to Yevgeny as he worked on the last CUDA story.

The organisation quickly adapts to this balkanisation and optimises for it. Is there a CUDA story in the next sprint? We need something for Yevgeny to do. This is Conway’s Corollary: the most efficient way to develop software is for its structure to match the org chart.

Basically all forms of collaboration become a slog when there’s no “us” in team. Unfortunately, the contradiction at the heart of this 19th century approach to division of labour is that, when applied to knowledge work, the value to each participant of being in meetings is minimised, while the necessity for each participant to be in a meeting is maximised.

The value is minimised because each person has their personal task within their personal fiefdom to work on. Attending a meeting takes away from the individual productivity that the process is optimising for. Additionally, it increases the likelihood that the meeting content will be mostly irrelevant: why should I want to discuss backend work when Sophie takes all the backend tasks?

The meetings are necessary, though, because nobody owns the whole widget. No-one can see the impact of any workflow change, or dependency adoption, or clean up task, because nobody understands more than a 1/N part of the whole system. Every little thing needs to be run by Sophie and Yevgeny and all the others because no-one is in a position to make a decision without their input.

This might sound radically democratic, and not the sort of thing you’d expect from a business: nobody can make a decision without consulting all the workers! Power to the people! In fact it’s just entirely progress-destroying: nobody can make a decision at all until they’ve got every single person on board, and that’s so much work that a lot of decisions will be defaulted. Nothing changes.

And there’s no way within this paradigm to avoid that. Have fewer meetings, and each individual is happier because they get to maximise progress time spent on their individual tasks. But the work will eventually grind to a halt, as the architecture reflects N different opinions, and the N(N−1)/2 different pairwise interfaces (each of which has fallen into the unowned gaps between the individual contributors: 15 of them for a six-person team, 45 for ten) become harder to work with.

Have more meetings, and people will grumble that there are too many meetings. And that Piotr is trying to land-grab from other fiefdoms by pushing for decisions that cross into Sophie’s domain.

The answer is to reconstitute the team – preferably along self-organising principles – into a cybernetic organism that makes use of its constituent individuals as they can best be applied, but in pursuit of the team’s goals, not N individual goals. This means radical democracy for some issues, (agreed) tyranny for others, and collective ignorance of yet others.

It means in some cases giving anyone the autonomy to make some choices, but giving someone with more expertise the autonomy to override those choices. In some cases all decisions get made locally; in others, they must be run past an agreed arbiter. And in some cases it means having one task per team, or even no tasks per team, if the team needs to do something more important before it can take on another task.


Episode 33: Ask Me Anything

@sharplet asked me a few questions, and you can too!

The theme music is Blue-Eyed Stranger.

Evidence-Based Software Engineering Using R.

Median household income in the UK in January 2021 was £29,900 (I said “about £28,000”). Median software engineer salary (i.e. for the role “software engineer” with no seniority modifiers) is £37,733.


It was requested on twitter that I start answering community questions on the podcast. I’ve got a few to get the ball rolling, but what would you like to ask? Comment here, or reach me wherever you know I hang out.


An Imagined History of Object-Oriented Programming

Having looked at hopefully modern views on Object-Oriented analysis and design, it’s time to look at what happened to Object-Oriented Programming. This is an opinionated, ideologically-motivated history that in no way reflects reality: a real history of OOP would require time and skills that I lack, and would undoubtedly be almost as inaccurate. But in telling this version we get to learn more about the subject, and almost certainly more about the author too.

They always say that history is written by the victors, and it’s hard to argue that OOP was anything other than victorious. When people explain how they prefer to write functional programs because it helps them to “reason about” code, all the reasoning that was done about the code on which they depend was done in terms of objects. The ramda or lodash or Elm-using functional programmer writes functions in JavaScript on an engine written in C++. Swift Combine uses a functional pipeline to glue UIKit objects to Core Data objects, all in an Objective-C IDE with – again – a C++ compiler.

Maybe there’s something in that. Maybe the functional thing works well if you’re transforming data from one system to another, and our current phase of VC-backed disruption needs that. Perhaps we’re at the expansion phase, applying the existing systems to broader domains, and a later consolidation or contraction phase will demand yet another paradigm.

Anyway, Object-Oriented Programming famously (and incorrectly, remember) grew out of the first phase of functional programming: the one that arose when it wasn’t even clear whether computers existed, or if they did whether they could be made fast enough or complex enough to evaluate a function. Smalltalk may have borrowed a few ideas from Simula, but it spoke with a distinct Lisp.

We’ll fast forward through that interesting bit of history when all the research happened, to that boring bit where all the development happened. The Xerox team famously diluted the purity of their vision in the hot-air balloon issue of Byte magazine, and a whole complex of consultants, trainers and methodologists jumped in to turn a system learnable by children into one that couldn’t be mastered by professional programmers.

Actually, that’s not fair: the system already couldn’t be mastered by professional programmers, a breed famous for assuming that they are correct and that any evidence to the contrary is flawed. It was designed to be learnable by children, not by those who think they already know better.

The result was the ramda/lodash/Elm/Clojure/F# of OOP: tools that let you tell your local user group that you’ve adopted this whole Objects thing without, y’know, having to do that. Languages called Object-*, Object*, Objective-*, O* added keywords like “class” to existing programming languages so you could carry on writing software as you already had been, but maybe change the word you used to declare modules.

Eventually, the jig was up, and people cottoned on to the observation that Object-Intercal is just Intercal no matter how you spell come.from(). So the next step was to change the naming scheme to make it a little more opaque. C++ is just C with Classes. So is Java, so is C#. Visual BASIC.net is little better.

Meanwhile, some people who had been using Smalltalk – and getting on well with rapidly developing prototypes that they could edit while running and grow into a deployable system – had an idea. Why not tell everyone else how great it is to develop prototypes fast, edit them while running, and grow them into the deployable system? The full story of that will have to wait for the Imagined History of Agile, but the TL;DR is that whatever they said, everybody heard “carry on doing what we’re already doing, but plus Jira”.

Well, that’s what they heard about the practices. What they heard about the principles was “principles? We don’t need no stinking principles, that sounds like Big Thinking Up Front, urgh”, so they decided to stop thinking about anything as long as the next two weeks of work would be paid for. Yes, iterative, incremental programming introduced the idea of a project the same length as the gap between pay checks, thus paving the way for fire and rehire.

And thus we arrive at the ideological void of today’s computering. A phase in which what you don’t do is more important than what you do: #NoEstimates, #NoSQL, #NoProject, #NoManagers…#NoAdvances.

Something will fill that void. It won’t be soon. Functional programming is a loose collection of academic ideas with negation principles – #NoSideEffects and #NoMutableState – but doesn’t encourage anything new. As I said earlier, it may be that we don’t need anything new at the moment: there’s enough money in applying the old things to new businesses and funnelling more money to the cloud providers.

But presumably that will end soon. The promised and perpetually interrupted parallel computing paradigm we were guaranteed upon the death of Moore’s Law in the (1990s, 2000s, 2010s) will eventually meet the observation that every object larger than a grain of rice has a decent ARM CPU in, leading to a revolution in distributed consensus computing. Watch out for the blockchain folks saying that means they were right all along: in a very specific way, they were.

Or maybe the impressive capability but limited applicability of AI will meet the limited capability but impressive applicability of intentional programming in some hybrid paradigm. Or maybe if we wait long enough, quantum computing will bring both its new ideas and some reason to use them.

But that’s the imagined future of programming, and we’re not in that article yet.
