In 1986, Fred Brooks posited that there was “no silver bullet” in software engineering—no tool or process that would yield an order-of-magnitude improvement in productivity. He based this assertion on the division of complexity into that which is essential to the problem being solved, and that which is an accident of the way in which we solve the problem.
In fact, he considered two types of artificial intelligence: AI-1, “the use of computers to solve problems that previously could only be solved by applying human intelligence” (here Brooks quotes David Parnas), which to Brooks means things like speech and image recognition; and AI-2, “The use of a specific set of programming techniques [known] as heuristic or rule-based programming” (Parnas again), which to Brooks means expert systems.
He considers that AI-1 isn’t a useful definition and doesn’t offer a way to tackle complexity, because results typically don’t transfer between domains. AI-2 contains some of the features we would recognize from today’s programming assistants—finding patterns in large databases of how software has been made, and drawing inferences about how software should be made. The specific implementation technology is very different, but while Brooks sees that such a system can empower an inexperienced programmer with the experience of multiple expert programmers—“no small contribution”—it doesn’t itself tackle the complexity in the programming problem.
He also writes about “automatic programming” systems, which he defines as “the generation of a program for solving a problem from a statement of the problem specifications” and which sounds very much like the vibe coding application of language model-based coding tools. He (writing in 1986, remember) couldn’t see how a generalization of automatic programming could occur, but now we can! So how do they fare?
Accidental complexity
Coding assistants generate the same code that programmers generate, and from that perspective they don’t reduce accidental complexity in the solution. In fact, a cynical take would be to say that they increase accidental complexity, by adding prompt/context engineering to the collection of challenges in specifying a program. That perspective assumes that the prompt is part of the program source, but the generated output is still inspectable and modifiable, so it’s not clearly a valid argument. However, these tools do supply the “no small contribution” of letting any one of us lean on the expertise of all of us.
In general, a programming assistant won’t address accidental complexity until it stops generating source code and generates an output binary instead. Then someone can fairly compare the complexity of generating a solution by prompting with that of generating a solution by coding; but they also have to ask whether their validation tools are up to the task of evaluating a program using only the executable.
Or the tools can skip the program altogether, and just get the model to do whatever tasks people were previously specifying programs for. Then the accidental complexity has nothing to do with programming at all, and everything to do with language models.
Essential complexity
For any problem we might want to write software for, unless the problem statement itself involves a language model, the language model is entirely unrelated to the problem’s essential complexity. For example, “predict the weather for the next week” hides a lot of assumptions and questions, none of which include language models.
That said, these tools do make it very easy and fast to uncover essential complexity, and typically in the cursed-monkey-paw “that’s not what I meant” way that’s been the bane of software engineering since its inception. This is a good thing.
You type in your prompt, the machine tells you how absolutely right you are, generates some code, you run it—and it does entirely the wrong thing. You realize that you needed to explain that things work in this way, not that way, write some instructions, generate other code…and it does mostly the wrong thing. Progress!
Faster progress than the old way of specifying all the requirements, designing to the spec, implementing to the design, then discovering that the requirements were ambiguous and going back to the drawing board. Faster, probably, even than getting the first idea of the requirements from the customer, building a prototype, and coming back in two weeks to find out what they think. Whether it’s writing one to throw away, or iteratively collaborating on a design[*], that at least can be much faster now.
[*] Though note that the Spec-Driven Development school is following the path that Brooks did predict for automatic programming (via Parnas again): “a euphemism for programming with a higher-level language than was presently available to the programmer”.

Greetings, Graham! It was fascinating to read your post, along with reading No Silver Bullet again after 30 years or more. Almost 60 years ago at age 12 I wrote my first program (a bad tic-tac-toe program) on a school desk-sized Wang programmable calculator. By age 19 I was working for a large software development company writing increasingly complex minicomputer (and eventually microcontroller) embedded systems in assembly language, which is pretty much what I did for the next decade. Followed by writing bits of runtime systems and code generators for the first two “production” Ada compilers (in Ada). Then graphics programming using C/Unix and workstations, along with a few C++ systems. By the millennium I’d made a near complete transition into Python for most work, although I still write C, C++, and other obscure languages as needed.
Brooks was absolutely right with this statement: “at some point the elaboration of a high-level language becomes a burden that increases, not reduces, the intellectual task of the user who rarely uses the esoteric constructs.” Which is what made Ada such an unpleasant (and long-winded) language to write. Object-oriented programming has been a mixed bag, and I find myself returning to a more functional style. Another one that gave me a chuckle was “[t]he techniques used for speech recognition seem to have little in common with those used for image recognition, and both are different from those used in expert systems.” Multi-layer CNNs existed at the time and were in use for image recognition. I guess the brilliant idea to turn a speech problem into an image recognition problem by using spectrograms had yet to happen. But I don’t think anyone could have imagined that the entire notion of “expert systems” would eventually be completely subsumed by CNN technology.
I’ve been finding uses for model-assisted programming techniques. It does cut down on the amount of typing needed, and on most of the online searching for the parameters of some obscure API. Asking for more than 100 lines of non-boilerplate code is still a problem; close inspection often shows missing and/or irrelevant code. What I’m playing with now is writing code comments as prompts, and using an agent to turn them into a set of “fill in the middle” segments for a model.
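In rough terms, that comment-to-FIM step might look something like the sketch below. The marker string, the StarCoder-style sentinel tokens, and send_to_model are all placeholders of my own here; real FIM token names vary by model, and a real agent would do more than a simple split around the comment.

MARKER = "# TODO(fim):"  # hypothetical marker the agent scans for

def fim_prompts(source: str):
    """Yield one fill-in-the-middle prompt per marked comment in the source."""
    lines = source.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if line.lstrip().startswith(MARKER):
            prefix = "".join(lines[: i + 1])  # everything up to and including the comment
            suffix = "".join(lines[i + 1 :])  # the rest of the file
            # StarCoder-style sentinel tokens; other models use different names
            yield f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Usage (send_to_model stands in for whatever completion API is in use):
# for prompt in fim_prompts(open("module.py").read()):
#     completion = send_to_model(prompt)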
Great article! Brooks was most insightful. I use coding assistants rather sparingly, mostly for the mechanical aspects of coding, i.e., the syntax. I’ve yet to see one create an “elegant” solution. That would seem to require that Artificial “Intelligence” include more of whatever makes us humans actually intelligent. Penrose uses the term “artificial cleverness.” I would agree that the models are clever in imitation but, again, not in the most organically human way.