I was wrong, 80 characters is fine

Do you remember when I said, in my 2019 post Why 80?, that “80 characters per line is a particular convention out of many that we know literally nothing about the benefit or cost of, even today”?

Today I saw a “rule of 66 non-whitespace characters per line” mentioned in connection with TeX. I couldn’t find that reference in the TeXbook, though it also appears in Bringhurst’s “The Elements of Typographic Style”, so let’s go with it for the moment.

If there should be 66 non-whitespace characters per line, and the line is displaying a run of words, then each word of average length w contributes w letters plus one space. To hold 66 non-whitespace characters on average, a line therefore needs to be 66 × (w + 1) / w characters long. In English, the average word length is about five letters, which gives 66 × 6 / 5 = 79.2 characters per line.
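
To see that arithmetic laid out, here’s a minimal sketch in Java (the class name and the five-letters-per-word average are my illustrative assumptions, not anything Bringhurst specifies):

    // Back-of-the-envelope check of the line-length arithmetic above.
    // Assumption: an average English word is five letters followed by one space.
    public class LineLength {
        public static void main(String[] args) {
            double averageWordLength = 5.0;      // letters per word, roughly
            double nonWhitespacePerLine = 66.0;  // the 66-character rule of thumb
            // Each word contributes its letters plus a trailing space, so scale
            // the 66 non-whitespace characters by (w + 1) / w.
            double charactersPerLine =
                    nonWhitespacePerLine * (averageWordLength + 1) / averageWordLength;
            System.out.println(charactersPerLine); // prints 79.2
        }
    }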

If you’re reading English, an 80-column terminal makes sense, if Bringhurst’s justification (pardon the pun) is valid. Though I still don’t know why people suggest 72 characters for commit message lines.

Posted in design | 5 Comments

Re-evaluating second brains

Because my undergraduate Physics teaching drilled into me the importance of keeping a lab book, I’ve always kept notebooks throughout my professional career too. If I want to know (and I’m not sure why I would) what process I created to upgrade the Solaris 7 kernel on a Sun Enterprise 450, I still have the notes I took at the time and could reconstruct that process. Same when I moved into testing, and software security, and programming; same now, as a documentation author.

Being in computering, I’ve repeatedly looked for ways to transfer that note-taking to a digital context. I’ve got about 2,300 links saved, some of which have followed me from my undergraduate-era Netscape bookmarks file. How many of them still point to the resource I intended to record? No idea. How many of them can I find when I need to? Not as many as I might like.

So I switched to clipping web pages into the same note-taking application I use to write text notes. That’s better, as I now have an archive of the content I was trying to record, but it brings me up against the limitations of the note-taking application. The particular application I used to use lost some of the features I relied on when its creators rewrote the UI in a cross-platform framework, and then they fired all of their developers anyway, so I lost faith. I exported those notes to a different tool, which has a different UI with different issues.

Those “second brain” notes tools are somewhat good for recall (though not when I’ve taken notes in my readable-to-me-but-not-to-the-computer handwriting, and they’re not much good with voice notes either), but the whole “application” thing means that I have to want to enter the note-taking context to use them. So I don’t use them: the ultimate failure of a notes tool. I have notes I’ve jotted on journal articles in two different software systems that don’t integrate with the notes application, and also on paper.

Paper. That thing that I started using decades ago, still use now, and that I know I can find when I need to. I’m leaning into paper for a second brain. There’s a lot of suggestion that physically writing things aids recall, meaning that I can learn from the notes I took without having to actually go back and rediscover them. And paper notes take away the anxiety that comes from not having curated my second brain software just right: not because I can get it right with a paper system, but because I don’t expect to. I know that everything’s chronological, I know that I usually wrote tables of contents, and I know that I mostly don’t have to go back to those old pages anyway.

Posted in memory, writing | 1 Comment

Type safety, undefined behaviour, and us

There appears to be a shift towards programming languages that improve safety by providing an expressive type system, automatic memory management, and no gaps in the specification that lead to “undefined behaviour”. If your program is consistent with the logic of the programming language specification, then it compiles, and it executes with the behaviour you would expect from reading the source code and the language documentation. If any of that isn’t true, then your program doesn’t compile: there are no gaps for not-quite-consistent programs to fall through, only to be detected later when the program is running.

When I express this set of language design constraints, you may think of Swift, of Rust, of F#. You may try to point to Java, and say that it is very precisely defined and specified (it may be, but it still has undefined behaviour: consider when fail-fast iterators throw ConcurrentModificationException, for example). But when did this trend start? When did people start to think about how these programming language properties might affect their programs?
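
To illustrate that parenthetical (my own sketch, not anything from the sources quoted below), here is the canonical fail-fast iterator situation in Java; the collections documentation is explicit that the exception is thrown on a best-effort basis and cannot be relied upon:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of Java's "best-effort" fail-fast iterators: modifying a list
    // while iterating over it *may* raise ConcurrentModificationException,
    // but that detection is explicitly not guaranteed.
    public class FailFast {
        public static void main(String[] args) {
            List<String> words = new ArrayList<>(List.of("one", "two", "three"));
            for (String word : words) {
                if (word.equals("one")) {
                    words.remove(word); // structural modification mid-iteration
                }
            }
            // With the standard ArrayList, removing "one" happens to trigger
            // ConcurrentModificationException on the next iteration; remove the
            // second-to-last element instead and the loop just ends early,
            // silently.
        }
    }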

Here’s a 1974 description of the SIMULA 67 programming language. This is SIMULA as a tool for extensible program products, by Jacob Palme, published in ACM SIGPLAN Notices. The quotes are presented out of order, for expressivity.

Many programming language definitions contain the word “undefined” in many places. This is bad for security, since no one knows for sure what will happen if a programmer by mistake makes something “undefined” . In the definition of SIMULA, there is almost no occurence [sic] of the word “undefined” . This means that you always can know what is going to happen if you just have a program and the language definition . All errors can be discovered as soon as a rule of the language is broken. In other languages, errors are some times not discovered until strange consequences appear, and in these languages you then have to look at mysterious dumps to try to find the error. This can never happen in a fully working SIMULA system. All errors against the language are discovered as soon as they logically can be discovered, and the error message is always phrased in SIMULA terms, telling you what SIMULA error you have done. (Never any mysterious message like “Illegal memory reference”). The type checking also means that those programming errors which are not language errors in other languages will very often be language errors in SIMULA, so that they can be discovered by the compiler. Compared to other languages I know of, no language has such a carefully prepared, fully covering security system as SIMULA, and this is very important for the production of commercial program products.

Important for security is also that there is no kind of explicit statement for freeing the memory of a record no longer needed. Such statements are very dangerous, since the program can by mistake later on try to process a record whose memory has been freed. In SIMULA, records no longer needed are removed automatically by a so called garbage collector.

Important for security is also that the data fields of new records are initialized at allocation time. Otherwise, some garbage left from previous data could give very difficult errors.

When Palme says “security”, he doesn’t mean the kind of thing we might call “software security” or “information security” in 2023: the protection of assets despite possibly-intentional misuse of the program. He really means “safety”: the idea that the programmer can know the behaviour of the program, even when the program has been deployed to the customer’s computer and is out of the programmer’s direct control.

Now, what do we learn from this? Firstly, that the problems evident in C were already known, and solved, when C was being developed. C’s first wave ran roughly from 1972 (its inception) to 1978 (the first edition of K&R); this 1974 review identifies important qualities evinced by a 1967 programming language, qualities lacking in C.

Also, that maybe they aren’t such showstoppers, given how much software is successfully in use today that is written in C, or a related language, or on a runtime that’s written in C. Probably that software engineering is a fashion discipline, and that safe programming languages are fashionable now in a way that they weren’t in the 1960s, 1970s, 1980s, 1990s, 2000s, and 2010s: we had access to them, and we didn’t care to use them.

But also we see support for André Gide’s position: Toutes choses sont dites déjà; mais comme personne n’écoute, il faut toujours recommencer. That is: everything has already been said, but because nobody listens, it’s necessary to say it again.

That nobody listens isn’t a criticism of everyday software engineers. There were a tiny number of people reading ACM SIGPLAN Notices in 1974, and most people who’ve entered computing in the intervening 49 years don’t know any of them. There’s no way they could reasonably be expected to have encountered this particular review of SIMULA, let alone to have taken the time to reflect on its position with respect to everything else they know about programming languages so that it can influence their choices.

Software engineering is big enough, and up until recently growing fast enough, that to a first approximation anything that’s worth reading in the field hasn’t been read by any practitioner. If an idea is important, it bears repeating. The people who didn’t hear it the first time around aren’t ignorant; they’re busy.

Posted in software-engineering | Tagged | 1 Comment

On legitimacy and software engineering

More than 400,000 software engineers have lost their jobs in the last couple of years; I wouldn’t be surprised if the real figure is significantly more than half a million, as some layoffs won’t have been documented anywhere that the tracker project saw. In the years leading up to Layoffapalooza, software engineers commanded high salaries (though maybe not universally high), often with significant perks. Do these shifts in employability and privilege reflect a change in the legitimacy software engineering enjoys among its peers, clients, and other stakeholders?

Let’s first identify what legitimacy is. A dictionary definition would have it that legitimacy is something like “the ability to be defended”, so as our working definition for software engineering’s legitimacy let’s use the idea that it is the right, or ability, that software engineers have to define their work and their contribution on their own terms. That is, taking as an assumption that somebody somewhere in our society wants someone to write software for them, software engineering is more legitimate if software engineers get to decide what to write, how, and how they’re evaluated, and less legitimate if somebody else (clients, managers, governments, whoever) gets to decide that. This kind of ties legitimacy to autonomy, but it also connects it with status or privilege.

Following Mark Suchman’s Managing legitimacy: strategic and institutional approaches, let’s break this down into three categories. He’s talking about institutions, so I’m going to make an assumption here that “software engineering” is an institution. I suspect that some people (both inside and outside the field) see it as such, and others don’t. But it also might be useful to explicitly call out organisations, interest groups, user communities, or other subcultures within the field and investigate whether they are more or less institutional (so that we can ask about their legitimacy separately).

Cognitive legitimacy

The third of Suchman’s categories, cognitive legitimacy, is the idea that an institution is legitimate if it doesn’t take much effort to see it as such: in other words, that it’s consistent with the worldview and values that people already have. It’s easy to maintain cognitive legitimacy, though perhaps hard to acquire. But it also doesn’t get you much, as it’s really about existing in the background. An institution that didn’t have cognitive legitimacy might look something like the Communist Party of the USA: every time you’re reminded that it’s there, it’s a surprise and you’re not sure what to make of it.

People pushed for the cognitive legitimacy of software engineering basically from the start of the field. The 1967 NATO working group chose the name because it was illegitimate:

The phrase ‘software engineering’ was deliberately chosen as being provocative, in implying the need for software manufacture to be based on the types of theoretical foundations and practical disciplines, that are traditional in the established branches of engineering.

Indeed the preface to the second of the NATO conferences on software engineering reports of the first conference:

The vast majority of these participants found commonality in a widespread belief as to the extent and seriousness of the problems facing the area of human endeavour which has, perhaps somewhat prematurely, been called “software engineering”.

Brian Randell, who edited the two conference reports, recalls that the second conference attempted to “fast-track” legitimacy, an attempt that failed.

Unlike the first conference, at which it was fully accepted that the term software engineering expressed a need rather than a reality, in Rome there was already a slight tendency to talk as if the subject already existed. And it became clear during the conference that the organizers had a hidden agenda, namely that of persuading NATO to fund the setting up of an International Software Engineering Institute. However things did not go according to their plan. The discussion sessions which were meant to provide evidence of strong and extensive support for this proposal were instead marked by considerable scepticism, and led one of the participants, Tom Simpson of IBM, to write a splendid short satire on “Masterpiece Engineering”.

Fast-forward 54 years from that second conference, and the phrase “software engineering” is indeed part of the cognitive background of programming computers. But the institutions that underpin it are not. Like other phrases, including Agile, DevOps, and Open Source, software engineering has been co-opted by the managerial class to mean “the people we hire to do the work we want them to do, the way we want them to do it”. Research institutes like the SEI, or special interest groups like ACM SIGSOFT, don’t have a seat at the software engineering table in the large. Even in academia, while software engineering was meant to base practice on “theoretical foundations and practical disciplines”, it’s common that if software engineering is researched at all, it’s as a field in the Computer Science department. All theoretical foundation, no practical discipline.

Pragmatic legitimacy

Pragmatic legitimacy is the support an audience gives an organisation because doing so is in its rational self-interest. A lot of support for open source software is in the form of pragmatic legitimacy: we open source our database because that will encourage grass-roots adoption, which is cheaper than a sales channel. But notice that, as with cognitive legitimacy, when we talk about open source we talk about a managerial decision to “open source”; we don’t talk about joining the Open Source Initiative, or bringing in a representative from the Software Freedom Law Center to train our attorneys. The idea holds legitimacy, if not the institution.

Come down from the conceptual to the project level, and more pragmatic legitimacy holds. An organisation uses Linux, but it doesn’t want to maintain its own Linux source tree, so it’s in that organisation’s interest to accept the Linux Foundation as a legitimate collaborator. In general “upstream” is a legitimate institution: you upstream your changes because it’s in your interest to accept the governance of the external maintainer team.

Moral legitimacy

Moral legitimacy perpetuates an institution because it represents positive values, or “the right thing to do”. A lot of people in the Free Software movement see GNU, and the FSF, as moral imperatives, so occasional missteps in governance can be forgiven or overlooked as they don’t represent the overall trajectory or challenge the belief structure.

In academia, a lot of arguments for research software engineering come from the idea that it’s the right thing to do: making reproducible software, or high-quality software, is good for research, therefore we should do it and rely on the collective expertise of organisations like the Society of Research Software Engineering, or the Software Sustainability Institute, or Better Scientific Software, to help us interpret the needs. But does that align with pragmatic legitimacy, and when it doesn’t, how is the conflict resolved? Is “high-quality software” a strong enough moral imperative among all stakeholders to influence the autonomy of a whole occupation?

Posted in academia, software-engineering | Leave a comment

On association

My research touches on the professionalisation (or otherwise) of software engineering, and particularly the association (or not) of software engineers with a professional body, or with each other (or not) through a professional body. So what’s that about?

In Engagement Motivations in Professional Associations, Mark Hager uses a model that separates incentives to belong to a professional association into public incentives (i.e. those good for the profession as a whole, or for society as a whole) and private incentives (i.e. those good for an individual practitioner). Various people have tried to argue that people are only motivated by the private incentives (i.e. de Tocqueville’s “enlightened self-interest”).

Below, I give a quick survey of the incentives in this model, and informally how I see them enacted in computing. If there’s a summary, it’s that any idea of professionalism has been enclosed by the corporate actors.

Public incentives

Promoting greater appreciation of field among practitioners

My dude, software engineering has this in spades. Whether it’s a newsletter encouraging people to apply formal methods to their work, a strike force evangelising the latest programming language, or a consultant explaining that the reason their methodology is failing is that you don’t methodologise hard enough, it’s not clear that you can even be in computering unless you’re telling everyone else what’s wrong with the way they computer. This is rarely done through formal associations though: while the SWEBOK does exist, I’d wager that the fraction of software engineers who refer to it in their work is 0 in whatever floating-point representation you’re using.

Ironically, the software craftsmanship movement suggests that a better way to promote good practice than professional associations is through medieval-style craft guilds, when professional associations are craft guilds that survived into the 20th century, with all the gatekeeping and back-scratching that entails.

Promoting public awareness of contributions in the field

If this happens, it seems to mostly be left to the marketing departments of large companies. The last I saw about augmented reality in the mainstream media was an advert for a product.

Influencing legislation and regulations that affect the field

Again, you’ll find a lot of this done in the policy departments of large companies. The large professional societies also get involved in lobbying work, but either explicitly walk back from discussions of regulation (ACM) or limit themselves to questions of research funding. Smaller organisations lobby on single-issue platforms (e.g. FSF Europe and the “public money, public code” campaign; the Document Foundation’s advocacy for open standards).

Maintaining a code of ethics for practice

It’s not like computering is devoid of ethics issues: artificial intelligence and the world of work; responsibility for loss of life, injury, or property damage caused by defective software; intellectual property and ownership; personal liberty, privacy, and data sovereignty; the list goes on. The professional societies, particularly those derived from or modelled on the engineering associations (ACM, IEEE, BCS), do have codes of ethics. Other smaller groups and individuals try to propose particular ethical codes, but there’s a network effect in play here. A code of ethics needs to be popular enough that clients of computerists and the public know about it and know to look out for it, with the far extreme being 100% coverage: either you have committed to the Hippocratic Oath, or you are not a practitioner of medicine.

Private incentives

Access to career information and employment opportunities

If you’re early-career, you want a job board to find someone who’s hiring for early-career roles. If you’re mid- or senior-career, you want a network where you can find out about opportunities and whether they’re worth pursuing. I don’t know if you’ve read the news lately, but staying employed in computering isn’t going great at the moment.

Opportunities to gain leadership experiences

How do you get that mid-career role? By showing that you can lead a project or a team, or have some other influence. What early-career role gives you those opportunities? (Crickets.) Ad hoc networking based on open source seems to fill in for professional association here: rather than doing voluntary work contributing to Communications of the ACM, people are depositing npm modules onto the web.

Access to current information in the field

Rather than reading Communications of the ACM, we’re all looking for task-oriented information at the time we have a task to complete: Q&A websites, technology-specific podcasts, and video channels are filling in for any clearing house of professional advancement (to the point where even for-profit examples like publishing companies aren’t filling the gaps: what was the last attempt at an equivalent to Code Complete, 2nd Edition that you can name?). This leads to a sort of balkanisation where anyone can quickly get up to speed on the technology they’re using, but generalising from that or building a holistic view is incredibly difficult. Certain blogs try to fill that gap, but again they are individually published and not typically associated with any professional body.

Professional development or education programs

We have degree programs, and indeed those usually have accredited curricula (the ACM has traditionally been very active in that field, and the BCS in the UK). But many of the degrees are Computer Science rather than Software Engineering, and do they teach QA, or systems administration, or project management, or human-computer interaction? Are there vocational courses in those topics? Are they well-regarded: by potential students, by potential employers, by the public?

And then there are vendor certifications.

Posted in academia, advancement of the self, Responsibility, software-engineering | Leave a comment

Your reminder that “British English” and “American English” are fictional constructs

Low-stakes conspiracy theory: they were invented by word processing marketers to justify spell-check features that weren’t necessary.

Evidence: the Oxford English Dictionary (Oxford being in Britain) gives, as the first sense of its entry for the “-ise” suffix, “A frequent spelling of -ize suffix, suffix forming verbs, which see.” So in a British dictionary, -ize is preferred. But on a computer, I have to change my whole hecking country to be able to write that!

Posted in Englisc | Leave a comment

On Scarcity

It’s called scarcity, and we can’t wait to see what you do with it.

Let’s start with the important bit. I think that over the last year, with acceleration toward the end of the year, I have heard of over 100,000 software engineers losing their jobs in some way. This is a tragedy. Each one of those people is a person, whose livelihood is at the whim of some capricious capitalist or board of same. Some had families, some were working their dream jobs, others had quiet-quit and were just paying the bills. Each one of them was let down by a system that values the line going up more than it values their families, their dreams, and their bills.

While I am sad for those people, I am excited for the changes in software engineering that will come in the next decade. Why? Because everything I like about computers came from a place of scarcity in computering, and everything I dislike about computers came from a place of abundance in computering.

The old, waterfall-but-not-quite, measure-twice-and-cut-once approach to project management came from a place of abundance. It’s cheaper, so the idea goes, to have a department of developers sitting around waiting for a functional specification to be completed and signed off by senior management than for them to be writing working software: what if they get it wrong?

The team at Xerox PARC – 50 folks who were just told to get on with it – designed a way of thinking about computers that meant a single child (or, even better, a small group of children) could think about a problem and solve it in a computer themselves. Some of those 50 people also designed the computer they’d do it on, alongside a network and some peripherals.

This begat eXtreme Programming, which burst onto the scene in a time of scarcity (the original .com crash). People had been doing it for a while, but when everyone else ran out of money they started to listen: a small team of maybe 10 folks, left to get on with it, were running rings around departments of 200 people.

Speaking of the .com crash, this is the time when everyone realised how expensive those Oracle and Solaris licenses were. Especially if you compared them with the zero charged for GNU, Linux, and MySQL. The LAMP stack – the beginning of mainstream adoption for GNU and free software in general – is a software scarcity feature.

One of the early (earlier than the .com crash) wins for GNU and the Free Software Foundation was getting NeXT to open up their Objective-C compiler. NeXT was a small team taking off-the-shelf and free components, building a system that rivalled anything Microsoft, AT&T, HP, IBM, Sun, or Digital were doing – and that outlived almost all of them. Remember that the NeXT CEO wouldn’t become a billionaire until his other company released Toy Story, and that NeXT not only did the above, but also defined the first wave of dynamic websites and e-commerce: the best web technology was scarcity web technology.

What’s happened since those stories were enacted is that computerists have collectively forgotten how scarcity breeds innovation. You don’t need to know how 10 folks round a whiteboard can outsmart a 200-engineer department if your department hired 200 engineers _this month_: just put half of them on solving your problems, and half of them on the problems caused by the first half.

Thus we get SAFe and Scrumbut: frameworks for paying lip service to agile development while making sure that each group of 10 folks doesn’t do anything that wasn’t signed off across the other 350 groups of 10 folks.

Thus we get software engineering practices designed to make it easier to add code than to read it: what’s the point of reading the existing code if the one person who wrote it has already been replaced 175 times over, and has moved teams twice?

Thus we get not DevOps, but the DevOps department: why get your developers and ops folks to talk to each other if it’s cheaper to just hire another 200 folks to sit between them?

Thus we get the npm ecosystem: what’s the point of understanding your code if it’s cheaper just to randomly import somebody else’s and hire a team of 30 to deal with the fallout?

Thus we get corporate open source: what’s the point of software freedom when you can hire 100 people to push out code that doesn’t fulfil your needs but makes it easier to hire the next 500 people?

I am sad for the many people whose lives have been upended by the current downturn in the computering economy, but I am also sad for how little gets done within that economy. I look forward to the coming wave of innovation, and the ability to once again do more with less.

Posted in software-engineering | Leave a comment

Transcendence

I was at the RSE conference in Newcastle, along with many people whom I have met, worked with, and enjoyed talking to in the past. Many more people whom I have met, worked with, and enjoyed talking to in the past were at an entirely different conference in Aberystwyth, and I am disappointed to have missed out there.

One of the keynote speakers at RSEcon22, Marlene Mhangami, talked about the idea of transcendence through community membership. They cited evidence that fans of soccer teams go through the same hormonal shifts, at the same intensity, during a match as the players themselves. Effectively the fans are on the pitch, playing the game, feeling the same feelings as their comrades on the team, even though they are in the stands or even at home watching on TV.

I do not know that I have felt that sense of transcendence, and believe I am probably missing out both on strong emotional connections with others and on an ability to contribute effectively to society (to a society, to any society) by lacking the strong motivation that comes from knowing that making other people happier makes me happier, because I am with them.

Leave a comment

The Image Model

I was reflecting on things that I know now, a couple of decades in to my career, that I wish I had been told at the beginning. Many things came to mind, but the most immediate from a technological perspective was Smalltalk’s image model.

It’s not even the technology of the Smalltalk image that’s relevant, but the model of thinking that works well with it. In Smalltalk, there are two (three) important files for a given machine: the VM is the machine that can run Smalltalk; the image is a snapshot of all of the Smalltalk objects on the machine(; and the sources are the source code for the classes and methods in that image).

This has weird implications for how you work that differ greatly from “compile this text stream” or “interpret this text stream” programming environments. People who have used the ENVY/Developer tool generally seem to wax lyrical and wonder why it was never reinvented, like the rest of software engineering is the beach with the ruins of the Statue of Liberty poking out from the end of the Planet of the Apes. But the bit I wish I had been told about: the image model puts the “personal” in “personal computer” as far as programming is concerned. Every piece of software you write is part of your image: a peer of the rest of the software you wrote, of the software that other people wrote that you added, and of the software that was already there when you first booted the machine.

I wish I had been told to think like that: that each tool or project is not a separate tool or project, but a cumulative addition to the image. To keep everything I wrote, so that the next time I needed something I might not need to write it. To make sure, when using new things, that I could integrate them with the image (it didn’t exist at the time, but TruffleSqueak is very much this idea). To give up asking “how can I write software to solve this problem”, and to start asking “how can I solve this problem with software, writing some if necessary”?

It would be the difference between twenty years of experience and one year of experience, twenty times over.

Posted in advancement of the self, smalltalk | Tagged | Leave a comment

Phrases in computing that might need retiring

The upcoming issue of the SICPers newsletter is all about phrases that were introduced to computing to mean one thing, but seem to get used in practice to mean another. This annoys purists, pedants, and historians: it also annoys the kind of software engineer who dives into the literature to see how ideas were discussed and used and finds that the discussions and usages were about something entirely different.

So should we just abandon all technical terminology in computing? Maybe. Here’s an irreverent guide.

Object-Oriented Programming

Luckily the industry doesn’t really use this term any more so we can ignore the changed meaning. The small club of people who still care can use it correctly, everybody else can carry on not using it. Just be aware when diving through the history books that it might mean “extreme late binding of all things” or it might mean “modules, but using the word class” depending on the age of the text.

Agile

Nope, this one’s in the bin, I’m afraid. It used to mean “not waterfall” and now means “waterfall with a status meeting every day and an internal demo every two weeks”. We have to find a new way to discuss the idea that maybe we focus on the working software and not on the organisational bureaucracy, and that way does not involve the word…

DevOps

If you can hire a “DevOps engineer” to fulfil a specific role on a software team then we have all lost at using the phrase DevOps.

Artificial Intelligence

This one used to mean “psychologist/neuroscientist developing computer models to understand how intelligence works” and now means “an algorithm pushed to production by a programmer who doesn’t understand it”. But there is a potential for confusion with the minor but common usage “actually a collection of if statements but last I checked AI wasn’t a protected term” which you have to be aware of. Probably OK, in fact you should use it more in your next grant bid.

Technical Debt

Previously something very specific used in the context of financial technology development. Now means whatever anybody needs it to mean if they want their product owner to let them do some hobbyist programming on their line-of-business software, or else. Can definitely be retired.

Behaviour-Driven Development

Was originally the idea that maybe the things your software does should depend on the things the customers want it to do. Now means automated tests with some particular syntax. We need a different term to suggest that maybe the things your software does should depend on the things the customers want it to do, but I think we can carry on using BDD in the “I wrote some tests at some point!” sense.

Reasoning About Software

Definitely another one for the bin. If Tony Hoare were not alive today he would be turning in his grave.

Posted in whatevs | 3 Comments