The hardest thing

I now have to make the hardest decision in programming. It has nothing to do with naming things or invalidating caches: rather, it is which *nix to install on a computer. NextBSD and MidnightBSD both have goals that are relevant to my interests, but both seem pretty quiet.

Posted in UNIX | Leave a comment

Swift

Speaking of Swift, what idiot called it swift-evolution and not “A Modest Proposal”?

Posted in AAPL | Leave a comment

Eating the bubble

How far back do you want to go to find people telling you that JavaScript is eating the world? Last year? Two years ago? Three? Five?

It’s a slow digestion process, if that’s what is happening. Five years ago, there was no such thing as Swift. For the last four years, I’ve been told at mobile dev conferences that Swift is eating the world, too. It seems like the clear, unambiguous direction being taken by software is different depending on which room you’re in.

It’s time to leave the room. It looks sunny outside, but there are a few clouds in the sky. I pull out my phone and check the weather forecast, and a huge distributed system of C, Fortran, Java, and CUDA tells me that I’m probably going to be lucky and stay dry. That means I’m likely to go out to the Olimpick Games this evening, so I make sure to grab some cash. A huge distributed system of C, COBOL and Java rumbles into action to give me my money, and tell my bank that they owe a little more money to the bank that operates the ATM.

It seems like quite a lot of the world is safe from whichever bubble is being eaten.

Posted in whatevs | Tagged | 1 Comment

Netscape won

Back when AOL was a standalone company and Sun Microsystems existed at all, Netscape said that they wanted Windows to be a buggy collection of device drivers that people used to access the web, which would be the real platform.

It took long enough that Netscape no longer exists, but they won. I have three computers that I regularly use:

  • my work Mac has one Mac-only, Mac-native app open during the day[*]. Everything else is on the web, or is cross-platform. It doesn’t particularly matter what _technology_ the cross-platform stuff is made out of because the fact that it’s cross-platform means the platform is irrelevant, and the technology is just a choice of how the vendors spend their money. I know that quite a bit of it is Electron, wrapped web, or Java.
  • my home Windows PC has some emulators for playing (old) platform-specific games, and otherwise only runs cross-platform apps[*] and accesses the web.
  • my home Linux laptop has the tools I need to write the native application I’m writing as a side project, and everything else is cross-platform or on the web.

[*] I’m ignoring the built-in file browsers, which are forced upon me but I don’t use.

Posted in Platform | Tagged | 2 Comments

On twitter [or otherwise]

As occasionally happens, I’ve been reevaluating my relationships with social media. The last time I did this I received emails asking whether I was dead, so let me assure you that such rumours are greatly exaggerated.

Long-time readers will remember that I joined twitter about a billion years ago as ‘iamleeg’, a name with a convoluted history that I won’t bore you with, but one that made people think I was called Ian. So I changed to secboffin, as I had held the job title Security Boffin through a number of employers. After about nine months in which I didn’t interact with twitter at all, I deleted my account: hence people checking I wasn’t dead.

This time, here’s a heads up: I don’t use twitter any more, but it definitely uses me. When I decided I didn’t want a facebook account any longer, I just stopped using it, then deactivated my account. Done. For some reason when I stop using my twitter account, I sneak back in later, probably for the Skinnerian pleasure of seeing the likes and RTs for posts about new articles here. Then come the asinine replies and tepid takes, and eventually I’m sinking serious time into being meaningless on Twitter.

I’d like to take back my meaninglessness for myself, thank you very much. This digital Maoism which encourages me, and others like me, to engage with the system with only the reward of more engagement, is not for me any more.

And let me make an aside here on federation and digital sharecropping. Yes, the current system is not in my favour, and yes, it would be possible to make one I would find more favourable. I actually have an account on one of the Free Software microblogging things, but mindlessly wasting time there is no better than mindlessly wasting time on Twitter. And besides, they don’t have twoptwips.

The ideal of the fediverse is flawed, anyway. The technology used on the instance where I have an account is by and large blocked from syncing with a section of the fediverse that uses a different technology, because some sites that use that technology allow content that is welcome in one nation’s culture and forbidden in another’s, even though the site of which I am a member doesn’t include that content. Such blanket bans are not how federation is supposed to work, but they are how it does work, because actually building n(n-1)/2 individual relationships is hard, particularly when you work to the flawed assumption that n should be everyone.

And let’s not pretend that I’m somehow “taking back control” of my information by only publishing here. This domain is effectively rented from the registry on my behalf by an agent, the VPS that the blog runs on is rented, the network access is rented…few of the moving parts here are “mine”. The same would be true if this were a blog hosted on Blogger, or Medium, or Twitter, and it’s true here, too.

Anyway, enough about the hollow promises of the fediverse. The point is, while I’m paying for it, you can see my posts here. You can see feeds of the posts here. You can write comments. You can write me emails.

I ATEN’T DEAD.

Posted in Twitter | 1 Comment

On null

I’ve had an interesting conversation on the topic of null over the last few days, spurred by the logical disaster of null. I disagreed with the statement in the post that:

Logically-speaking, there is no such thing as Null

This is true in some logics, but not in all logics. Boolean logic, as described in An Investigation of the Laws of Thought, admits only two values:

instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values 0 and 1.

…and in a later chapter he goes on to introduce probabilities. Anyway. A statement either does hold (it has value 1, with some probability p) or it does not (it has value 0, with some probability 1-p; the Law of the Excluded Middle means that there is no probability that the statement has any other value).

Let’s look at an example. Let x be the lemma “All people are mortal”, and y be the conclusion “Socrates is mortal”. What is the value of y? It isn’t true, because we only know that people are mortal and do not know that Socrates is a person. On the other hand, we don’t know that Socrates is not a person, so it isn’t false either. We need another value, that means “this cannot be decided given the current state of our knowledge”.

In SQL, we find a logic that encodes true, false, and “unknown/undecided” as three different outcomes of a predicate, with the third state being given the name null. If we had a table linking Identity to Class and a table listing the Mortality of different Classes of things, then we could join those two tables on their Class and ask “what is the Mortality of the Class of which Socrates is a member”, and find the answer null.
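Here is a minimal sketch of that three-valued behaviour; the schema is my own invention for illustration:

CREATE TABLE Identity (Name TEXT, Class TEXT);
CREATE TABLE Mortality (Class TEXT, IsMortal BOOLEAN);

INSERT INTO Identity VALUES ('Socrates', 'Person');
-- Nobody has yet recorded whether the Class 'Person' is mortal.

SELECT m.IsMortal
FROM Identity i
LEFT JOIN Mortality m ON m.Class = i.Class
WHERE i.Name = 'Socrates';
-- One row comes back, and its IsMortal is NULL: "I don't know".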

But there’s a different mathematics behind relational databases, the Relational Calculus, of which SQL is an imperfect imitation. In the relational calculus, predicates can only be true or false; there is no “undecided” state. Now that doesn’t mean that the answer to the above question is either true or false: it means that that question cannot be asked. We must ask a different question.

“What is the set of all Mortality values m in the set of tuples (m, c) where c is any of the values of Class that appear in the set of tuples (x, c) where x is Socrates?”

Whew! It’s long-winded, but we can ask it, and the answer has a value: the empty set. By extension, we could always change any question we don’t yet know the answer to into a question of the form “what is the set of known answers to this question”. If we know that the set has a maximum cardinality of 1, then we have reinvented the Optional/Maybe type: it either contains a value or it does not. You get its possible value to do something by sending it a foreach message.
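As a sketch of that design, with the data and names invented for illustration, here is the same question asked with C#’s LINQ, which answers with sets rather than with null:

using System;
using System.Linq;

class EmptySetNotNull
{
    static void Main()
    {
        var identity = new[] { (Name: "Socrates", Class: "Person") };
        var mortality = new (string Class, bool IsMortal)[0]; // nothing recorded yet

        // "The set of all Mortality values m in the set of tuples (m, c)
        // where c is any Class paired with Socrates":
        var answers = from i in identity
                      where i.Name == "Socrates"
                      join m in mortality on i.Class equals m.Class
                      select m.IsMortal;

        // The answer is the empty set, not null: foreach simply does nothing.
        foreach (var isMortal in answers)
        {
            Console.WriteLine(isMortal);
        }
    }
}

If we know the join can produce at most one row, that empty-or-singleton sequence is the Optional/Maybe type wearing different clothes.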

And so we ask whether we would rather model our problem using a binary logic, where we have to consider each question asked in the problem to decide whether it needs to be rewritten as a set membership test, or a ternary logic, where we have to consider that the answer to any question may be the “I don’t know” value.

Implementationally-speaking, there are too many damn nulls

We’ve chosen a design, and now we get to implement it. In an implementation language like Java, Objective-C or Ruby, a null value is supplied as a bottom type, which is to say that there is a magic null or nil keyword whose value acts as a subtype of all other types in the system. Good: we get “I don’t know” behaviour for free anywhere we might want it. Bad: we get that behaviour anywhere else too, so we need to think carefully to be sure that, everywhere “I don’t know” is not an acceptable answer, that invariant holds in our implementation; or, for those of us who don’t like thinking, we have to pepper our programs with defensive checks.
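As an illustration, here’s what that peppering looks like in C# (whose nulls are terminal, like Java’s); the Person and Greeter types are invented for this sketch:

using System;

class Person
{
    public string Name;
    public Person(string name) { Name = name; }
}

class Greeter
{
    // null inhabits every reference type, so the "no unknowns here"
    // invariant has to be asserted by hand at the boundary.
    public static string Greet(Person p)
    {
        if (p == null) throw new ArgumentNullException(nameof(p));
        return "Hello, " + p.Name;
    }
}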

I picked those three languages as examples, by the way, because their implementations of null are totally different, which ruins the “you should never use a language with null because X” trope.

  • Java nulls are terminal: if you see a null, you blew it.
  • Objective-C nulls are viral: if you see a null, you say a null.[*]
  • Ruby nulls are whatever you want: monkey-patch NilClass until it does your thing.

[*] Objective-C really secretly has _objc_setNilReceiver(id), but I didn’t tell you that.

Languages like Haskell don’t have an empty bottom, so anywhere you might want a null you are going to need to build a thing that represents a null. On the other hand, anywhere you do not want a null you are not going to get one, because you didn’t tell your program how to build one.

Either approach will work. There may be others.

Posted in code-level | Tagged | Leave a comment

“Brand”: you win some, you lose some

The 20th anniversary of the iMac reminded me that while many people capitalise the word “iMac” as Apple would like, including John “I never capitalise trademarks the way companies like” Gruber, nobody uses the article-less form that Apple does:

So you can do everything you love to do on iMac.

I, like many other people, would insert ‘an’ in there, and Apple have lost that battle. There’s probably somebody in Elephant who has chosen that hill to die on.

Posted in AAPL | Leave a comment

Let’s talk about self-documenting code

You think your code is self-documenting. That it doesn’t need comments or Doxygen or little diagrams, because it’s clear from the code what it does.

I do not think that that is true.

Even if your reader has at least as much knowledge of the programming language you’ve used as you have, and at least as much knowledge of the libraries you’ve used as you have, there is still no way that your code is self-documenting.

How long have you been doing your job? How long have you been talking to experts in the problem domain, solving similar problems, creating software in this region? The likelihood is, whoever you are, that the new person on your team has never done that, and that your code contains all of the jargon terms and assumptions that go with however-much-experience-you-have experience at solving those problems.

How long were you working on that story, or fixing that bug? How long have you spent researching that specific change that you made? However long it is, everybody else on your team has not spent that long. You are the world expert at that chunk of code, and it’s self-documenting to you as the world expert. But not to anybody else.

We were told about “working software over comprehensive documentation”, and that’s true, but nobody said anything about avoiding sufficient documentation. And nobody else has invested the time that you did to understand the code that you just wrote, so the only person for whom your code is self-documenting is you.

Help us other programmer folks out: think about us when you’re tempted to avoid documentation.

Posted in advancement of the self, architecture of sorts, documentation | 8 Comments

Rethinking Object-Oriented Design figures

My iPad-drawn graphics in Rethinking OOD at App Builders 2018 were not very good, so here are the ink-and-paper versions. Please have them to hand when viewing the talk (which is the first of a two-parter, though I haven’t pitched part two anywhere yet).

Posted in OOP, Talk | Leave a comment

Inheritance still doesn’t make any sense

Some ideas based on feedback on the post Why inheritance never made any sense:

Feedback: Subtypes are necessary

The only one of these that is practically workable is behaviour inheritance <=> subtype inheritance: I’m sorry that you were exposed to Java at such an impressionable age. The compilers of languages like Java enable subclass = subtype by automatically assuming that a subclass is a valid value for variable binding, for example. However, they do nothing to ensure subclass = subtype. This is valid C#, a language very like Java for the purposes of this discussion:

namespace QuickTestThing
{
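    // Class1 overrides object.ToString with a well-behaved implementation.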
    class Class1
    {
        public override string ToString()
        {
            return "Class1";
        }
    }
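    // Class2 compiles as a subtype of Class1, but its ToString throws
    // instead of answering: nothing checks that it behaves like Class1.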
    class Class2 : Class1
    {
        public override string ToString()
        {
            throw new Exception();
        }
    }
}

Now is Class2 a subtype of Class1? Does the compiler let you pretend that it is?

You don’t even need inheritance

As discussed in the original post, the whole “favour composition over inheritance” movement gets by fine with no inheritance. Composition and delegation (I don’t know about this message, I’ll forward it to someone who does) let you get the same behaviour.
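As a minimal sketch, with all the types invented for illustration, here’s that move in the same C# as above:

interface IGreeter { string Greet(); }

class EnglishGreeter : IGreeter
{
    public string Greet() { return "Hello"; }
}

// LoudGreeter is not a subclass of anything: it composes a greeter and
// forwards the message it can't answer itself to the object that can.
class LoudGreeter : IGreeter
{
    private readonly IGreeter inner;
    public LoudGreeter(IGreeter inner) { this.inner = inner; }

    public string Greet()
    {
        return inner.Greet().ToUpper() + "!";
    }
}

Substitutability comes from the interface, the reuse comes from forwarding, and no behaviour is inherited anywhere.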

Feedback: build it yourself

Can I demonstrate a language that has all three of subtype inheritance, behaviour inheritance, and categorical inheritance as distinct language features? Yes, but I would need to learn Racket first. I’m on it.

But in the meantime, re-read the “you don’t even need inheritance” paragraph and think about how you would build each of those three ideas out of delegation.

Posted in OOP | Leave a comment