While Alan Turing is regarded by many as the grandfather of Artificial Intelligence, George Boole has some claim to that epithet too. His Investigation of the Laws of Thought is nothing other than a systematisation of “those universal laws of thought which are the basis of all reasoning”. The regularisation of logic and probability into an algebraic form renders them amenable to the sort of computing that Turing later showed could be performed just as well mechanically or electronically as with pencil and paper.
But when did people start talking about the logic of binary operations in computers as being due to Boole? Turing appears never to have mentioned his name: although he certainly did talk about the benefits of implementing a computer’s memory as a collection of 0s and 1s, and describe operations thereon, he did not call them Boolean or reference Boole.
In the ACM digital library, the earliest use of the word “Boolean” appears in “Symbolic synthesis of digital computers”, from 1952. In its abstract, Irving S. Reed describes a computer as “a Boolean machine” and “an automatic operational filing system”. He cites his own technical report from 1951:
Equations (1.33) and (1.35) show that the simple Boolean system, given in (1.34) may be analysed physically by a machine consisting of N clocked flip flops for the dependent variables and suitable physical devices for producing the sum and product of the various variables. Such a machine will be called the simple Boolean machine.
The best examples of simple Boolean machines known to this author are the Maddidas and (or) universal computers being built or considered by Computer Research Corporation, Northrop Aircraft Inc, Hughes Aircraft, Cal. Tech., and others. It is this author’s belief that all the electronic and digital relay computers in existence today may be interpreted as simple Boolean machines if the various elements of these machines are regarded in an appropriate manner, but this has yet to be proved.
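To make that concrete, here is a toy sketch in Python (my own illustration, not Reed’s notation): the machine’s whole state is a handful of flip-flop values, and each clock pulse replaces them with Boolean sums (OR) and products (AND) of the current values.

```python
# A toy "simple Boolean machine": the flip-flops hold the dependent
# variables, and every clock pulse reloads each one with a Boolean
# sum or product of the current values. The particular update
# functions below are arbitrary examples, not anything from Reed's report.

def tick(state):
    """Advance the machine by one clock cycle."""
    a, b, c = state
    return (a and b,   # product (AND)
            b or c,    # sum (OR)
            not a)     # complement

state = (True, False, True)
for cycle in range(4):
    print(cycle, state)
    state = tick(state)
```

Registers plus combinational logic is still how synchronous digital circuits are described today, which is roughly Reed’s point about every machine of his day being interpretable as a simple Boolean machine.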
So at least in the USA, the connection between digital computing and Boolean logic was being explored almost as soon as the computer was invented. Though not universally: the book “The Origins of Digital Computers”, edited by Brian Randell, with articles from Charles Babbage, Grace Hopper, John Mauchly, and others, doesn’t mention Boole at all. Neither does von Neumann’s famous “First Draft” report on the EDVAC.
So, second question. Why do programmers spell Boole “bool”? Who first decided that five characters was too many, and that four was just right?
Some early programming languages, like Lisp, don’t have a logical data type at all. Lisp uses the empty list to mean “false” and anything else to mean “true”. Snobol is weird (he said, surprising nobody). It also doesn’t have a logical type, conditional execution being predicated on whether an operation signals failure. So the “less than” function can return the empty string if a<b, or it can fail.
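For flavour, here is a loose Python analogy of the Snobol approach (not real Snobol; the helper name lt, and the use of None to stand in for Snobol’s failure signal, are mine):

```python
# Loose analogy of Snobol-style failure-driven control flow: the comparison
# either succeeds, yielding the empty string, or "fails" -- modelled here
# with None rather than a genuine failure signal.

def lt(a, b):
    return "" if a < b else None

result = lt(2, 3)
if result is not None:   # branch on success or failure, not on a boolean value
    print("took the branch")
else:
    print("comparison failed; branch skipped")
```

Note the explicit comparison with None: Python, like Lisp, treats an empty string or list as false, so testing the result directly would take the wrong branch here.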
Fortran has a LOGICAL type, logically. COBOL, being designed to be illogical wherever Fortran is logical, has level 88 condition names. Simula, Algol and Pascal use the word ‘boolean’, modulo capitalisation.
ML definitely has a bool type, but did it always? I can’t tell whether it was introduced in Standard ML (1980s–1990) or was there earlier (1973+). Nonetheless, it does appear that ML is the source of misspelled Booles.