There are three different types of inheritance going on.
- Ontological inheritance is about specialisation: this thing is a specific variety of that thing (a football is a sphere and it has this radius)
- Abstract data type inheritance is about substitution: this thing behaves in all the ways that thing does and has this behaviour (this is the Liskov substitution principle)
- Implementation inheritance is about code sharing: this thing takes some of the properties of that thing and overrides or augments them in this way. The inheritance in my post On Inheritance is this type and only this type of inheritance.
These are three different, and frequently irreconcilable, relationships. Requiring any, or even all, of them presents no difficulty. However, requiring that one mechanism support two or more of them is asking for trouble.
A common counterexample to OO inheritance is the relationship between a square and a rectangle. Geometrically, a square is a specialisation of a rectangle: every square is a rectangle, not every rectangle is a square. For all s in Squares, s is a Rectangle and width of s is equal to height of s. As a type, this relationship is reversed: you can use a rectangle everywhere you can use a square (by having a rectangle with the same width and height), but you cannot use a square everywhere you can use a rectangle (for example, you can’t give it a different width and height).
Notice that this is incompatibility between the inheritance directions of the geometric properties and the abstract data type properties of squares and rectangles; two dimensions which are completely unrelated to each other and indeed to any form of software implementation. We have so far said nothing about implementation inheritance, so haven’t even considered writing software.
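To see the type-level conflict in code, here is a minimal C++ sketch (all class and function names are invented for illustration): a mutable Square inheriting from Rectangle breaks a function that is correct for every Rectangle.

```cpp
#include <cassert>

class Rectangle {
public:
    virtual ~Rectangle() = default;
    virtual void setWidth(int w) { width = w; }
    virtual void setHeight(int h) { height = h; }
    int area() const { return width * height; }
protected:
    int width = 0;
    int height = 0;
};

class Square : public Rectangle {
public:
    // A square's invariant is width == height, so each setter must update both.
    void setWidth(int w) override { width = height = w; }
    void setHeight(int h) override { width = height = h; }
};

// Correct for every Rectangle...
void stretch(Rectangle& r) {
    r.setWidth(4);
    r.setHeight(5);
    assert(r.area() == 20); // ...but fails when r is actually a Square (area is 25).
}
```

The compiler happily accepts passing a Square to stretch, because the class hierarchy claims a substitutability that the geometric constraint forbids.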
Smalltalk and many later languages use single inheritance for implementation inheritance, because multiple inheritance is incompatible with the goal of implementation inheritance due to the diamond problem (traits provide a reliable way for the incompatibility to manifest, and leave resolution as an exercise for the reader). On the other hand, single inheritance is incompatible with ontological inheritance, as a square is both a rectangle and an equilateral polygon.
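A sketch of that diamond, in C++ (which does permit multiple implementation inheritance; the names are invented): Square inherits describe() from two parents, and the language refuses to pick one for you.

```cpp
#include <iostream>

class Polygon {
public:
    virtual ~Polygon() = default;
    virtual void describe() const { std::cout << "some polygon\n"; }
};

class Rectangle : public virtual Polygon {
public:
    void describe() const override { std::cout << "a rectangle\n"; }
};

class EquilateralPolygon : public virtual Polygon {
public:
    void describe() const override { std::cout << "an equilateral polygon\n"; }
};

class Square : public Rectangle, public EquilateralPolygon {
public:
    // Without this override there is no unique final overrider of
    // describe(), and the program is ill-formed: resolving the diamond
    // is, quite literally, left to the reader.
    void describe() const override { std::cout << "a square\n"; }
};
```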
The Smalltalk blue book describes inheritance solely in terms of implementation inheritance:
A subclass specifies that its instances will be the same as instances of another class, called its superclass, except for the differences that are explicitly stated.
Notice what is missing: no mention that a subclass instance must be able to replace a superclass instance everywhere in a program; no mention that a subclass instance must satisfy all conceptual tests for an instance of its superclass.
Inheritance was never a problem: trying to use the same tree for three different concepts was the problem.
“Favour composition over inheritance” is basically giving up on implementation inheritance. We can’t work out how to make it work, so we’ll avoid it: get implementation sharing by delegation instead of by subclassing.
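A minimal sketch of that trade, with invented names: Stack gets std::vector's implementation by holding one and forwarding to it, rather than by subclassing it, so nothing about Stack's type claims it is a vector.

```cpp
#include <vector>

// Implementation sharing by delegation: Stack is not a subtype of
// std::vector, so callers cannot, say, insert() into the middle of it.
template <typename T>
class Stack {
public:
    void push(const T& value) { items.push_back(value); }
    T pop() {
        T top = items.back();   // delegate the storage work to the owned vector
        items.pop_back();
        return top;
    }
    bool empty() const { return items.empty(); }
private:
    std::vector<T> items;       // composition: has-a, not is-a
};
```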
Eiffel, and particular disciplined approaches to using languages like Java, tighten up the “inheritance is subtyping” relationship by relaxing the “inheritance is re-use” relationship (if the same method appears twice in unrelated parts of the tree, you have to live with it, in order to retain the property that every subclass is a subtype of its parent). This is fine, as long as you don’t try to also model the problem domain using the inheritance tree, but much of the OO literature recommends that you do by talking about domain-driven design.
Traits approaches tighten up the “inheritance is specialisation” relationship by relaxing the “inheritance is re-use” relationship (if two super categories both provide the same property of an instance of a category, neither is provided and you have to write it yourself). This is fine, as long as you don’t try to also treat subclasses as covariant subtypes of their superclasses, but much of the OO literature recommends that you do by talking about Liskov Substitution Principle and how a type in a method signature means that type or any subclass.
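C++ has no traits as such, but its plain multiple inheritance gives a close analogue of that conflict rule; a sketch with invented names: when two parents supply the same method, neither is provided, and the subclass must write its own.

```cpp
#include <string>

struct Printable {
    std::string describe() const { return "printable"; }
};

struct Serializable {
    std::string describe() const { return "serializable"; }
};

struct Document : Printable, Serializable {
    // Without this definition, calling document.describe() is ambiguous
    // and will not compile: neither parent's version wins, so you have
    // to write (or explicitly select) one yourself.
    std::string describe() const { return Printable::describe() + " document"; }
};
```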
What the literature should do, I believe, is say “here are the three types of inheritance, focus on any one of them at a time”. I also believe that the languages should support that (obviously Smalltalk, Ruby and friends do support that by not having any type constraints).
- If I’m using inheritance as a code sharing tool, it should not be assumed that my subclasses are also subtypes.
- If I am using subtypes to tighten up interface contracts, I should be not only allowed to mark a class anywhere in the tree as a subtype of another class anywhere in the tree, but required to do so: once again, it should not be assumed that my subclasses are also subtypes.
- If I need to indicate conceptual specialisation via classes, this should also not be assumed to follow the inheritance tree. I should be not only allowed to mark a class anywhere in the tree as a subset of another class, but required to do so: once again, it should not be assumed that my subclasses are also specialisations. (A sketch after this list illustrates these distinctions.)
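As a partial illustration of those bullets, constrained to an existing language (names invented): C++ already separates "inherit the code" from "inherit the type". Private inheritance shares implementation without creating a subtype; public inheritance is an explicit subtype claim; conceptual specialisation has no mechanism at all.

```cpp
#include <vector>

class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

// First bullet: code sharing only. Stack reuses std::vector's
// implementation, but private inheritance means a Stack cannot be
// passed where a std::vector is expected.
class Stack : private std::vector<int> {
public:
    void push(int v) { push_back(v); }
    int pop() { int v = back(); pop_back(); return v; }
};

// Second bullet: an explicit subtype claim. Public inheritance is the
// programmer marking Circle as substitutable wherever a Shape is used.
class Circle : public Shape {
public:
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265358979 * radius * radius; }
private:
    double radius;
};

// Third bullet: nothing in the language records that, ontologically,
// every circle is an ellipse; that relationship lives only in prose.
```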
Your domain model is not your object model. Your domain model is not your abstract data type model. Your object model is not your abstract data type model.
Now inheritance is easy again.
Can you perhaps provide examples for the last three points at the end?
It would help a lot in understanding your meaning.
Thanks!
Examples would be very much appreciated :-)
The most lucid, succinct and accessible description of this nest of misunderstanding I’ve yet seen. Thanks for posting.
Inheritance and composition are both about finding common components in the data or in the functionality. I found it delightful to discover the link between category theory, class inheritance and composition. I have the connection written down here: https://github.com/kummahiih/python-domain-equations and here is the definition of a category if you are a lazy googler: https://en.wikipedia.org/wiki/Category_(mathematics)#Definition
One difference between inheritance and composition you don’t mention here is the life cycle of the functionality or the data: if the functionality might have a different life cycle, bring it in with dependency injection; if your data has a different life cycle, take it in as a parameter and use composition.
The usual reason for a different life cycle is support for unit testing.
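A small sketch of that life-cycle point (invented names): the clock outlives any single report and is swapped for a fake in unit tests, so it is injected rather than baked into a class hierarchy.

```cpp
#include <ctime>

// The injected dependency: its lifetime and implementation are chosen
// by the caller, not fixed by Report's ancestry.
class Clock {
public:
    virtual ~Clock() = default;
    virtual std::time_t now() const = 0;
};

class SystemClock : public Clock {
public:
    std::time_t now() const override { return std::time(nullptr); }
};

// For unit tests: a clock whose time never moves.
class FixedClock : public Clock {
public:
    explicit FixedClock(std::time_t t) : t(t) {}
    std::time_t now() const override { return t; }
private:
    std::time_t t;
};

class Report {
public:
    explicit Report(const Clock& clock) : clock(clock) {}   // injection
    std::time_t timestamp() const { return clock.now(); }
private:
    const Clock& clock;   // composition: Report has a clock, it is not one
};

// In production:  SystemClock clock; Report report(clock);
// In a unit test: FixedClock clock(1234567890); Report report(clock);
```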
In 1996, the programming language Objective Caml (the name later shortened to OCaml) was invented to try to separate notions of subtyping, inheritance, etc. Could you add your thoughts on what you think of its innovations, and why they didn’t become popular?
(Your name field doesn’t accommodate my name: Christopher Froehlich–no big deal)
I think your post is interesting at a philosophical level, and I would even, for the most part, agree with you. I have a hard time cogitating on big ideas without concrete examples to back them, which is my problem–not yours.
There is one place where I absolutely love inheritance: at the database tier. I love it in one very specific way, which is that I can define all my entities as inheriting from a single base class which defines nothing except a unique Id. It’s a very simple sort of behavioral expectation: do you exist? It allows for some rather elegant simplifications of logic at higher layers when it comes to crafting resources and interfaces.
I can’t really speak to the rest of the argument, as I have no other concrete examples to draw from, but in this one specific case–inheritance serves a purpose and serves it well (imo).
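If a concrete picture helps, this is roughly the shape being described in the comment above (a sketch; the names are invented): a base class contributing nothing but identity.

```cpp
#include <string>
#include <utility>

// The whole contract: "do you exist?" Every persisted entity has an Id,
// and nothing else is promised about it.
class Entity {
public:
    explicit Entity(std::string id) : id(std::move(id)) {}
    virtual ~Entity() = default;
    const std::string& Id() const { return id; }
private:
    std::string id;
};

class Customer : public Entity {
public:
    Customer(std::string id, std::string name)
        : Entity(std::move(id)), name(std::move(name)) {}
    const std::string& Name() const { return name; }
private:
    std::string name;
};

// Higher layers can now address any entity uniformly by its Id.
```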
Hey, you say that Ruby supports those three types of inheritance; how are they implemented?
I suppose that 1) and 3) are implemented via traditional inheritance and we can achieve 2) by using modules.
Am I right?
Thank you.
Great conceptual illustration of the problems — it also shows how poor traditional OO examples of “Cat and Dog extend Animal” and “Square extends Rectangle” are.
However, I find one of the key problems with inheritance is trying to apply it to entire entities — it breaks down for non-trivial entities having multiple interactions, since the API & code reuse needs of each interaction may be different.
The solution I bring to bear is to factor interactions separately, designing their interfaces on a behavioral basis. This seems highly effective in allowing robust substitution, code reuse where warranted, and independent variation of different entity interactions.
An article & commentary on this approach:
– http://literatejava.com/oo-design/oo-design-is-more-about-doing-than-being/
– http://literatejava.com/java-language/why-inheritance-never-made-any-sense/
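A hedged reading of that approach in code (names invented): each interaction gets its own small interface, so the reuse and substitution needs of, say, pricing and shipping can vary independently rather than being forced through one entity hierarchy.

```cpp
// One interface per interaction, not one hierarchy per entity.
class Priceable {
public:
    virtual ~Priceable() = default;
    virtual double price() const = 0;
};

class Shippable {
public:
    virtual ~Shippable() = default;
    virtual double weightKg() const = 0;
};

// The entity opts in to each interaction separately; pricing code
// depends only on Priceable, shipping code only on Shippable, and
// either behaviour can be substituted without dragging in the other.
class Book : public Priceable, public Shippable {
public:
    Book(double price, double weightKg) : p(price), w(weightKg) {}
    double price() const override { return p; }
    double weightKg() const override { return w; }
private:
    double p, w;
};

// Shipping logic never learns what a Book costs.
double shippingCost(const Shippable& item) {
    return item.weightKg() * 4.5;   // assumed flat rate, for illustration
}
```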
Do a study of Design Patterns. The Head First series has a VERY good one for the layman using Java, but it’s dated. You can see the examples in a real-world context where inheritance fails, or creates many complexities as projects get larger, that can be resolved through interfaces and abstractions. Many of the principles cited here are taught in a DP course or in the book.
I think this is backwards:
“You can use a rectangle everywhere you can use a square, but you cannot use a square everywhere you can use a rectangle.”
It should read like:
“You cannot use a rectangle everywhere you can use a square (since the square is a special case of a rectangle [need all 4 sides to be of same length]), but you can use a square everywhere you can use a rectangle (since rectangles don’t have restrictions about side length, every square IS a rectangle anyway).”
Victor, you can’t use a Square anywhere you would use a Rectangle. Consider:
double MyClass::SizeOfAffectedArea(Rectangle &r); // sets r's width and height, then returns the area
If you pass a square in as the argument to receive the affected area, either its height or its width will be wrong, since the method will attempt to update them separately.
Oh, and Graham, a football is an ellipsoid.
After hitting send, of course, I realized that I should not have said:
"you can’t use a Square anywhere you would use a Rectangle"
because you “can”, but
"you shouldn’t use a Square anywhere you can use a Rectangle"
because it (probably) won’t give you the result you want.
This is one area where C++ had the right idea since it provided for Public inheritance (substitution) and Private inheritance (reuse), which you could make explicit in your code (although 90+% of programmers never realized the importance of and the differences between these two language constructs).
This is a defence of my thesis :). You can use the concept square everywhere you can use the concept rectangle, because a square is a rectangle with additional constraints. You cannot use the type square everywhere you can use the type rectangle, because a square has stricter invariants than a rectangle. The fact that this is confusing is an argument against trying to use inheritance to model both of these relationships.
That’s a handegg.
Good god I need to be like six times smarter to grok this. Clearly something very important is being said, but it’s a square peg, and my brain is being a round hole.
I don’t get it. I read one part of the article, think I get it, then I read a different part and what I read there doesn’t jibe with what I thought I understood. And I can’t figure out how to reconcile them.
One thing you really need to do is give a clearer definition of “subtype”, I think. Also, what you mean by “tighten up interface contracts.”
And then there’s also what a few others said: examples. Please provide more.
Rectangle/square: I think you just rediscovered covariance and contravariance.
As for language support: why? Numbers are abstract, but can be used for many conceptually different uses (a count, a measurement, a register value, etc). Strings also have many disparate uses (unstructured user input, an ID, a language like HTML, a password, etc).
It’s the job of the programmer to use these abstract concepts in meaningful ways. If you combine them in nonsensical ways, of course you’re going to have a bad time.
I would like to see what your idea of a language with inheritance hierarchies distinct from type hierarchies would look like. It sounds overly complex. “We just need to add a lot more explicit structure to the language” is a frequent claim, and almost always wrong. Many languages are going the route of “function” and “async function”, and that is causing no end of headaches.