The year is 2017 and people are still recommending processing out assertions from release builds.
-
Many assertions are short tests (whether or not that’s a good thing): this variable now has a value, this number is now greater than zero. Checks like these won’t cost a lot in production. Or at least, let me phrase this another way: many assertions are too cheap to affect performance metrics in many apps. Or, let me phrase that another way: most production software probably doesn’t have good enough performance monitoring to see the cost of its assertions, or performance goals constrained enough to care about that cost.
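To make “too cheap to matter” concrete, here’s a minimal sketch in C; the function and its names are hypothetical. Each assertion is a single comparison ahead of a loop that does the real work, so compiling them out of a release build buys essentially nothing.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical example of the cheap assertions described above. Each is
   one comparison, dwarfed by the loop that follows, so leaving them in
   a release build is unlikely to show up in any performance metric. */
double average(const double *values, size_t count)
{
    assert(values != NULL); /* this variable now has a value */
    assert(count > 0);      /* this number is now greater than zero */

    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
        sum += values[i];
    return sum / (double)count;
}
```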
-
The program counter has absolutely no business executing the instruction that follows a failed assertion, because the programmer wrote the subsequent instructions on the assumption that this situation would never happen. Yes, your program will terminate, leading to a 500 error/unfortunate stop dialog/Guru Meditation screen/other thing, but the alternative is to run…something that apparently shouldn’t ever occur. Far better to stop at the point where the problem is detected than to try to re-detect it later from a surprising and uninformative problem report.
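Since the standard C assert() is exactly what gets compiled out under NDEBUG, one way to keep stopping at the point of detection in release builds is an always-on check. This is a sketch, not any particular library’s API; the CHECK name is made up.

```c
#include <stdio.h>
#include <stdlib.h>

/* Unlike assert(), this macro is not removed when NDEBUG is defined,
   so a failed check still halts at the point of detection instead of
   letting the program run instructions whose preconditions are false. */
#define CHECK(cond)                                                  \
    do {                                                             \
        if (!(cond)) {                                               \
            fprintf(stderr, "check failed: %s at %s:%d\n",           \
                    #cond, __FILE__, __LINE__);                      \
            abort(); /* stop; never run the code after a failure */  \
        }                                                            \
    } while (0)

int divide(int a, int b)
{
    CHECK(b != 0); /* the division below assumes this never happens */
    return a / b;
}
```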
-
Assertions are statements that programmers believe will always hold, and it’s sensible to take issue with the words “always” and “believe”. There’s an argument that goes:
- I have never seen this situation happen in development or staging.
- I got this job by reversing a linked list on a whiteboard.
- Therefore, this situation cannot happen in production.
but unfortunately, there’s a flaw between the axioms and the conclusion. For example, I have seen the argument “items are added to this list as they are received, therefore these items are in chronological order” multiple times, and I have seen the items arrive in some other order just as often. Assertions that never fire on the inputs programmers test with give false assurance.
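Writing that belief down as an assertion is what catches the lie in production. A minimal sketch, assuming a hypothetical list of timestamped items:

```c
#include <assert.h>
#include <stddef.h>
#include <time.h>

/* Hypothetical append for the "items arrive in chronological order"
   belief above. Stating the belief as an assertion means the first
   out-of-order item stops the program at the point of detection,
   before anything downstream relies on the list being sorted. */
struct item {
    time_t received_at;
    /* ... payload ... */
};

void append_item(struct item *items, size_t *count, struct item next)
{
    if (*count > 0)
        assert(items[*count - 1].received_at <= next.received_at);
    items[(*count)++] = next;
}
```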
-
Assertion failures are also nice landmarks in crash logs, versus a report from a billion instructions later, after the program has gone off into the weeds.