Aleksei Ivanov

Not all test coverage is created equal

There is an approach called defensive programming. It is not about defending your tab-vs-space choices in front of your colleagues: it is about anticipating failure modes and writing the program in a way that handles those failures.

One of the most popular ways to do this is the try … catch construct: “try doing this, and if anything goes wrong, catch it and report it”. It is a decent approach, and whilst there are other ways to achieve the same thing (error values in Go, for example), it works out fine most of the time.
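To make this concrete, here is a minimal sketch of the try … catch style. The function and field names (`parseConfig`, `port`) are made up for illustration, not taken from any real codebase:

```javascript
// A minimal sketch of "try doing this, and if anything goes wrong,
// catch it and report it". parseConfig is a hypothetical example.
function parseConfig(raw) {
  try {
    const config = JSON.parse(raw);       // may throw on malformed input
    return { port: config.port ?? 8080 }; // use a default if the field is missing
  } catch (err) {
    console.error("config error:", err.message); // report the failure
    return { port: 8080 };                       // degrade gracefully
  }
}

console.log(parseConfig('{"port": 3000}')); // happy path
console.log(parseConfig("not json"));       // exercises the catch branch
```

The happy path is trivial to hit; the catch branch only runs when the input is malformed, which is exactly the kind of condition a test suite has to manufacture on purpose.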

Where it falls short is automated testing. The cost of building the scaffolding to reproduce these edge-case errors is just too great.

You essentially have to engineer all of the various failure modes in advance, often without even knowing what they are. An impossible task.
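One common way to engineer such a failure mode is to inject a dependency you can force to fail. This is a hedged sketch of that scaffolding; `fetchUser` and the shape of its fallback are hypothetical, chosen only to show how much machinery a single catch branch demands:

```javascript
// To test the catch branch, the failure has to be fabricated by hand:
// the HTTP client is passed in so a test can substitute a failing one.
function fetchUser(id, httpGet) {
  try {
    return httpGet(`/users/${id}`);
  } catch (err) {
    return { id, name: "unknown" }; // defensive fallback on failure
  }
}

// Production-like dependency: pretend it returns a real user.
const realGet = (path) => ({ id: 42, name: "Alice" });

// Test-only dependency, built purely to trigger the error path.
const failingGet = (path) => { throw new Error("connection reset"); };

console.log(fetchUser(42, realGet));    // normal path
console.log(fetchUser(42, failingGet)); // forced failure, fallback value
```

Every distinct failure mode (timeout, malformed response, partial write, and so on) needs its own fake like `failingGet`, and you have to know the failure exists before you can fake it.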

This is one of the reasons why 100% test coverage is a pipe dream. It is a curious trade-off: in order to make the program more resilient, you have to sacrifice formal verification of parts of it.

The other 95% of the code you can test just fine with unit and integration testing. You can even run end-to-end tests, calling live APIs — there is no problem with that.

The tricky part is the remaining 5%, which exists specifically to guard your program against unexpected failures.