Why 100% coverage???

Oct 16, 2012 at 1:51 PM
Edited Oct 16, 2012 at 1:55 PM

100% coverage does not mean anything by itself...


Nobody can remember the exact coverage of a project and notice when it decreases.

Coverage goes up and down as you remove dead code (dead features) or add new code, so it is much simpler to track 100%.

For example, in a Windows Service, ServiceBase.Run is hard to mock. I add the magic attribute ([ExcludeFromCodeCoverage]) in order to:

> Keep 100% coverage and explicitly exclude this line from coverage, so that cleaning up code does not decrease the number.

> I write an adapter to avoid having this magic attribute everywhere.

> It is easy to track this uncovered code with the R# Find Usages feature.

> 100% coverage = 0 bugs: false! If you forget a case to test, that code does not exist, so you can have 100% coverage and lots of bugs!

It's very difficult to reach 100%, and it is very difficult to mock static classes like DateTime or ServiceBase in a Windows Service (the Run method).
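For DateTime, the adapter approach can be sketched like this (a minimal sketch; the names IClock, SystemClock, FakeClock, and Greeter are mine, for illustration — only the thin pass-through wrapper carries the exclusion attribute):

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

// Hypothetical adapter interface over DateTime.Now.
public interface IClock
{
    DateTime Now { get; }
}

// Thin pass-through wrapper, excluded from coverage
// because it does nothing but delegate to the static class.
[ExcludeFromCodeCoverage]
public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

// A fake clock for tests: the time is fixed and fully controllable.
public class FakeClock : IClock
{
    public DateTime Now { get; set; }
}

// Production code depends on IClock, never on DateTime directly,
// so it stays easy to cover.
public class Greeter
{
    private readonly IClock _clock;

    public Greeter(IClock clock)
    {
        _clock = clock;
    }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```

In a test you inject a FakeClock with a fixed time; the SystemClock wrapper is trivially correct and stays out of the coverage numbers.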

There are 2 approaches:

> Moles isolation framework (a fakes framework)... but:

> Builds can take much, much longer

> The test runner can't find the files for the fakes framework (NCrunch)

> Adapter

> No build-time problems

> A lot of code to write to adapt .NET Framework classes, ...

I tested these 2 solutions and rolled back to the adapter, because it takes so much time to configure the tools so that they run quickly with Moles (nevertheless, Moles is very interesting for testing legacy code while refactoring it).
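For ServiceBase.Run, the adapter mentioned above can look like this (a minimal sketch; IServiceRunner, ServiceRunner, and Bootstrapper are hypothetical names, not from the original project):

```csharp
using System.Diagnostics.CodeAnalysis;
using System.ServiceProcess;

// Hypothetical adapter interface over the static ServiceBase.Run call.
public interface IServiceRunner
{
    void Run(ServiceBase service);
}

// Thin wrapper, excluded from coverage: it only delegates,
// so there is nothing worth testing in it.
[ExcludeFromCodeCoverage]
public class ServiceRunner : IServiceRunner
{
    public void Run(ServiceBase service)
    {
        ServiceBase.Run(service);
    }
}

// The startup logic depends on the interface, so it can be
// exercised in tests with a fake IServiceRunner.
public class Bootstrapper
{
    private readonly IServiceRunner _runner;

    public Bootstrapper(IServiceRunner runner)
    {
        _runner = runner;
    }

    public void Start(ServiceBase service)
    {
        _runner.Run(service);
    }
}
```

Only the one-line ServiceRunner carries the attribute; everything around it stays coverable without Moles.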