On Saturday, 25 July 2015 at 14:28:31 UTC, Andrei Alexandrescu wrote:
On 7/25/15 9:35 AM, Dicebot wrote:
This is absolutely impractical. I will never even consider such an attitude as a solution for production projects. If test coverage can't be verified automatically, it is garbage, period. No one will ever manually verify thousands of lines of code after some trivial refactoring just to make sure the compiler does its job.

Test coverage shouldn't totter up and down as application code is written - it should be established by the unittests. And yes, one does need to examine coverage output while writing unittests.

Does the word "refactoring" or "adding new features" ring a bell? In the first case, no one manually checks coverage of all affected code, because simply too much code is affected - yet coverage can become reduced by accident. In the second case, the developer is likely to check coverage for the actual functionality he has written - and yet coverage can become reduced in different (but related) parts of the code, because that is how templates work: template code is only counted for the instantiations that actually occur.
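The template caveat can be sketched in D (a hypothetical `clamp01` helper, not from the thread): the body of a template is only compiled - and thus only countable by `dmd -cov` - for instantiations that actually occur, so removing the last caller of some instantiation elsewhere silently erases that branch from the coverage report.

```d
import std.traits : isFloatingPoint;

/// Clamp x into [0, 1] for floating-point types; pass integers through.
T clamp01(T)(T x)
{
    static if (isFloatingPoint!T)
        // This branch exists in the source for every T, but it only
        // shows up in -cov output when some caller instantiates the
        // template with a floating-point type.
        return x < 0 ? 0 : (x > 1 ? 1 : x);
    else
        return x;
}

unittest
{
    assert(clamp01(5) == 5);  // instantiates clamp01!int only
    // clamp01!double is never instantiated here, so -cov reports the
    // floating-point branch as unexecuted - and if a refactoring in a
    // different module removes the last floating-point caller, that
    // coverage loss is silent: no test fails.
}
```

Built with `dmd -unittest -cov -main`, the listing shows the floating-point line with a count of zero, which is exactly the "reduced in different (but related) parts of code" effect described above.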

You will have a very hard time selling this approach. If the official position of the language authors is that one must manually check test coverage over and over again, pragmatic people will look into other languages.

I do agree more automation is better here (as always). For example, if a template is followed by one or more unittests, the compiler might issue an error if the unittests don't cover the template.

This isn't "better". This is the bare minimum for me to call that functionality effectively testable. A manual approach to testing doesn't work; I thought everyone had figured that out by 2015. It works better than no tests at all, sure, but it is not considered enough anymore.
