I am fully for automating whatever can be automated. In this sense tests are a good combination of:
* automated quality-control
* sample-code

Often there is no good way, for somebody new to a codebase, to distinguish between these two kinds of tests. There are tests that can be read as "runnable specifications" or "sample usage code". And there are tests that are more about the nuts and bolts of a framework: relevant to developers/maintainers of a package, but not (yet) to a potential user.

Solutions for this? Maybe categorizing tests accordingly? Either through packages/method-categories, or pragmas?

I am also glad whenever I find an "example" method-category for a core class in a package.

My 2 cents.

2012/12/4 Sven Van Caekenberghe <[email protected]>:
> What makes me value tests a bit above documentation (but not so much
> that I am saying that documentation is not important), are two things:
>
> - tests are machine enforceable/controllable
> - you can find them exactly using senders or references
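The pragma idea could look roughly like this in Pharo. This is only a sketch: the class, the test methods, and the pragma names (`exampleUsage`, `frameworkInternal`) are all made up here as an illustration of the convention, not an existing API. Only `hasPragmaNamed:`, used to filter the methods afterwards, is standard Pharo reflection.

```smalltalk
"Hypothetical convention: tag each test with a pragma saying
which audience it serves."

MyJsonParserTest >> testParseSimpleObject
    <exampleUsage>
    "Doubles as sample usage code for a potential user."
    | result |
    result := MyJsonParser parse: '{"a": 1}'.
    self assert: (result at: 'a') equals: 1

MyJsonParserTest >> testTokenizerBufferRefill
    <frameworkInternal>
    "Nuts-and-bolts test, relevant mainly to maintainers."
    self assert: (MyJsonParser tokenizerBufferSize > 0)
```

A tool (or a curious reader in a workspace) could then collect just the sample-usage tests with standard reflection:

```smalltalk
MyJsonParserTest methods
    select: [ :each | each hasPragmaNamed: #exampleUsage ]
```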
