Fawzi Mohamed fa...@gmx.ch wrote:
On 18-nov-10, at 09:11, Don wrote:
Jonathan M Davis wrote:
On Tuesday, November 16, 2010 13:33:54 bearophile wrote:
Jonathan M Davis:
Most of the rest (if not all of it) could indeed be done in a
library.
I am not sure it could be done nicely as well :-)
That would depend on what you're trying to do. Printing test
success or failure is as simple as adding the appropriate scope
statement to the beginning of each unittest block. A bit tedious
perhaps, but not hard.
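For instance, a minimal sketch of that approach (the pass/fail
messages here are just illustrative, not any fixed convention):

    import std.stdio;

    unittest
    {
        // scope(success)/scope(failure) fire when the block exits
        // normally or via a thrown assertion, respectively.
        scope(success) writeln(__FILE__, ": unittest passed");
        scope(failure) writeln(__FILE__, ": unittest FAILED");

        assert(1 + 1 == 2);
    }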
Right now unit tests follow the Unix convention of saying nothing
on success,
That's a usability failure. Humans expect feedback, because
otherwise you can't tell unittests that ran and succeeded apart
from unittests that never ran at all. That Unix convention is bad
here. And Unix commands sometimes have a -v (verbose) option that
gives feedback, while D unittests don't have this option.
I'm afraid that I have to disagree there. Having all of the
successes print out would, in many cases, just be useless output
flooding the console. I have no problem with making it possible
for unit tests to report success, but I wouldn't want that
to be the default. It's quite clear when a test fails, and
that's what is necessary in order to fix test failures.
I can see why a beginner might want the positive feedback that a
test has succeeded, but personally, I would just find it annoying.
The only real advantage would be that it would indicate where
in the unit tests the program was, and that's only
particularly important if you have a _lot_ of them and they
take a long time to run.
I think "%d unit tests passed in %d modules"
would be enough.
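Something close to that summary can be sketched with druntime's
Runtime.moduleUnitTester hook (assuming current druntime; note the
runtime exposes only one test entry point per module, so this counts
modules rather than individual unittest blocks):

    import core.runtime;
    import std.stdio;

    version (unittest) shared static this()
    {
        Runtime.moduleUnitTester = function bool()
        {
            size_t passed, total;
            foreach (m; ModuleInfo)
            {
                if (m is null) continue;
                auto test = m.unitTest; // null if the module has no tests
                if (test is null) continue;
                ++total;
                try { test(); ++passed; }
                catch (Throwable e)
                    writefln("%s: FAILED: %s", m.name, e.msg);
            }
            writefln("%d of %d modules passed unittests", passed, total);
            return passed == total; // false stops the program before main
        };
    }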
This was already discussed; I think the optimal solution would be to
have a testing function a bit like Tango's, where the testing function
knows the name of the module it belongs to. Tango's always prints the
module name, but that is easy to change.
What I use is my own testing framework; in it I have defined a
default main function that checks command-line arguments, so that one
can, for example, pass --print-level=error and see only the errors...
See http://dsource.org/projects/blip/wiki/BlipOverview for an example
of using it.
This means having a special file to compile, one that generates an
executable dedicated to testing, but that maps well to how I do tests.
In fact, I often keep the tests separate from the code; I even hide
them behind templates in some cases to avoid excessive template
instantiation, because they are large and would slow down
compilation...
The current default unittest function runs very early (before main),
so it cannot make use of command-line arguments (which is correct,
because in the current model unittests can be activated for *any*
executable and should not disturb its run).
It should be possible to write a test function that just sets things
up for a later, real unittest run that starts from main and can
parse the command-line arguments, thus resolving all these
discussions...
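A sketch of that scheme, again assuming druntime's hooks: disable the
early run, then fire the tests from main, where arguments are
available (runAllUnittests and --verbose are made-up names, purely for
illustration):

    import core.runtime;
    import std.stdio;

    version (unittest) shared static this()
    {
        // Do nothing at startup; return true so execution reaches main.
        Runtime.moduleUnitTester = function bool() { return true; };
    }

    void runAllUnittests(bool verbose)
    {
        foreach (m; ModuleInfo)
        {
            if (m is null) continue;
            auto test = m.unitTest;
            if (test is null) continue;
            if (verbose) writeln("testing ", m.name);
            test();
        }
    }

    void main(string[] args)
    {
        bool verbose;
        foreach (a; args[1 .. $])
            if (a == "--verbose") verbose = true;
        version (unittest) runAllUnittests(verbose);
    }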
You can access the command-line args via Runtime.args. This works
within unittests.
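For example (a trivial sketch; --verbose is just an illustrative
flag):

    import core.runtime;
    import std.stdio;

    unittest
    {
        // Runtime.args is usable even though main has not run yet.
        foreach (a; Runtime.args)
            if (a == "--verbose")
                writeln("verbose unittest run requested");
    }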