Daniel John Debrunner wrote:
Vemund Ostgaard wrote:
When running the top-level suite now, the only output I got was a
lot of dots, the time it took to run the suite, and the number of
tests that passed.
Has anyone considered a framework or interface for logging events
and information in the JUnit tests? I think it will be very
difficult in the future to analyse test failures based only on the
text from an assertion failure or exception, especially when we
start porting multithreaded stress tests and other more complicated
parts of the old suite.
<snip>
The Swing test runner shows more information about which tests have
run and passed. Remember that the textui test runner is just a simple
test runner.
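For example, both runners ship with JUnit 3 and can drive the same
suite. A minimal sketch (the RunnerDemo class and its empty
placeholder suite are hypothetical, not an actual Derby suite):

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class RunnerDemo {
        // Placeholder suite; a real Derby suite would be returned here.
        public static Test suite() {
            return new TestSuite("placeholder");
        }

        public static void main(String[] args) {
            // Console runner: prints dots, the elapsed time, and the result summary.
            junit.textui.TestRunner.run(suite());
            // Swing runner: opens a GUI that lists each test as it runs.
            junit.swingui.TestRunner.run(RunnerDemo.class);
        }
    }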
I would encourage investigating what others have done in this area
and seeing if there are better test runners, rather than inventing a
new mechanism. I would like the Derby JUnit tests to continue to run
like other JUnit tests so that others can run and integrate them
easily. If the default run of Derby's JUnit tests produces a lot of
output, that will confuse others who are used to the model where a
test produces no output if it succeeds.
The default behaviour could be to write nothing, if that is what we want,
while still making it possible to get a log from the tests for those who
want one. With, for instance, the logging API that I suggest, the log
statements will be in the code, but whether the log actually gets
printed anywhere (console or file) can be controlled dynamically with
properties. Different runners can give more detailed information on the
exact tests that were run, but no runner will give us insight into what
each test actually did.
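To sketch what I mean (assuming java.util.logging here; the test class
and logger name are placeholders, not a worked-out proposal), the log
statements stay in the code while configuration decides whether they
are written anywhere:

    import java.util.logging.Logger;
    import junit.framework.TestCase;

    public class SampleLoggingTest extends TestCase {
        // Hypothetical logger name; a real naming convention would be agreed on first.
        private static final Logger LOG =
            Logger.getLogger("org.apache.derbyTesting.SampleLoggingTest");

        public void testUpdateCount() {
            LOG.fine("starting testUpdateCount");
            int rows = 3; // stand-in for real work against the database
            LOG.fine("update touched " + rows + " rows");
            assertEquals(3, rows);
        }
    }

By default java.util.logging discards FINE messages, so a plain run
stays silent; starting the JVM with -Djava.util.logging.config.file
pointing at a configuration that enables FINE turns the output on
without changing the test.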
I very much agree that the tests should run like other JUnit tests and
not be confusing when others want to integrate them into
JUnit-compatible environments.
I think the type of testsuite you have and the breadth of test execution
you do influence the need for logging in the tests. If you have a
testsuite of "unit like" tests that each make a single method call into
the product (as JUnit was intended for), the exception thrown from a
test will probably give you the information you need to analyze what
happened. If you have a testsuite of many complex/long/multithreaded
tests run within a variety of configurations and decorators, errors can
be difficult to analyze, and you will more often have to put some debug
output in the test and rerun it to get what you need.
Tests are run nightly on several platforms and JVMs. When the JUnit
suite grows to a size comparable with the old testsuite and beyond,
there will be failures to analyze every night. If these failures are
accompanied by a sensible amount of logging, it will ease the work of
the analyzer, who often will not be the original author of the test.
The tests will not be perfect; there will be situations where an
exception that should be thrown is not, and instead creates
difficulties for the tests that follow. Such issues will also be very
costly to track down without a better view into what each test actually
did.
A last issue is that when you make changes to a test, it is good to get
some verification that the changes actually had the desired effect.
Being able to look at the log from the nightly run of tests and verify
that the output from your fix got printed is a nice way of doing just
that. We all develop on different systems, so even when it worked on
yours, it might not have the same effect on the machine it is failing
on in the nightlies. You may not even be able to reproduce the problem
on the machines you have access to, and will depend on getting more
output when it happens (perhaps intermittently) in others' testing.
If you think about the old harness, how often have you made use of
information from the test logs that you wouldn't have gotten from just
an exception being thrown?
Vemund