Tiago Espinha wrote:
Hmm, if there's code covered by insane builds which is not covered by
sane builds, is that really a problem?
I continue to regret the terminology (it was chosen when the terms were
only used internally, in a closed-source environment), and sometimes I
mistype it.
sane = extra checking debug build
insane = no checking, fastest build, expected to be used in production
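To make the coverage implication concrete: Derby guards its debug-only
checks behind a compile-time constant, so the guarded blocks simply do
not exist in insane jars. Here is a minimal sketch of the pattern,
using a stand-in for Derby's real SanityManager class (whose package
has moved between releases, so I am not showing an import):

    // Stand-in for Derby's SanityManager; illustrative only.
    final class SanityManager {
        // The Derby build generates this constant as true for sane
        // jars and false for insane jars; javac then drops any block
        // guarded by "if (SanityManager.DEBUG)" from the insane
        // class files.
        public static final boolean DEBUG = true;

        public static void ASSERT(boolean condition, String message) {
            if (!condition)
                throw new AssertionError(message);
        }
    }

    public class SaneVsInsane {
        static int halve(int n) {
            if (SanityManager.DEBUG) {
                // Debug-only path: present only in sane jars, so it
                // can never show up as covered in an insane run.
                SanityManager.ASSERT(n % 2 == 0, "expected an even value");
            }
            return n / 2;
        }

        public static void main(String[] args) {
            System.out.println(halve(10)); // prints 5
        }
    }

Since the guarded blocks are compiled out of insane jars, a coverage
run against insane jars can never report them, which is why the
sane/insane choice shifts the EMMA numbers.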
No, I would not worry about this issue. I was worried about the
opposite issue: code covered by sane builds that is not covered by
insane builds.
We would just have to make sure
that regressions are run against insane jars and all would be well. Or
am I missing something?
My previous comments assumed that insane was being run, which I believe
is the case for the automated reports. I think sane runs are preferable,
as they will include coverage of the debug-only code paths.
I think it is very rare to have a code path that is covered in an
insane run but not in a sane run, while a good percentage of the code
paths that are covered in a sane run are not covered in an insane run.
Yesterday Siddharth was also trying to obtain the EMMA reports, and he
was getting different results from those of the automated tests. It
turns out he was indeed running the tests against sane jars, and he was
running derbyall instead of suites.All. If I'm not mistaken, suites.All
is the one that picks up all the JUnit tests, whereas derbyall runs the
old harness tests. I was actually surprised that derbyall produced
results at all...
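(For anyone trying to reproduce the numbers: as I remember it, the two
suites are launched quite differently, roughly along these lines, with
derbyTesting.jar and a JUnit jar on the classpath; the class names are
worth double-checking against your checkout:

    java junit.textui.TestRunner org.apache.derbyTesting.functionTests.suites.All
    java org.apache.derbyTesting.functionTests.harness.RunSuite derbyall

so it is easy to launch the wrong suite without noticing.)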
I think we used to "add" the results of the derbyall run to the
suites.All run. It would be interesting to understand which code paths
are covered by derbyall and not by suites.All, which might then
identify the best candidates for test conversion from the old harness
to JUnit.
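If anyone wants to try that combination by hand, EMMA can merge the
runtime data from the two runs and report on the union. A sketch,
assuming EMMA's default file names (coverage.em for the metadata,
coverage.ec for each run's runtime data, renamed here to keep the two
runs apart):

    java -cp emma.jar emma merge -in derbyall.ec,suitesAll.ec -out combined.es
    java -cp emma.jar emma report -r html -in coverage.em,combined.es

Comparing that report against one generated from the suites.All data
alone should highlight the code paths that only derbyall reaches.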