Hi,

you may have noticed that Helge did a blog post yesterday showing some statistics about automated testing. [1] It is indeed great to see that the quality of our builds is improving rapidly.


Well .. the blog came in just as my own test run on the German localized m6 on Linux had finished. I checked the results locally (using the tools/makesummary.pl script). This revealed that I had 5 errors and 22 warnings. (Still quite a good result, considering that I'm doing remote testing with my own environment.) One of the errors was expected, as the testtool found issue 100235 [2], which has been escalated as a stopper.
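For anyone who wants to do a similar quick check without the script at hand, here is a minimal sketch of the idea. This is not the actual tools/makesummary.pl; the log file names come from the command line and the "Error"/"Warning" patterns are assumptions that would need to be adapted to the real testtool output format:

  #!/usr/bin/perl
  # Hypothetical sketch (not tools/makesummary.pl): count errors and warnings
  # in plain-text testtool result logs passed as arguments. Assumes each
  # finding appears on its own line containing "Error" or "Warning".
  use strict;
  use warnings;

  my ($errors, $warnings) = (0, 0);

  foreach my $file (@ARGV) {
      open my $fh, '<', $file or do { warn "cannot open $file: $!\n"; next };
      while (my $line = <$fh>) {
          $errors++   if $line =~ /\bError\b/i;
          $warnings++ if $line =~ /\bWarning\b/i;
      }
      close $fh;
  }

  print "errors: $errors, warnings: $warnings\n";

Run it as "perl summary.pl *.res" (or whatever your result files are called) to get the totals before uploading.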

Ok - so although I did not match the "0" errors shown in QUASTE for the English builds, I went on and uploaded my results.

And then I was quite surprised: QUASTE showed *green* for the German builds. My 5 errors, including one showstopper, do not matter for the QUASTE status. (I checked whether they had been submitted correctly - they had. QUASTE simply does not look at the results of these particular tests.)

I wonder what it is worth to run the tests if the tool that is supposed to publish the results is likely to ignore errors. Not only did I run several tests that are not analysed, even a stopper issue has been ignored!

Looking at the current discussion about the number of stoppers in the 3.1.0 code line [3], it seems really critical that we cannot rely on our tooling.

André




[1] http://blogs.sun.com/GullFOSS/entry/results_of_automated_tests_for
[2] http://www.openoffice.org/issues/show_bug.cgi?id=100235
[3] http://www.mail-archive.com/[email protected]/msg11251.html
