Hi Mathias,
Mathias Bauer wrote:
Jörg Jahnke wrote:
But I agree that a proper selection of tests is a good idea. Perhaps a
user should be able to call e.g. "dmake regressiontests -run:sw,basic"
to execute special tests for the writer and the basic.
You misunderstood me. I took that for granted.
It was not. The problem with optional regression tests is that not
everyone executes them, so regressions slowly creep into all optional
tests. Sooner or later we reach a situation where the user can't be sure
whether a failed test means he introduced that regression in his CWS or
whether it was already present in the MWS. He must then invest time to
clarify this. That can be a tiresome process, which is why such tests
should remain optional.
Instead, the plan is to have a defined set of tests that everyone has to
run and that are guaranteed to succeed unless a developer introduces a
regression. That way it is immediately clear after the test run whether
the tests succeeded, and a failure means that a regression was
introduced via the CWS.
So we'd have the following new tasks to execute on a CWS:
- run "dmake regressiontest" (or something like this)
- wait until the tests finish
- if the result says OK, you are done
- if the result says failed, check the log file to see where the tests
failed, fix the bug, then start over
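The steps above could be scripted roughly like this. This is only a
sketch: the log file name and its "FAILED" marker format are my
assumptions, not a fixed interface (and the log here is fabricated for
illustration, since in reality "dmake regressiontest" would write it):

```shell
#!/bin/sh
# Sketch of the proposed CWS check. In the real workflow
# "dmake regressiontest" would produce regressiontest.log;
# here we fabricate one so the decision logic is runnable.
LOG=regressiontest.log
printf 'sw_table_test ... OK\nsw_layout_test ... FAILED\n' > "$LOG"

# A mandatory, guaranteed-green test set means any FAILED line
# points at a regression introduced by this CWS.
if grep -q 'FAILED' "$LOG"; then
    echo "regression found - fix the bug, then start over:"
    grep 'FAILED' "$LOG"
else
    echo "OK - no regression introduced by this CWS"
fi
```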
But I also think that before starting with the tests we should
carefully select which tests we put into the different categories. I
don't want to have 20 Writer tests that just test "something in
Writer", run for 8 hours, but don't cover the code that is usually
prone to regressions.
What you seem to assume (correct me if I am wrong) is that there can or
should be a set of TestTool tests able to find regressions in the code
areas prone to them. But such regression testing of code segments falls
into the area of Unit Testing, not the System Testing that the TestTool
performs (see http://en.wikipedia.org/wiki/Software_testing for more
information).
So the TestTool works on a different level of testing and finds bugs
different from those you seem to have in mind. But that does of course
not mean that these tests are worthless. I think we have a chance to
improve the System Testing for OOo by introducing automated tests on
CWSs as described on the Wiki page. That can - and should - be done in
parallel with any efforts to intensify the Unit Testing for OOo. Doing
one does not make the other unnecessary.
Besides that, I doubt that there is a representative set of test cases
that could be considered "typical for Writer regressions". The
regressions we had in the past have always been in completely different
areas of the code, and where to look depends strongly on the changed
code. So having "Writer" as a category is too broad. I'm thinking of
categories like "text formatting", "tables", "layout", "graphics",
"objects" etc.
See above. I think we are talking about different levels of testing.
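For what it's worth, dispatching on categories at that granularity
could look roughly like the sketch below. The category names are the
ones Mathias lists; how they map to actual TestTool scripts is an
assumption for illustration only:

```shell
#!/bin/sh
# Hypothetical dispatch for a comma-separated category list, as in
# "dmake regressiontest -run:tables,layout". The mapping from category
# name to concrete test scripts is assumed, not an existing interface.
run_categories() {
    for cat in $(printf '%s' "$1" | tr ',' ' '); do
        case "$cat" in
            text_formatting|tables|layout|graphics|objects)
                echo "running Writer tests: $cat" ;;
            *)
                echo "unknown category: $cat" >&2 ;;
        esac
    done
}

run_categories "tables,layout"
```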
Regards,
Jörg