Hi Mathias,
Mathias Bauer wrote:
>> So I do not think that it makes sense to discuss only the 'release
>> testing' mode. In the past, the regressions were integrated before the
>> QA started switching into this mode.
> I'm not sure you understood my concern. Let me put it simply: what
> makes us think that the tests we are talking about, which AFAIK have
> been used in QA for testing the master for quite some time, will help
> to find regressions that currently go unnoticed?
You are talking about finding all regressions. I am talking about
finding the most important regressions, the ones that could become P1
issues once integrated into the master. The mandatory tests are meant
to catch these errors.
Beyond that, you should get a mechanism to run more automated test
scripts than just the mandatory ones. Then you can check your
implementation more effectively and find the regressions in your
feature.
>> When I read all your mails, I think you know which code brings in the
>> regressions ;-)
> I know some places, yes, though of course not all of them. But I don't
> know which tests we could use to find them. Many of the regressions I
> remember couldn't be found by automated GUI testing, as they manifested
> themselves as more or less subtle formatting differences.
You should talk with Helge, who does the test automation for Writer,
about these points. I think we will find ways to catch these
regressions. If it isn't possible with TestTool, then we should use
ConvWatch or something we have yet to develop. We have so many tools,
but nobody knows when and how each of them should be used. But that is
another problem.
>> As I wrote a few mails ago, my suggestion is to start with only a
>> small set of mandatory tests, but to provide a way to select testing
>> areas. Then you can run dedicated tests on your implementation, and
>> you will not run toolbar tests on your bugfix for automatic styles.
> Anything else would be insane; I took that for granted. But I would
> also want to believe that running several hours of tests for e.g.
> automatic styles would be worth the effort. This is a good example
> where I suspect that possible regressions would stay unnoticed by
> automated GUI testing. But
Do not forget that the automated tests found the XML file format error
that was introduced by automatic styles (found on the master, because
the tester didn't think that a file format problem could be
integrated), and that some other errors were found before integration.
> of course that's open for debate. And exactly this debate is what I
> want to see happening. So let's wait until the proposed test cases are
> published and until we have verified that they run reliably. Then the
> QA and development engineers of the different teams can investigate
> them and decide whether they make sense or whether we can create other
> tests that serve the desired purpose better. *Then* we can decide
> whether we want to run the tests more frequently or even make them
> mandatory.
Jogi sent out the link to the tests. Can we start to haggle over the
mandatory tests? ;-)
As I have written many times, and here again: the time shouldn't be the
problem, because the developer and the QA engineer do not stop working
while a machine runs the tests.
Thorsten
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]