Thorsten Ziehm wrote:

>> There is something else that should be thought-provoking: AFAIK most or
>> nearly all discovered regressions we had on the master in the last
>> releases haven't been found by the existing automated tests. They have
>> been found by manual testing of users. So what makes us think that
>> applying the existing test cases earlier and more often will help us to
>> find these regressions? For me this is a hint that we might need at
>> least additional or even other tests if we wanted to test efficiently.
>> I'm not sure about that but it would be careless to ignore this fact.
> 
> You are right, not all regressions in the master were found by the
> automated tests. But some of them would have been found if more tests
> had been mandatory. In the past only two smaller tests were mandatory
> for approving a CWS. Many testers run more than these tests, but not
> all. Therefore some regressions went into the master that could have
> been identified by the test cases.

I didn't talk about tests on a CWS (I know that we only have a few
mandatory tests), I was talking about the regressions that haven't been
detected by the "release testing" on the master.

We should try to identify tests that are able to detect regressions in
code that we know is prone to them. I don't want to make a test
mandatory if it tests code that most probably will not create a single
regression in five years or so.

> So mandatory tests will help to identify more regressions before
> integration of a CWS, but not all. That is right and cannot be denied.

I know that each and every test I can write is able to find bugs and
regressions. That is correct but nevertheless a trivial statement that
of course no one would deny. But for the same reason that QA nowadays
does not execute each and every test we have, I think we should make a
proper selection of what we perhaps want to make mandatory. Efficiency
is important. Hence my insistence on first discussing and selecting the
tests and then deciding how to deal with them. If you think that the 45
test cases identified by the QA team are a proper selection, we should
have a closer look at them and identify which code they test.

> 
>> So currently I don't know where this discussion will end. If the
>> expected result is a confirmation that developers agree to execute
>> some arbitrary tests, not yet known, to test something not yet
>> defined, I doubt that it will come to an end. But if we are talking
>> about tests that are reliable, not too slow, and specifically
>> designed to investigate the "dark corners" that are known to produce
>> regressions more frequently, I think that wouldn't meet a lot of
>> resistance. But that's not where we are now.
> 
> I don't think so.

Maybe one reason for the confusion created by this discussion is that
the findings gained earlier haven't been made available. But that's
just a wild guess.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to "[EMAIL PROTECTED]".
I use it for the OOo lists and only rarely read other mails sent to it.
