Bernd Eilers wrote:

> Mathias Bauer wrote:
>> Hi Thorsten,
>> 
> [... snip ...]
> 
> 
>> And also again: let's stop this discussion (and especially allegations
>> obviously caused by misunderstandings) until we have got information
>> about what the tests cover exactly and if this is what we want.
> 
> Do you think it makes sense to break this "what do they cover?" question 
> down to the source file level, e.g. stating exactly which CXX, HXX, Java 
> files, etc. are being covered, or do you think more of a functional 
> coverage description?

I don't believe that code coverage at the source level makes a lot of
sense, but let's move that discussion elsewhere, as I think it is not
relevant to our current topic.

Meanwhile I discovered that my understanding of the goals of the tests
we are talking about was partially wrong. So I stand corrected wrt. what
I said about the actual content of the tests - what we have now is OK as
a starting point. The additional integration testing I wanted to
achieve, which should address the regressions we had in the RC builds of
the last releases, indeed has to be tackled differently, preferably by
integration tests based on more direct code access. But this is not a
contradiction to Jörg's proposal (as I erroneously assumed).

It also seems that the remaining concerns I have, and my wishes for how
they should be addressed, are shared by Thorsten Ziehm and his team, as
Thorsten and I found out in a short face-to-face meeting. My goal is
that developers should not have more work to do than now. Developers
shouldn't be forced to find out why a test failed, as is unfortunately
still necessary for our performance tests and tinderbox builds,
something that causes a lot of annoyance. Installing more annoying
procedures is something we should avoid, not least because it is
counterproductive wrt. our goal of attracting more developers.

Here's my understanding of the intended workflow:

Developers "press a button" and then only get something back if there
really is a regression they introduced. If the test fails for other
reasons, the test or test system maintainers will make sure that the
test runs properly again. But according to Thorsten Ziehm, the odds are
low that this will happen anyway.

Whether the tests run a few hours or more is of minor importance, as
this isn't time that a developer has to spend. And whether a CWS needs
three weeks, or three weeks and one day, before the manual testing can
start doesn't matter much either.

So I agree with Jörg: as soon as the system is available (together with
any additionally needed hardware), let's go for a pilot and see if it
works for us.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to "[EMAIL PROTECTED]".
I use it for the OOo lists and only rarely read other mails sent to it.
