It seems to me we could leverage each other's resources and allocate platforms across the two groups. I would *love* to get a nightly report of test results and test failures. Why? Because then I feel each developer could pull back on the amount of testing we have to do before checking in or submitting a patch. We could identify a set of MATS and a policy around running them prior to checkin, and shorten the checkin lifecycle considerably.

It would be even more effective if we had a "tinderbox" approach, where the tinderbox machine pulls changes, builds, and runs tests continuously. The sooner a failure is caught, the easier it is to track down. Having each developer run full regressions before checkin is obviously the most thorough way to catch problems, but when those runs take hours and hours, we waste precious developer resources and it becomes harder to turn around fixes and patches.
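
To make that concrete, the loop I have in mind is roughly the sketch below. It is only an illustration: the working-copy path, the ant target, the harness class and suite names, and the mail addresses are assumptions to be replaced by whatever the real setup uses, and the classpath is assumed to already be set in the environment.

# tinderbox.py -- minimal continuous pull/build/test loop (sketch, Python 3).
# All paths, targets, class names, and addresses below are placeholders.
import subprocess, time, smtplib
from email.mime.text import MIMEText

WORKDIR = "/home/tinderbox/derby/trunk"    # hypothetical svn working copy
NOTIFY = "derby-dev@db.apache.org"         # where failure mail would go

def run(cmd):
    # Run a command inside the working copy; return (exit code, combined output).
    p = subprocess.run(cmd, cwd=WORKDIR, capture_output=True, text=True)
    return p.returncode, p.stdout + p.stderr

def mail(subject, body):
    # Send a plain-text notification; assumes a mail relay on localhost.
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = "tinderbox@example.org"
    msg["To"] = NOTIFY
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

last_rev = None
while True:
    run(["svn", "update"])
    _, info = run(["svn", "info"])
    rev = [l.split()[1] for l in info.splitlines() if l.startswith("Revision:")][0]
    if rev != last_rev:                        # only rebuild when something changed
        rc, out = run(["ant", "buildjars"])    # assumed build target
        if rc == 0:
            rc, out = run(["java",
                           "org.apache.derbyTesting.functionTests.harness.RunSuite",
                           "derbyall"])        # assumed harness entry point and suite
        if rc != 0:
            mail("tinderbox failure at r" + rev, out[-5000:])  # tail of the log
        last_rev = rev
    time.sleep(15 * 60)                        # poll every 15 minutes

The point is just that a bad revision gets noticed within minutes of the commit, instead of hours later on a developer's machine.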

David

Ole Solberg wrote:
Hi,

We also build and test on a few platforms daily and could provide those results.

My level of ambition would be just to send out the results without any deep analysis (just catching and filtering obvious local setup/environment blunders, etc.).

I think communicating daily regression test results could be a good way to present the state of Derby.
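
By "catching and filtering" I mean nothing more sophisticated than the sketch below; the patterns are only examples of the kind of obvious local problems I have in mind, and a real list would grow from experience with our own machines.

# filter_report.py -- drop obviously-local failures from a daily report (sketch).
import re, sys

# Failures matching these patterns are almost certainly local setup problems,
# not Derby regressions; the patterns here are illustrative only.
LOCAL_NOISE = [
    re.compile(r"java\.net\.UnknownHostException"),
    re.compile(r"No space left on device"),
    re.compile(r"java\.lang\.OutOfMemoryError"),
]

def is_local_noise(failure_text):
    return any(p.search(failure_text) for p in LOCAL_NOISE)

def main(report_path):
    # Assumes a plain-text report with one failure per blank-line-separated block.
    blocks = open(report_path).read().split("\n\n")
    kept = [b for b in blocks if b.strip() and not is_local_noise(b)]
    print("%d failures left after filtering local noise" % len(kept))
    print("\n\n".join(kept))

if __name__ == "__main__":
    main(sys.argv[1])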



Ole


Wed, 04 May 2005 Myrna van Lunteren wrote:

Hi...

At IBM we build the jars & run the tests nightly on a small set of
platforms...we could work on sending an automated list of the failures
to the community...

I'd propose sharing the list of failures from the insane jar runs with Sun's
JDK 1.4.2 on Windows and (SuSE) Linux (barring unexpected machine
outages)... I won't promise an actual analysis, but proactive
individuals could volunteer...
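
Roughly, the automated part could be as simple as the sketch below; the "FAIL" marker, the report format, and the addresses are assumptions standing in for whatever our harness actually produces.

# nightly_failures.py -- mail an automated list of failing tests (sketch).
# The report format, the "FAIL" marker, and the addresses are placeholders.
import smtplib, sys
from email.mime.text import MIMEText

def failed_tests(report_path):
    # Collect lines that look like failures from the nightly harness report.
    return [line.rstrip() for line in open(report_path) if "FAIL" in line]

def main(report_path, platform):
    failures = failed_tests(report_path)
    msg = MIMEText("\n".join(failures) or "No failures.")
    msg["Subject"] = "[nightly] %s: %d failing tests" % (platform, len(failures))
    msg["From"] = "nightly@example.org"
    msg["To"] = "derby-dev@db.apache.org"
    with smtplib.SMTP("localhost") as s:   # assumes a local mail relay
        s.send_message(msg)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])         # report file + platform label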

Is there interest in this?

Myrna

On 5/4/05, Ole Solberg <[EMAIL PROTECTED]> wrote:

Hi,

Are regression test results for "head of trunk" of Derby available
somewhere? (I.e. tests run at some specific svn revision of Derby.)

I am asking because I would like to compare my own test results with the
current "official" state of Derby, e.g. to determine
- whether errors I see are due to problems with my own setup/environment, and
- which errors are *expected* on the revisions where the regression
tests were run.
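
Concretely, given a plain list of failing test names from both sides for the same svn revision, the comparison I would like to make is no more than the sketch below (the file arguments and their format are made up):

# compare_failures.py -- split my failures into "local" vs. "expected" (sketch).
import sys

def load(path):
    # One failing test name per line.
    return set(line.strip() for line in open(path) if line.strip())

mine = load(sys.argv[1])       # my list of failing tests for revision N
official = load(sys.argv[2])   # the published list for the same revision

print("Probably my own setup/environment:")
for t in sorted(mine - official):
    print("  " + t)

print("Expected at this revision (also fails in the official run):")
for t in sorted(mine & official):
    print("  " + t)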

Regards
Ole Solberg
