Bernt M. Johnsen wrote:
> Hi all,
>
> The last week or two, derbyall has failed a lot of the time. This
> makes it hard to verify that I haven't goofed up things when doing
> changes in the code (I run derbyall daily as a quality measure). To
> be able to work efficiently, many of us are dependent on a reliable
> test suite.
>
> So, I urge everyone who submits a patch to Derby to run derbyall on
> at least one platform/vm and not only the "relevant" tests for the
> patch, even if you "know" that it should not be necessary. I think
> this would save us all a lot of work (even more time saved for the
> committers, I guess :-).
This is good advice, especially for people new to the project. Knowing which tests might be affected by a fix comes with experience, and experience is gained by running derbyall and seeing what fails. In some ways it's easier just to run derbyall; I try to do that for all my contributions, usually running it overnight.

I think Satheesh and I have requested that with any patch the contributor indicate which test suites have been run, and not just for the first submission of a patch: if the patch is reworked, the follow-up e-mail should again indicate which tests have been run. I tend to look only at the most recent submission of a patch, not its complete history. I think Satheesh also said that he would only look at patches that do indicate which tests have been run.

Another job for the committers is to stop applying patches once the number of tests failing in derbyall passes some threshold, and then only commit (or revert) patches that resolve the failures. Looking at the latest results from the Sun group, we have an average pass rate of around 98.7% (~8 tests failing) and a minimum pass rate of around 98.0% (12 tests failing). I'm not sure what the threshold should be, but we may be close to it at this point. One or two failing tests are fairly easy to work around when submitting patches; 8-12 are much harder (just remembering which ones fail and comparing against them). Our goal should be 100% passing, all the time.

Dan.
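As a footnote, the pass rates quoted above imply a suite of roughly 600-615 tests. A quick sanity check of the arithmetic (the suite sizes here are my inference from the quoted figures, not something stated in the mail):

```python
# Back-of-envelope check of the derbyall pass rates quoted above.
# The totals (~615 and ~600 tests) are assumptions inferred from the
# figures; the actual derbyall test count is not stated in the mail.
def pass_rate(total_tests, failing):
    """Percentage of tests passing out of total_tests."""
    return 100.0 * (total_tests - failing) / total_tests

print(round(pass_rate(615, 8), 1))   # ~98.7, the quoted average (8 failures)
print(round(pass_rate(600, 12), 1))  # 98.0, the quoted minimum (12 failures)
```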
