[ 
http://issues.apache.org/jira/browse/DERBY-1116?page=comments#action_12370596 ] 

David Van Couvering commented on DERBY-1116:
--------------------------------------------

Sorry, I should have made it clear that this was more a guideline than a requirement.

The issue is that if we all generally agree on a guideline for which tests to 
run, then someone who *doesn't* run those tests can cause failures for other 
developers.  That's what I meant about "blocking others": their derbyall 
starts failing because I chose to run fewer tests and missed something, which 
is the same problem you mentioned about breaking derbyall and impacting 
others across multiple timezones.  If Tomohito runs derbymats (a proposed 
smaller set of tests) rather than derbyall, then I can feel more comfortable 
running derbymats myself, rather than derbyall.
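
(As a concrete sketch, and assuming derbymats would be built the same way as 
the existing harness suites, which as far as I know are defined as properties 
files under org.apache.derbyTesting.functionTests.suites: a hypothetical 
derbymats suite would then be run just as derbyall is run today, e.g.

    java org.apache.derbyTesting.functionTests.harness.RunSuite derbymats

The name derbymats and what it would contain are only the proposal under 
discussion here, not something that exists yet.)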

I agree that it is up to the contributor to decide what is sufficient, but if 
we had a general agreement on what the Big Suite to run prior to checkin 
contains, and that is a smaller set of tests than derbyall, then I think we 
can be more productive overall.

I do agree that if there are five submissions between tinderbox runs, it's 
hard to tell which one caused the failure, but that is still a lot easier 
than when we *didn't* have tinderbox tests and might not find out for a day 
or two that there was a problem.  I think that's why we all so diligently ran 
derbyall: each of us was taking responsibility for running the full 
regression tests.  Now that Ole's automated suites are doing that heavy 
lifting for us, we can pull back a bit on what we *recommend* developers do 
prior to committing a patch.


> Define a minimal acceptance test suite for checkins
> ---------------------------------------------------
>
>          Key: DERBY-1116
>          URL: http://issues.apache.org/jira/browse/DERBY-1116
>      Project: Derby
>         Type: Improvement
>   Components: Test
>     Reporter: David Van Couvering
>     Priority: Minor
>
> Now that we have an excellent notification system for tinderbox/nightly 
> regression failures, I would like to suggest that we reduce the size of the 
> test suite being run prior to checkin.   I am not sure what should be in such 
> a minimal test, but in particular I would like to remove things such as the 
> stress test and generally reduce the number of tests being run for each 
> subsystem/area of code.
> As an example of how derbyall currently affects my productivity, I was 
> running derbyall on my machine starting at 2pm, and by evening it was still 
> running.  At 9pm my machine was accidentally powered down, and this morning I 
> am restarting the test run.
> I have been tempted in the past (and have acted on such temptation) to run a 
> smaller set of tests, only to find out that I have blocked others who run 
> derbyall prior to checkin.  For this reason, we need to define a minimal 
> acceptance test suite (MATS) that we all agree to run prior to checkin.
> One could argue that you can run your tests on another machine and thus avoid 
> the loss of productivity, but we can't assume everybody in the community has 
> nice big test servers to run their tests on.
> If there are no objections, I can take a first pass at defining what this 
> test suite should look like, but I suspect many others in the community have 
> strong opinions about this and may even wish to volunteer to do this 
> definition themselves (for example, some of you who may be working in the QA 
> division in some of our Big Companies :) ).

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
