A good comparison would be the JMS TCK. We don't run that on every build, so sometimes we check in code that breaks it. That said, it would be better if we didn't.
These tests can be thought of as picking up where the TCK ends.

On 28/09/2007, Rupert Smith <[EMAIL PROTECTED]> wrote:
>
> It would mean that, yes. Although, where there is a genuine regression
> issue, i.e., we've fixed something on M2 so the test passes but it breaks
> on trunk, I think checking the tests in is acceptable. The failure on trunk
> is there to tell us that at some point trunk needs to deal with the
> regression issue.
>
> On 28/09/2007, Rafael Schloming <[EMAIL PROTECTED]> wrote:
> >
> > Rupert Smith wrote:
> > > How about this. Instead of putting these JMS+Qpid java tests under
> > > trunk/qpidtests (or jmstests or whatever), let's put them in a module
> > > under trunk/qpid/java/integrationtests (or jmstests, or how about
> > > 'regressiontests'?). Trunk will be the definitive source for these
> > > non-version-specific, non-branch-specific tests; they will be pulled
> > > onto the M2/2.1 or other branches by setting up an svn:external onto
> > > trunk. That way they will always be the same across all branches, and
> > > there will be no need to start putting stuff outside of trunk/qpid.
> >
> > Does this mean that when you modify the tests you need to build all the
> > branches to make sure you didn't break anything?
> >
> > --Rafael
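For what it's worth, the svn:externals setup Rupert describes would look roughly like the sketch below. This is only an illustration: the repository URL and branch paths are guesses, not the actual Qpid layout, and the module name follows the trunk/qpid/java/integrationtests suggestion above.

```
# Illustrative only -- URL and paths are assumptions, not the real repo layout.
# Pull the shared tests from trunk into an M2 branch working copy:
svn propset svn:externals \
  "integrationtests https://svn.apache.org/repos/asf/incubator/qpid/trunk/qpid/java/integrationtests" \
  branches/M2/qpid/java

# Commit the property change, then update to fetch the external:
svn commit -m "Pull shared integration tests from trunk via svn:externals" branches/M2/qpid/java
svn update branches/M2/qpid/java
```

After this, a checkout of the M2 branch would materialise the trunk copy of the tests under qpid/java/integrationtests, so every branch sees the same test sources, which is also what gives rise to Rafael's question: a test change on trunk immediately affects every branch that externals it in.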
