2014-06-17 21:39 GMT+09:00 John Garbutt <j...@johngarbutt.com>:
> On 12 June 2014 17:10, Sean Dague <s...@dague.net> wrote:
>> On 06/12/2014 12:02 PM, Matt Riedemann wrote:
>>>
>>> On 6/12/2014 10:51 AM, Matthew Treinish wrote:
>>>> On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:
>>>>> On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith <d...@danplanet.com> wrote:
>>>>>
>>>>>>> I think it'd be OK to move them to the experimental queue and a
>>>>>>> periodic nightly job until the v2.1 stuff shakes out. The v3 API is
>>>>>>> marked experimental right now, so it seems fitting that it'd be
>>>>>>> running tests in the experimental queue until at least the spec is
>>>>>>> approved and microversioning starts happening in the code base.
>>>>>>
>>>>>> I think this is reasonable. Continuing to run the full set of tests on
>>>>>> every patch for something we never expect to see the light of day (in
>>>>>> its current form) seems wasteful to me. Plus, we're going to
>>>>>> (presumably) be ramping up tests on v2.1, which means to me that we'll
>>>>>> need to clear out some capacity to make room for that.
>>>>>
>>>>> That's true, though I was suggesting that as v2.1 microversions roll
>>>>> out, we drop the tests out of v3 and move them to v2.1 microversion
>>>>> testing, so there's no change in capacity required.
>>>>
>>>> That's why I wasn't proposing that we rip the tests out of the tree.
>>>> I'm just trying to weigh the benefit of leaving them enabled on every
>>>> run against the increased load they cause in an arguably overworked
>>>> gate.
>>>>
>>>>> Matt - how much of the time overhead is scenario tests? That's
>>>>> something that would have a lot less impact if moved to an
>>>>> experimental queue.
>>>>> Although the v3 API as a whole won't be officially exposed, the API
>>>>> tests exercise specific features fairly independently, which are slated
>>>>> for v2.1 microversions on a case-by-case basis, and I don't want to see
>>>>> those regress. I guess my concern is how often the experimental queue
>>>>> results really get looked at, and how hard/quick it is to revert when
>>>>> lots of stuff merges in a short period of time.
>>>>
>>>> The scenario tests tend to be the slower tests in Tempest. I have to
>>>> disagree that removing them would have lower impact. The scenario tests
>>>> provide the best functional verification, which is part of the reason
>>>> we always have failures in the gate on them. While it would make the
>>>> gate faster, the decrease in what we're testing isn't worth it. Also,
>>>> for reference, I pulled the test run times that were greater than 10
>>>> sec out of a recent gate run:
>>>> http://paste.openstack.org/show/83827/
>>>>
>>>> The experimental jobs aren't automatically run; they have to be
>>>> manually triggered by leaving a 'check experimental' comment. So for
>>>> changes on which we want to test the v3 API, a comment would have to be
>>>> left. Preventing regressions is why we'd also have the nightly job,
>>>> which I think is a better compromise for the v3 tests while we wait to
>>>> migrate them to the v2.1 microversion tests.
>>>>
>>>> Another option is that we make the v3 job run only on the check queue
>>>> and not on the gate. But the benefits of that are slightly more
>>>> limited, because we'd still be holding up the check queue.
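[Editor's note] The kind of filter Matt describes — pulling runtimes greater than 10 seconds out of a gate run — can be sketched roughly as follows. This is an illustration only: it assumes input lines of the form `<test_id> <seconds>`, which may not match the exact format of the testr/subunit output behind the paste link.

```python
# Hedged sketch: filter per-test runtimes above a threshold.
# Assumes each input line is "<test_id> <seconds>"; the real
# subunit/testr output format may differ.

def slow_tests(lines, threshold=10.0):
    """Return (test_id, seconds) pairs whose runtime exceeds threshold,
    slowest first."""
    results = []
    for line in lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip anything that isn't "name seconds"
        name, secs = parts
        try:
            secs = float(secs)
        except ValueError:
            continue  # second field wasn't a number
        if secs > threshold:
            results.append((name, secs))
    return sorted(results, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    sample = [
        "tempest.scenario.test_volume_boot 42.1",
        "tempest.api.compute.test_flavors 1.2",
    ]
    print(slow_tests(sample))
```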
>>>>
>>>> -Matt Treinish
>>>>
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev@lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> Yeah, the scenario tests need to stay; that's how we've exposed the two
>>> big ssh bugs in the last couple of weeks, which are obvious issues at
>>> scale.
>>>
>>> I still think experimental/periodic is the way to go, not a hybrid of
>>> check-on/gate-off. If we want to explicitly test v3 API changes we can
>>> do that with 'recheck experimental'. Granted, someone has to remember
>>> to run those, much like checking/rechecking 3rd party CI results.
>>>
>>> One issue I've had with the nightly periodic job is finding out where
>>> the results are in an easy-to-consume format. Is there something out
>>> there for that? I'm thinking specifically of things we've turned off in
>>> the gate before, like multi-backend volume tests and
>>> allow_tenant_isolation=False.
>>
>> It's getting emailed to the otherwise defunct openstack-qa list.
>> Subscribe there for nightlies.
>>
>> Also agreed, the scenario tests find and prevent *tons* of real issues.
>> Those have to stay. There is a reason we use them in the smoke runs for
>> grenade: they are a very solid sniff test of things really working.
>>
>> I also think that by policy we should probably pull v3 out of the main
>> job, as it's not a stable API. We've had issues in Tempest with people
>> landing tests and then trying to go and change the API. The biggest
>> issue in taking branchless Tempest back to stable/havana was the Nova
>> v3 API, as it's actually quite different in Havana than in Icehouse.
>>
>> We have a chicken/egg challenge in testing experimental APIs which will
>> need to get resolved, but for now I think turning off v3 is the right
>> approach.
>
> +1
>
> Seems like we should concentrate on v2 tests for now.
>
> To stop the v3 code from regressing, we should be merging and testing
> v2.1 ASAP, using those v2 tests.
Yes, right. We have already started v2.1 API tests on the Tempest side
with a prototype: https://review.openstack.org/#/c/96662/

Thanks
Ken'ichi Ohmichi

--

> For the future microversions, we still have those v3 tests to be
> inspired by.
>
> Chris, does that seem workable?
>
> John