Re: [openstack-dev] Which program for Rally

2014-08-18 Thread Matthew Treinish
On Fri, Aug 15, 2014 at 01:57:29AM +0400, Boris Pavlovic wrote:
> Matt,
> 
>> One thing did just occur to me while writing this though it's probably worth
> > investigating splitting out the stress test framework as an external
> > tool/project after we start work on the tempest library. [3]
> 
> 
> 
> I fully agree with the fact that stress testing doesn't belong to Tempest.
> 
> This current thread is all about this aspect and all my arguments, related
> to splitting Rally and merging it to tempest are related to this.
> 
> 
> Could you please elaborate, why instead of ready solution "Rally"  that has
> community and is aligned with OpenStack and OpenStack processes you are
> going to create from scratch similar solution?

This is the same issue that was brought up on your subunit2sql thread [1] and
has come up several times already in this thread. Rally doesn't work as
individual components; you need to use the whole tool chain to do anything with
it. There are pieces of Rally which would be very useful in conjunction with all
the other projects we have in the gating workflow. But, because Rally has
decided to duplicate much of what we already have and then tie everything
together in a monolithic toolchain, we can't use those pieces which are useful
on their own. This doesn't work with the workflow we already have, and it also
ignores the split in functionality we currently have and are working on
improving. Instead of rewriting all the details again, just look at [2][3];
they elaborate on all of this already.

You're also ignoring that by splitting out the stress test framework we'd be
doing the exact thing you're so opposed to doing in Rally: splitting existing
functionality out of a larger, more complex project into smaller, more
purpose-built, consumable chunks that work together to build a modular pipeline,
which can be used in different configurations to suit the application. It's also
not being created from scratch; the stress framework already exists and has for
quite some time. (it pre-dates Rally) We would just be breaking it off into a
separate repo to make the split in functionality clearer. It is essentially
a separate tool already; it just lives in the tempest tree, which I feel is a
source of some confusion around it.

> 
> I really don't see any reasons why we need to duplicate existing and
> working solution and can't just work together on Rally?

So I have to point out the double standard in this statement. You're ignoring
all the functionality that Rally has already duplicated. For example, tempest
exists as a battery of tests which is being slowly duplicated in Rally, and the
stress test framework is functionality that was more or less completely
duplicated in Rally. You seem to be under the mistaken impression that by
continuing to improve things in QA program projects we're duplicating Rally,
which is honestly something I'm having a hard time understanding. I especially
do not see the value in working on improvements to tempest and the existing
workflow by replacing everything with Rally.



I think there's been continued confusion around exactly how all the projects
work together in the QA program, so I wrote a blog post giving a high level
overview here: http://blog.kortar.org/?p=4

Now if Rally is willing to work with us it would really be awesome, since a lot
of the work has already been done by them. But we would need to rework Rally so
the bits which don't already exist in the QA program are interoperable with what
we have now and can be used independently. Duplicated functionality would need
to be removed, and we would need to add the improvements Rally made on existing
functionality back into the projects where they really belong. However, as I've
been continually saying in this thread, Rally in its current form doesn't work
with our current model, nor does it work with the vision, at least the one I
have, for how things should be in the future.

Now I'm sure someone is going to see the flow diagrams on my blog and take it as
a challenge to explain how Rally does it better today, or something along those
lines. Please don't, because honestly it's irrelevant; these are just ideas in
my head, and where I want to help steer the QA program as long as I'm working on
it. (at least as of today) I fully expect things to evolve and grow more
organically, resulting in something that will be completely different from that.

Also, I'm really done with this thread; I've outlined my stance repeatedly and
tried my best to explain my position as clearly as I can. At this point I have
nothing else to add. I view the burden as being fully on the Rally team to
decide whether they want to start working with the QA program towards
integrating Rally into the QA program (the steps Sean outlined in [2] are a
good start) or remain a separate external project. (or, I guess, unless the TC
mandates something else)

One last comment, I do want to apologize in advance if any of my w

Re: [openstack-dev] Which program for Rally

2014-08-15 Thread Jeremy Stanley
On 2014-08-13 19:28:48 -0700 (-0700), Joe Gordon wrote:
> We actually run out of nodes almost every day now (except
> weekends), we have about 800 nodes, and hit that quota most days
[...]

It's worth noting that a lot of the recent exhaustion has been due
to OpenStack bugs impacting the providers donating those resources
(instances perpetually stuck in error state when deleting because
nova calls to neutron timed out on port deletion, neutron floating IP
deletion through nova failing at random and causing leaks that eat into
our quota, et cetera). Not to say that there won't be new bugs
cropping up to take their place once those are solved or worked around,
but at least the current state is not entirely due to the volume and
duration of jobs we run.
-- 
Jeremy Stanley



Re: [openstack-dev] Which program for Rally

2014-08-14 Thread Boris Pavlovic
Matt,

> One thing did just occur to me while writing this though it's probably worth
> investigating splitting out the stress test framework as an external
> tool/project after we start work on the tempest library. [3]



I fully agree that stress testing doesn't belong in Tempest.

This thread is really about that aspect, and all my arguments about splitting
Rally and merging it into Tempest relate to it.


Could you please elaborate on why, instead of using the ready solution "Rally",
which has a community and is aligned with OpenStack and OpenStack processes,
you are going to create a similar solution from scratch?

I really don't see any reason why we need to duplicate an existing, working
solution instead of just working together on Rally.


Best regards,
Boris Pavlovic


On Fri, Aug 15, 2014 at 1:15 AM, Matthew Treinish 
wrote:

> On Wed, Aug 13, 2014 at 03:48:59PM -0600, Duncan Thomas wrote:
> > On 13 August 2014 13:57, Matthew Treinish  wrote:
> > > On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
> > >> Keystone, Glance, Cinder, Neutron and Heat are running rally
> performance
> > >> jobs, that can be used for performance testing, benchmarking,
> regression
> > >> testing (already now). These jobs supports in-tree plugins for all
> > >> components (scenarios, load generators, benchmark context) and they
> can use
> > >> Rally fully without interaction with Rally team at all. More about
> these
> > >> jobs:
> > >>
> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
> > >> So I really don't see anything like this in tempest (even in observed
> > >> future)
> >
> > > So this is actually the communication problem I mentioned before.
> Singling out
> > > individual projects and getting them to add a rally job is not "cross
> project"
> > > communication. (this is part of what I meant by "push using Rally")
> There was no
> > > larger discussion on the ML or a topic in the project meeting about
> adding these
> > > jobs. There was no discussion about the value vs risk of adding new
> jobs to the
> > > gate. Also, this is why less than half of the integrated projects have
> these
> > > jobs. Having asymmetry like this between gating workloads on projects
> helps no
> > > one.
> >
> > So the advantage of the approach, rather than having a massive
> > cross-product discussion, is that interested projects (I've been very
> > interested for a cinder core PoV) act as a test bed for other
> > projects. 'Cross project' discussions rather come to other teams, they
> > rely on people to find them, where as Boris came to us, said I've got
> > this thing you might like, try it out, tell me what you want. He took
> > feedback, iterated fast and investigated bugs. It has been a genuine
> > pleasure to work with him, and I feel we made progress faster than we
> > would have done if it was trying to please everybody.
>
> I'm not arguing whether Boris was great to work with or not. Or whether
> there
> isn't value in talking directly to the dev team when setting up a new job.
> That
> is definitely the fastest path to getting a new job up and running. But,
> for
> something like adding a new class of dsvm job which runs on every patch,
> that
> affects everyone, not just the project where the jobs are being added. A
> larger
> discussion is really necessary to weigh whether such a job should be
> added. It
> really only needs to happen once, just before the first one is added on an
> integrated project.
>
> >
> > > That being said the reason I think osprofiler has been more accepted
> and it's
> > > adoption into oslo is not nearly as contentious is because it's an
> independent
> > > library that has value outside of itself. You don't need to pull in a
> monolithic
> > > stack to use it. Which is a design point more conducive with the rest
> of
> > > OpenStack.
> >
> > Sorry, are you suggesting tempest isn't a giant monolithic thing?
> > Because I was able to comprehend the rally code very quickly, that
> > isn't even slightly true of tempest. Having one simple tool that does
> > one thing well is exactly what rally has tried to do - tempest seems
> > to want to be five different things at once (CI, instalation tests,
> > trademark, preformance, stress testing, ...)
>
> This is actually a common misconception about the purpose and role of
> Tempest.
> Tempest is strictly concerned with being the integration test suite for
> OpenStack, which just includes the actual tests and some methods of
> running the
> tests. This is attempted to be done in a manner which is independent of the
> environment in which tempest is run or run against. (for example, devstack
> vs a
> public cloud) Yes tempest is a large project and has a lot of tests which
> just
> adds to it's complexity, but it's scope is quite targeted. It's just that
> it
> grows at the same rate OpenStack scope grows because tempest has coverage
> for
> all the projects.
>
> Methods of running the te

Re: [openstack-dev] Which program for Rally

2014-08-14 Thread Matthew Treinish
On Wed, Aug 13, 2014 at 03:48:59PM -0600, Duncan Thomas wrote:
> On 13 August 2014 13:57, Matthew Treinish  wrote:
> > On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
> >> Keystone, Glance, Cinder, Neutron and Heat are running rally performance
> >> jobs, that can be used for performance testing, benchmarking, regression
> >> testing (already now). These jobs supports in-tree plugins for all
> >> components (scenarios, load generators, benchmark context) and they can use
> >> Rally fully without interaction with Rally team at all. More about these
> >> jobs:
> >> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
> >> So I really don't see anything like this in tempest (even in observed
> >> future)
> 
> > So this is actually the communication problem I mentioned before. Singling 
> > out
> > individual projects and getting them to add a rally job is not "cross 
> > project"
> > communication. (this is part of what I meant by "push using Rally") There 
> > was no
> > larger discussion on the ML or a topic in the project meeting about adding 
> > these
> > jobs. There was no discussion about the value vs risk of adding new jobs to 
> > the
> > gate. Also, this is why less than half of the integrated projects have these
> > jobs. Having asymmetry like this between gating workloads on projects helps 
> > no
> > one.
> 
> So the advantage of the approach, rather than having a massive
> cross-product discussion, is that interested projects (I've been very
> interested for a cinder core PoV) act as a test bed for other
> projects. 'Cross project' discussions rather come to other teams, they
> rely on people to find them, where as Boris came to us, said I've got
> this thing you might like, try it out, tell me what you want. He took
> feedback, iterated fast and investigated bugs. It has been a genuine
> pleasure to work with him, and I feel we made progress faster than we
> would have done if it was trying to please everybody.

I'm not arguing whether Boris was great to work with or not. Or whether there
isn't value in talking directly to the dev team when setting up a new job. That
is definitely the fastest path to getting a new job up and running. But, for
something like adding a new class of dsvm job which runs on every patch, that
affects everyone, not just the project where the jobs are being added. A larger
discussion is really necessary to weigh whether such a job should be added. It
really only needs to happen once, just before the first one is added on an
integrated project.

> 
> > That being said the reason I think osprofiler has been more accepted and 
> > it's
> > adoption into oslo is not nearly as contentious is because it's an 
> > independent
> > library that has value outside of itself. You don't need to pull in a 
> > monolithic
> > stack to use it. Which is a design point more conducive with the rest of
> > OpenStack.
> 
> Sorry, are you suggesting tempest isn't a giant monolithic thing?
> Because I was able to comprehend the rally code very quickly, that
> isn't even slightly true of tempest. Having one simple tool that does
> one thing well is exactly what rally has tried to do - tempest seems
> to want to be five different things at once (CI, instalation tests,
> trademark, preformance, stress testing, ...)

This is actually a common misconception about the purpose and role of Tempest.
Tempest is strictly concerned with being the integration test suite for
OpenStack, which just includes the actual tests and some methods of running the
tests. This is attempted in a manner which is independent of the environment in
which tempest is run, or run against. (for example, devstack vs a public cloud)
Yes, tempest is a large project and has a lot of tests, which adds to its
complexity, but its scope is quite targeted. It's just that it grows at the
same rate OpenStack's scope grows, because tempest has coverage for all the
projects.

The methods of running the tests do include the stress test framework, but that
is mostly just a way of leveraging the large quantity of tests we currently
have in-tree to generate load. [1] (Yeah, we need to write better user docs
around this and a lot of other things) It just lets you define which tests to
use and how to loop and distribute them over workers. [2]
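
To make the "loop and distribute over workers" idea concrete, here is a minimal
sketch in plain Python of that pattern. The helper names and the dummy test are
hypothetical; this is not the actual stress framework code or its configuration
format, just an illustration of the shape of the thing.

    # Illustrative only: loop a set of test callables across worker processes,
    # roughly the pattern used to turn existing tests into a load generator.
    import multiprocessing
    import random
    import time


    def run_in_loop(test_func, duration):
        """Repeatedly invoke one test callable until the duration expires."""
        deadline = time.time() + duration
        while time.time() < deadline:
            test_func()


    def sample_test():
        # Stand-in for a real API test; here it just sleeps a random interval.
        time.sleep(random.uniform(0.01, 0.1))


    if __name__ == "__main__":
        duration = 60          # seconds of load to generate
        workers_per_test = 4   # how widely each test is distributed
        tests = [sample_test]  # in the real framework these would be existing tests

        procs = [
            multiprocessing.Process(target=run_in_loop, args=(t, duration))
            for t in tests
            for _ in range(workers_per_test)
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()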

The trademark, CI, upgrade testing, and installation testing are just examples
of applications where tempest is being used. (some of which are the domain of
other QA or Infra program projects, some are not) If you look in the tempest
tree you'll see very little specifically about any of those applications.
They're all mostly accomplished by building tooling around tempest. For example:
refstack->trademark, devstack-gate->ci, grenade->upgrade, etc. Tempest is just a
building block that can be used to make all of those things. As all of these
different use cases are basically tempest's primary consumers, we do have to
take them into account.

Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Tom Fifield
On 13/08/14 19:55, Boris Pavlovic wrote:
> Matt, 
> 
> 
> On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
> > On 11/08/14 16:21, Matthew Treinish wrote:
> > >I'm sorry, but the fact that the
> > >docs in the rally tree has a section for user testimonials [4] I feel 
> speaks a
> > >lot about the intent of the project.
> 
> 
> Yes, you are absolutely right it speaks a lot about the intent of the
> project. 
> 
> One of the goal of Rally is to be the bridge between Operators and
> OpenStack community. 

Just throwing something out of left field here, but since the purpose of
the User Committee is basically that, maybe there's something to be
investigated there ...

Regards,


Tom



Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Joe Gordon
On Wed, Aug 13, 2014 at 2:48 PM, Duncan Thomas 
wrote:

> On 13 August 2014 13:57, Matthew Treinish  wrote:
> > On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
> >> Keystone, Glance, Cinder, Neutron and Heat are running rally performance
> >> jobs, that can be used for performance testing, benchmarking, regression
> >> testing (already now). These jobs supports in-tree plugins for all
> >> components (scenarios, load generators, benchmark context) and they can
> use
> >> Rally fully without interaction with Rally team at all. More about these
> >> jobs:
> >>
> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
> >> So I really don't see anything like this in tempest (even in observed
> >> future)
>
> > So this is actually the communication problem I mentioned before.
> Singling out
> > individual projects and getting them to add a rally job is not "cross
> project"
> > communication. (this is part of what I meant by "push using Rally")
> There was no
> > larger discussion on the ML or a topic in the project meeting about
> adding these
> > jobs. There was no discussion about the value vs risk of adding new jobs
> to the
> > gate. Also, this is why less than half of the integrated projects have
> these
> > jobs. Having asymmetry like this between gating workloads on projects
> helps no
> > one.
>
> So the advantage of the approach, rather than having a massive
> cross-product discussion, is that interested projects (I've been very
> interested for a cinder core PoV) act as a test bed for other
> projects. 'Cross project' discussions rather come to other teams, they
> rely on people to find them, where as Boris came to us, said I've got
> this thing you might like, try it out, tell me what you want. He took
> feedback, iterated fast and investigated bugs. It has been a genuine
> pleasure to work with him, and I feel we made progress faster than we
> would have done if it was trying to please everybody.
>
> > That being said the reason I think osprofiler has been more accepted and
> it's
> > adoption into oslo is not nearly as contentious is because it's an
> independent
> > library that has value outside of itself. You don't need to pull in a
> monolithic
> > stack to use it. Which is a design point more conducive with the rest of
> > OpenStack.
>
> Sorry, are you suggesting tempest isn't a giant monolithic thing?
> Because I was able to comprehend the rally code very quickly, that
> isn't even slightly true of tempest. Having one simple tool that does
> one thing well is exactly what rally has tried to do - tempest seems
> to want to be five different things at once (CI, instalation tests,
> trademark, preformance, stress testing, ...)
>
> >> Matt, Sean - seriously community is about convincing people, not about
> >> forcing people to do something against their wiliness.  You are making
> huge
> >> architectural decisions without deep knowledge about what is Rally, what
> >> are use cases, road map, goals and auditory.
> >>
> >> IMHO community in my opinion is thing about convincing people. So QA
> >> program should convince Rally team (at least me) to do such changes. Key
> >> secret to convince me, is to say how this will help OpenStack to perform
> >> better.
> >
> > If community, per your definition, is about convincing people then there
> needs
> > to be a 2-way discussion. This is an especially key point considering the
> > feedback on this thread is basically the same feedback you've been
> getting since
> > you first announced Rally on the ML. [1] (and from even before that I
> think, but
> > it's hard to remember all the details from that far back)  I'm afraid
> that
> > without a shared willingness to explore what we're suggesting because of
> > preconceived notions then I fail to see the point in moving forward. The
> fact
> > that this feedback has been ignored is why this discussion has come up
> at all.
> >
> >>
> >> Currently Rally team see a lot of issues related to this decision:
> >>
> >> 1) It breaks already existing performance jobs (Heat, Glance, Cinder,
> >> Neutron, Keystone)
> >
> > So firstly, I want to say I find these jobs troubling. Not just from the
> fact
> > that because of the nature of the gate (2nd level virt on public clouds)
> the
> > variability between jobs can be staggering. I can't imagine what value
> there is
> > in running synthetic benchmarks in this environment. It would only
> reliably
> > catch the most egregious of regressions. Also from what I can tell none
> of these
> > jobs actually compare the timing data to the previous results, it just
> generates
> > the data and makes a pretty graph. The burden appears to be on the user
> to
> > figure out what it means, which really isn't that useful. How have these
> jobs
> > actually helped? IMO the real value in performance testing in the gate
> is to
> > capture the longer term trends in the data. Which is something these
> jobs are
> > not doing.
>
> S

Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Robert Collins
On 14 August 2014 09:48, Duncan Thomas  wrote:
> On 13 August 2014 13:57, Matthew Treinish  wrote:

>> So this is actually the communication problem I mentioned before. Singling 
>> out
>> individual projects and getting them to add a rally job is not "cross 
>> project"
>> communication. (this is part of what I meant by "push using Rally") There 
>> was no
>> larger discussion on the ML or a topic in the project meeting about adding 
>> these
>> jobs. There was no discussion about the value vs risk of adding new jobs to 
>> the
>> gate. Also, this is why less than half of the integrated projects have these
>> jobs. Having asymmetry like this between gating workloads on projects helps 
>> no
>> one.

W.r.t. communication we had a very limited number of slots at Atlanta
for cross-project discussion; osprofiler *did* get a slot (I forget
under what banner) - and it has iterated to address deployer concerns
(the ones I know of anyhow).

I'm very keen to see osprofiler integrated, close to or even by
default; it's an essential bit of operator and diagnostic tooling. I'd
like to ask that we address Rally and osprofiler separately.

> So the advantage of the approach, rather than having a massive
> cross-product discussion, is that interested projects (I've been very
> interested for a cinder core PoV) act as a test bed for other
> projects. 'Cross project' discussions rather come to other teams, they
> rely on people to find them, where as Boris came to us, said I've got
> this thing you might like, try it out, tell me what you want. He took
> feedback, iterated fast and investigated bugs. It has been a genuine
> pleasure to work with him, and I feel we made progress faster than we
> would have done if it was trying to please everybody.

Right - try early, fail fast, iterate. Making everything get consensus
before anyone tries it is a waste of everyone's time. Most ideas won't
be hits, so let's wait for them to be successful before we standardise.

>> That being said the reason I think osprofiler has been more accepted and it's
>> adoption into oslo is not nearly as contentious is because it's an 
>> independent
>> library that has value outside of itself. You don't need to pull in a 
>> monolithic
>> stack to use it. Which is a design point more conducive with the rest of
>> OpenStack.
>
> Sorry, are you suggesting tempest isn't a giant monolithic thing?
> Because I was able to comprehend the rally code very quickly, that
> isn't even slightly true of tempest. Having one simple tool that does
> one thing well is exactly what rally has tried to do - tempest seems
> to want to be five different things at once (CI, instalation tests,
> trademark, preformance, stress testing, ...)
...
> So I put in a change to dump out the raw data from each run into a
> zipped json file so that I can start looking at the value of
> collecting this data As an experiment I think it is very worth
> while. The gate job is none voting, and apparently, at least on the
> cinder front, highly reliable. The job runs fast enough it isn't
> slowing the gate down - we aren't running out of nodes on the gate as
> far as I can tell, so I don't understand the hostility towards it.
> We'll run it for a bit, see if it proves useful, if it doesn't then we
> can turn it off and try something else.
>
> I'm confused by the hostility about this gate job - it is costing us
> nothing, if it turns out to be a pain we'll turn it off.
>
> Rally as a general tool has enabled me do do things that I wouldn't
> even consider trying with tempest. There shouldn't be a problem with a
> small number of parallel efforts - that's a founding principle of
> opensource in general.

+1

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Angus Salkeld
On Wed, 2014-08-13 at 15:48 -0600, Duncan Thomas wrote:
> On 13 August 2014 13:57, Matthew Treinish  wrote:
> > On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
> >> Keystone, Glance, Cinder, Neutron and Heat are running rally performance
> >> jobs, that can be used for performance testing, benchmarking, regression
> >> testing (already now). These jobs supports in-tree plugins for all
> >> components (scenarios, load generators, benchmark context) and they can use
> >> Rally fully without interaction with Rally team at all. More about these
> >> jobs:
> >> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
> >> So I really don't see anything like this in tempest (even in observed
> >> future)
> 
> > So this is actually the communication problem I mentioned before. Singling 
> > out
> > individual projects and getting them to add a rally job is not "cross 
> > project"
> > communication. (this is part of what I meant by "push using Rally") There 
> > was no
> > larger discussion on the ML or a topic in the project meeting about adding 
> > these
> > jobs. There was no discussion about the value vs risk of adding new jobs to 
> > the
> > gate. Also, this is why less than half of the integrated projects have these
> > jobs. Having asymmetry like this between gating workloads on projects helps 
> > no
> > one.
> 
> So the advantage of the approach, rather than having a massive
> cross-product discussion, is that interested projects (I've been very
> interested for a cinder core PoV) act as a test bed for other
> projects. 'Cross project' discussions rather come to other teams, they
> rely on people to find them, where as Boris came to us, said I've got
> this thing you might like, try it out, tell me what you want. He took
> feedback, iterated fast and investigated bugs. It has been a genuine
> pleasure to work with him, and I feel we made progress faster than we
> would have done if it was trying to please everybody.
> 
> > That being said the reason I think osprofiler has been more accepted and 
> > it's
> > adoption into oslo is not nearly as contentious is because it's an 
> > independent
> > library that has value outside of itself. You don't need to pull in a 
> > monolithic
> > stack to use it. Which is a design point more conducive with the rest of
> > OpenStack.
> 
> Sorry, are you suggesting tempest isn't a giant monolithic thing?
> Because I was able to comprehend the rally code very quickly, that
> isn't even slightly true of tempest. Having one simple tool that does
> one thing well is exactly what rally has tried to do - tempest seems
> to want to be five different things at once (CI, instalation tests,
> trademark, preformance, stress testing, ...)
> 
> >> Matt, Sean - seriously community is about convincing people, not about
> >> forcing people to do something against their wiliness.  You are making huge
> >> architectural decisions without deep knowledge about what is Rally, what
> >> are use cases, road map, goals and auditory.
> >>
> >> IMHO community in my opinion is thing about convincing people. So QA
> >> program should convince Rally team (at least me) to do such changes. Key
> >> secret to convince me, is to say how this will help OpenStack to perform
> >> better.
> >
> > If community, per your definition, is about convincing people then there 
> > needs
> > to be a 2-way discussion. This is an especially key point considering the
> > feedback on this thread is basically the same feedback you've been getting 
> > since
> > you first announced Rally on the ML. [1] (and from even before that I 
> > think, but
> > it's hard to remember all the details from that far back)  I'm afraid that
> > without a shared willingness to explore what we're suggesting because of
> > preconceived notions then I fail to see the point in moving forward. The 
> > fact
> > that this feedback has been ignored is why this discussion has come up at 
> > all.
> >
> >>
> >> Currently Rally team see a lot of issues related to this decision:
> >>
> >> 1) It breaks already existing performance jobs (Heat, Glance, Cinder,
> >> Neutron, Keystone)
> >
> > So firstly, I want to say I find these jobs troubling. Not just from the 
> > fact
> > that because of the nature of the gate (2nd level virt on public clouds) the
> > variability between jobs can be staggering. I can't imagine what value 
> > there is
> > in running synthetic benchmarks in this environment. It would only reliably
> > catch the most egregious of regressions. Also from what I can tell none of 
> > these
> > jobs actually compare the timing data to the previous results, it just 
> > generates
> > the data and makes a pretty graph. The burden appears to be on the user to
> > figure out what it means, which really isn't that useful. How have these 
> > jobs
> > actually helped? IMO the real value in performance testing in the gate is to
> > capture the longer term trends in the data. Which is something these jobs are
> > not doing.

Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Duncan Thomas
On 13 August 2014 13:57, Matthew Treinish  wrote:
> On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
>> Keystone, Glance, Cinder, Neutron and Heat are running rally performance
>> jobs, that can be used for performance testing, benchmarking, regression
>> testing (already now). These jobs supports in-tree plugins for all
>> components (scenarios, load generators, benchmark context) and they can use
>> Rally fully without interaction with Rally team at all. More about these
>> jobs:
>> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
>> So I really don't see anything like this in tempest (even in observed
>> future)

> So this is actually the communication problem I mentioned before. Singling out
> individual projects and getting them to add a rally job is not "cross project"
> communication. (this is part of what I meant by "push using Rally") There was 
> no
> larger discussion on the ML or a topic in the project meeting about adding 
> these
> jobs. There was no discussion about the value vs risk of adding new jobs to 
> the
> gate. Also, this is why less than half of the integrated projects have these
> jobs. Having asymmetry like this between gating workloads on projects helps no
> one.

So the advantage of the approach, rather than having a massive
cross-project discussion, is that interested projects (I've been very
interested from a cinder core PoV) act as a test bed for other
projects. 'Cross project' discussions rarely come to other teams; they
rely on people to find them, whereas Boris came to us and said: I've got
this thing you might like, try it out, tell me what you want. He took
feedback, iterated fast and investigated bugs. It has been a genuine
pleasure to work with him, and I feel we made progress faster than we
would have done if we had been trying to please everybody.

> That being said the reason I think osprofiler has been more accepted and it's
> adoption into oslo is not nearly as contentious is because it's an independent
> library that has value outside of itself. You don't need to pull in a 
> monolithic
> stack to use it. Which is a design point more conducive with the rest of
> OpenStack.

Sorry, are you suggesting tempest isn't a giant monolithic thing?
Because I was able to comprehend the rally code very quickly; that
isn't even slightly true of tempest. Having one simple tool that does
one thing well is exactly what rally has tried to do - tempest seems
to want to be five different things at once (CI, installation tests,
trademark, performance, stress testing, ...)

>> Matt, Sean - seriously community is about convincing people, not about
>> forcing people to do something against their wiliness.  You are making huge
>> architectural decisions without deep knowledge about what is Rally, what
>> are use cases, road map, goals and auditory.
>>
>> IMHO community in my opinion is thing about convincing people. So QA
>> program should convince Rally team (at least me) to do such changes. Key
>> secret to convince me, is to say how this will help OpenStack to perform
>> better.
>
> If community, per your definition, is about convincing people then there needs
> to be a 2-way discussion. This is an especially key point considering the
> feedback on this thread is basically the same feedback you've been getting 
> since
> you first announced Rally on the ML. [1] (and from even before that I think, 
> but
> it's hard to remember all the details from that far back)  I'm afraid that
> without a shared willingness to explore what we're suggesting because of
> preconceived notions then I fail to see the point in moving forward. The fact
> that this feedback has been ignored is why this discussion has come up at all.
>
>>
>> Currently Rally team see a lot of issues related to this decision:
>>
>> 1) It breaks already existing performance jobs (Heat, Glance, Cinder,
>> Neutron, Keystone)
>
> So firstly, I want to say I find these jobs troubling. Not just from the fact
> that because of the nature of the gate (2nd level virt on public clouds) the
> variability between jobs can be staggering. I can't imagine what value there 
> is
> in running synthetic benchmarks in this environment. It would only reliably
> catch the most egregious of regressions. Also from what I can tell none of 
> these
> jobs actually compare the timing data to the previous results, it just 
> generates
> the data and makes a pretty graph. The burden appears to be on the user to
> figure out what it means, which really isn't that useful. How have these jobs
> actually helped? IMO the real value in performance testing in the gate is to
> capture the longer term trends in the data. Which is something these jobs are
> not doing.

So I put in a change to dump out the raw data from each run into a
zipped json file so that I can start looking at the value of
collecting this data. As an experiment I think it is very worthwhile.
The gate job is non-voting, and apparently, at least on the cinder
front, highly reliable.
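
For what it's worth, here is a minimal sketch of the kind of post-processing
this implies: dumping raw per-run timings into a gzipped JSON file and then
comparing a new run against the mean of earlier runs to flag regressions. The
file names, layout and threshold are made-up assumptions for illustration, not
what the gate job actually does.

    # Illustrative only: persist raw timing samples per run and flag
    # regressions against the mean of previous runs.
    import glob
    import gzip
    import json
    import statistics


    def dump_run(path, timings):
        """Write one run's raw timing samples (seconds) as gzipped JSON."""
        with gzip.open(path, "wt") as f:
            json.dump({"timings": timings}, f)


    def check_regression(pattern, current, threshold=1.25):
        """Return True if this run's mean timing exceeds threshold x the historical mean."""
        prior_means = []
        for path in glob.glob(pattern):
            with gzip.open(path, "rt") as f:
                prior_means.append(statistics.mean(json.load(f)["timings"]))
        if not prior_means:
            return False  # nothing to compare against yet
        baseline = statistics.mean(prior_means)
        return statistics.mean(current) > threshold * baseline


    # Example with fabricated numbers:
    dump_run("run-0001.json.gz", [1.0, 1.1, 0.9])
    print(check_regression("run-*.json.gz", [1.6, 1.7, 1.5]))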

Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Matthew Treinish
On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
> Hi stackers,
> 
> 
> I would like to put some more details on current situation.
> 
> >
> > The issue is with what Rally is in it's
> > current form. It's scope is too large and monolithic, and it duplicates
> > much of
> > the functionality we either already have or need in current QA or Infra
> > projects. But, nothing in Rally is designed to be used outside of it. I
> > actually
> > feel pretty strongly that in it's current form Rally should *not* be a
> > part of
> > any OpenStack program
> 
> 
> Rally is not just a bunch of scripts like tempest, it's more like Nova,
> Cinder, and other projects that works out of box and resolve Operators &
> Dev use cases in one click.
> 
> This architectural design is the main key of Rally success, and why we got
> such large adoption and community.
> 
> So I'm opposed to this option. It feels to me like this is only on the table
> > because the Rally team has not done a great job of communicating or
> > working with
> > anyone else except for when it comes to either push using Rally, or this
> > conversation about adopting Rally.
> 
> 
> Actually Rally team done already a bunch of useful work including cross
> projects and infra stuff.
> 
> Keystone, Glance, Cinder, Neutron and Heat are running rally performance
> jobs, that can be used for performance testing, benchmarking, regression
> testing (already now). These jobs supports in-tree plugins for all
> components (scenarios, load generators, benchmark context) and they can use
> Rally fully without interaction with Rally team at all. More about these
> jobs:
> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
> So I really don't see anything like this in tempest (even in observed
> future)
> 

So this is actually the communication problem I mentioned before. Singling out
individual projects and getting them to add a rally job is not "cross project"
communication. (this is part of what I meant by "push using Rally") There was no
larger discussion on the ML or a topic in the project meeting about adding these
jobs. There was no discussion about the value vs risk of adding new jobs to the
gate. Also, this is why less than half of the integrated projects have these
jobs. Having asymmetry like this between gating workloads on projects helps no
one.

> 
> I would like to mention work on OSprofiler (cross service/project profiler)
> https://github.com/stackforge/osprofiler (that was done by Rally team)
> https://review.openstack.org/#/c/105096/
> (btw Glance already accepted it https://review.openstack.org/#/c/105635/ )

So I don't think we're actually talking about osprofiler here; this discussion
is about Rally itself. Although, personally, I feel that the communication
issues I mentioned before are still present around osprofiler. From everything
I've seen of osprofiler adoption, it has been the same divide-and-conquer
strategy of talking to people individually, instead of having a combined
discussion about the project and library adoption upfront.

That being said, the reason I think osprofiler has been more accepted, and its
adoption into oslo is not nearly as contentious, is because it's an independent
library that has value outside of itself. You don't need to pull in a monolithic
stack to use it, which is a design point more in keeping with the rest of
OpenStack.

> 
> 
> My primary concern is the timing for doing all of this work. We're
> > approaching
> > J-3 and honestly this feels like it would take the better part of an entire
> > cycle to analyze, plan, and then implement. Starting an analysis of how to
> > do
> > all of the work at this point I feel would just distract everyone from
> > completing our dev goals for the cycle. Probably the Rally team, if they
> > want
> > to move forward here, should start the analysis of this structural split
> > and we
> > can all pick this up together post-juno
> 
> 
> 
> Matt, Sean - seriously community is about convincing people, not about
> forcing people to do something against their wiliness.  You are making huge
> architectural decisions without deep knowledge about what is Rally, what
> are use cases, road map, goals and auditory.
> 
> IMHO community in my opinion is thing about convincing people. So QA
> program should convince Rally team (at least me) to do such changes. Key
> secret to convince me, is to say how this will help OpenStack to perform
> better.

If community, per your definition, is about convincing people, then there needs
to be a 2-way discussion. This is an especially key point considering the
feedback on this thread is basically the same feedback you've been getting since
you first announced Rally on the ML. [1] (and from even before that I think, but
it's hard to remember all the details from that far back) I'm afraid that
without a shared willingness to explore what we're suggesting, because of
preconceived notions, I fail to see the point in moving forward. The fact that
this feedback has been ignored is why this discussion has come up at all.

Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Boris Pavlovic
Matt,


On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
> > On 11/08/14 16:21, Matthew Treinish wrote:
> > >I'm sorry, but the fact that the
> > >docs in the rally tree has a section for user testimonials [4] I feel
> speaks a
> > >lot about the intent of the project.
>

Yes, you are absolutely right, it speaks a lot about the intent of the
project.

One of the goals of Rally is to be a bridge between operators and the
OpenStack community.
In particular, this directory was made to create a common OpenStack knowledge
base about how different configurations & deployments impact OpenStack, in
numbers.
There are 2 nice things about using this approach for collecting user
experience:
1) Everybody is able to repeat exactly the same experiment locally and
prove that it is true.
2) Collecting results from different operators is a completely distributed
process and scales really well.

Using these user stories, the OpenStack community (e.g. the Rally team) will be
able to create "best practices" for deployment configurations & architectures
that should be used in production.
And all of this is based on real-life experience (not just feelings).


. I personally feel that those user stories
> would probably be more appropriate as a blog post, and shouldn't
> necessarily be
> in a doc tree. But, that's not the stinging indictment which didn't need
> any
> explanation that I apparently thought it was yesterday; it definitely isn't
> something worth calling out on this thread.



A PTL is not a dictator; it's just a person who collects the opinions of the
project team & users and manages work on the project in such a way as to cover
everybody's use cases.
In other words, you shouldn't believe or feel, you should just ask the users
and community of the project what they think.
In my case I asked the Rally community and about 20 different operators from
various companies, and they like and support this idea. So I would prefer to
keep this section in the Rally tree and help with involving more people in
this work.


Best regards,
Boris Pavlovic







On Tue, Aug 12, 2014 at 9:47 PM, Matthew Treinish 
wrote:

> On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
> > On 11/08/14 16:21, Matthew Treinish wrote:
> > >I'm sorry, but the fact that the
> > >docs in the rally tree has a section for user testimonials [4] I feel
> speaks a
> > >lot about the intent of the project.
> >
> > What... does that even mean?
>
> Yeah, I apologize for that sentence, it was an unfair thing to say and
> uncalled
> for. Looking at it with fresh eyes this morning I'm not entirely sure what
> my intent
> was by pointing out that section. I personally feel that those user stories
> would probably be more appropriate as a blog post, and shouldn't
> necessarily be
> in a doc tree. But, that's not the stinging indictment which didn't need
> any
> explanation that I apparently thought it was yesterday; it definitely isn't
> something worth calling out on this thread.
>
> >
> > "They seem like just the type of guys that would help Keystone with
> > performance benchmarking!"
> > "Burn them!"
>
> I'm pretty sure that's not what I meant. :)
>
> >
> > >I apologize if any of this is somewhat incoherent, I'm still a bit
> jet-lagged
> > >so I'm not sure that I'm making much sense.
> >
> > Ah.
> >
>
> Yeah, let's chalk it up to dulled senses from insufficient sleep and
> trying to
> get back on my usual schedule from a trip down under.
>
> > >[4]
> http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories
>
> -Matt Treinish
>


Re: [openstack-dev] Which program for Rally

2014-08-12 Thread Matthew Treinish
On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
> On 11/08/14 16:21, Matthew Treinish wrote:
> >I'm sorry, but the fact that the
> >docs in the rally tree has a section for user testimonials [4] I feel speaks 
> >a
> >lot about the intent of the project.
> 
> What... does that even mean?

Yeah, I apologize for that sentence; it was an unfair thing to say and uncalled
for. Looking at it with fresh eyes this morning I'm not entirely sure what my
intent was by pointing out that section. I personally feel that those user
stories would probably be more appropriate as a blog post, and shouldn't
necessarily be in a doc tree. But that's not the stinging indictment, needing
no explanation, that I apparently thought it was yesterday; it definitely isn't
something worth calling out on this thread.

> 
> "They seem like just the type of guys that would help Keystone with
> performance benchmarking!"
> "Burn them!"

I'm pretty sure that's not what I meant. :)

> 
> >I apologize if any of this is somewhat incoherent, I'm still a bit jet-lagged
> >so I'm not sure that I'm making much sense.
> 
> Ah.
> 

Yeah, let's chalk it up to dulled senses from insufficient sleep and trying to
get back on my usual schedule from a trip down under.

> >[4] http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories


-Matt Treinish


Re: [openstack-dev] Which program for Rally

2014-08-12 Thread Doug Hellmann

On Aug 11, 2014, at 12:00 PM, David Kranz  wrote:

> On 08/06/2014 05:48 PM, John Griffith wrote:
>> I have to agree with Duncan here.  I also don't know if I fully understand 
>> the limit in options.  Stress test seems like it could/should be different 
>> (again overlap isn't a horrible thing) and I don't see it as siphoning off 
>> resources so not sure of the issue.  We've become quite wrapped up in 
>> projects, programs and the like lately and it seems to hinder forward 
>> progress more than anything else.
>> 
>> I'm also not convinced that Tempest is where all things belong, in fact I've 
>> been thinking more and more that a good bit of what Tempest does today 
>> should fall more on the responsibility of the projects themselves.  For 
>> example functional testing of features etc, ideally I'd love to have more of 
>> that fall on the projects and their respective teams.  That might even be 
>> something as simple to start as saying "if you contribute a new feature, you 
>> have to also provide a link to a contribution to the Tempest test-suite that 
>> checks it".  Sort of like we do for unit tests, cross-project tracking is 
>> difficult of course, but it's a start.  The other idea is maybe functional 
>> test harnesses live in their respective projects.
>> 
>> Honestly I think who better to write tests for a project than the folks 
>> building and contributing to the project.  At some point IMO the QA team 
>> isn't going to scale.  I wonder if maybe we should be thinking about 
>> proposals for delineating responsibility and goals in terms of functional 
>> testing?
>> 
>> 
>> 
> All good points. Your last paragraph was discussed by the QA team leading up 
> to and at the Atlanta summit. The conclusion was that the api/functional 
> tests focused on a single project should be part of that project. As Sean 
> said, we can envision there being half (or some other much smaller number) as 
> many such tests in tempest going forward.
> 
> Details are under discussion, but the way this is likely to play out is that 
> individual projects will start by creating their own functional tests outside 
> of tempest. Swift already does this and neutron seems to be moving in that 
> direction. There is a spec to break out parts of tempest 
> (https://github.com/openstack/qa-specs/blob/master/specs/tempest-library.rst) 
> into a library that might be used by projects implementing functional tests. 
> 
> Once a project has "sufficient" functional testing, we can consider removing 
> its api tests from tempest. This is a bit tricky because tempest needs to 
> cover *all* cross-project interactions. In this respect, there is no clear 
> line in tempest between scenario tests which have this goal explicitly, and 
> api tests which may also involve interactions that might not be covered in a 
> scenario. So we will need a principled way to make sure there is complete 
> cross-project coverage in tempest with a smaller number of api tests. 
> 
>  -David

We need to be careful about dumping the tests from tempest now that the DefCore 
group is relying on them as well. Tempest is no longer just a 
developer/QA/operations tool. It’s also being used as the basis of a trademark 
enforcement tool. That’s not to say we can’t change the test suite, but we have 
to consider a new angle when doing so.

Doug




Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Zane Bitter

On 11/08/14 16:21, Matthew Treinish wrote:

> I'm sorry, but the fact that the
> docs in the rally tree has a section for user testimonials [4] I feel speaks a
> lot about the intent of the project.

What... does that even mean?

"They seem like just the type of guys that would help Keystone with
performance benchmarking!"

"Burn them!"

> I apologize if any of this is somewhat incoherent, I'm still a bit jet-lagged
> so I'm not sure that I'm making much sense.

Ah.

> [4] http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories




Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Boris Pavlovic
Hi stackers,


I would like to add some more details on the current situation.

>
> The issue is with what Rally is in it's
> current form. It's scope is too large and monolithic, and it duplicates
> much of
> the functionality we either already have or need in current QA or Infra
> projects. But, nothing in Rally is designed to be used outside of it. I
> actually
> feel pretty strongly that in it's current form Rally should *not* be a
> part of
> any OpenStack program


Rally is not just a bunch of scripts like tempest; it's more like Nova,
Cinder, and other projects that work out of the box and resolve operator &
dev use cases in one click.

This architectural design is the main key to Rally's success, and why we got
such large adoption and community.

So I'm opposed to this option. It feels to me like this is only on the table
> because the Rally team has not done a great job of communicating or
> working with
> anyone else except for when it comes to either push using Rally, or this
> conversation about adopting Rally.


Actually, the Rally team has already done a bunch of useful work, including
cross-project and infra stuff.

Keystone, Glance, Cinder, Neutron and Heat are running Rally performance
jobs that can be used for performance testing, benchmarking and regression
testing (already now). These jobs support in-tree plugins for all
components (scenarios, load generators, benchmark context), and they can use
Rally fully without any interaction with the Rally team at all. More about
these jobs:
https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
So I really don't see anything like this in tempest (even in the foreseeable
future).
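
To make the plugin point a bit more concrete, here is a toy sketch of the
in-tree plugin idea: a project keeps a benchmark scenario in its own repo, and
a runner discovers it by name and reports raw durations. The registry,
decorator and scenario names below are invented for illustration; this is not
Rally's actual plugin API.

    # Toy illustration of an in-tree benchmark scenario plugin and the
    # registry a runner could use to discover it by name.
    import time

    SCENARIOS = {}


    def scenario(name):
        """Register a benchmark scenario under a plugin name."""
        def decorator(func):
            SCENARIOS[name] = func
            return func
        return decorator


    @scenario("dummy.sleep")
    def sleep_scenario(duration=0.05):
        """Stand-in for a real scenario such as 'boot a server' or 'create a volume'."""
        time.sleep(duration)


    def run(name, times=10, **kwargs):
        """Run a registered scenario repeatedly and return raw durations."""
        durations = []
        for _ in range(times):
            start = time.time()
            SCENARIOS[name](**kwargs)
            durations.append(time.time() - start)
        return durations


    if __name__ == "__main__":
        print(run("dummy.sleep", times=3, duration=0.05))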


I would like to mention the work on OSprofiler (a cross-service/project
profiler): https://github.com/stackforge/osprofiler (done by the Rally team)
https://review.openstack.org/#/c/105096/
(btw, Glance has already accepted it: https://review.openstack.org/#/c/105635/ )


My primary concern is the timing for doing all of this work. We're
> approaching
> J-3 and honestly this feels like it would take the better part of an entire
> cycle to analyze, plan, and then implement. Starting an analysis of how to
> do
> all of the work at this point I feel would just distract everyone from
> completing our dev goals for the cycle. Probably the Rally team, if they
> want
> to move forward here, should start the analysis of this structural split
> and we
> can all pick this up together post-juno



Matt, Sean - seriously, community is about convincing people, not about
forcing people to do something against their will. You are making huge
architectural decisions without deep knowledge of what Rally is, what its use
cases, road map, goals and audience are.

IMHO community is about convincing people, so the QA program should convince
the Rally team (at least me) to make such changes. The key to convincing me is
to say how this will help OpenStack perform better.

Currently the Rally team sees a lot of issues with this decision:

1) It breaks the already existing performance jobs (Heat, Glance, Cinder,
Neutron, Keystone).

2) It breaks the functional testing of Rally (which is already done in the
gates).

3) It makes the Rally team dependent on Tempest throughput, and what I have
heard multiple times from the QA team is that performance work is a very low
priority and that the major goal is to keep the gates working. So it will slow
down the performance team's work.

4) It brings a ton of questions about what should be in Rally and what should
be in Tempest, which are at the moment quite resolved:
https://docs.google.com/a/pavlovic.ru/document/d/137zbrz0KJd6uZwoZEu4BkdKiR_Diobantu0GduS7HnA/edit#heading=h.9ephr9df0new

5) It breaks an existing OpenStack team that is working 100% on performance,
regression and SLA topics. Sorry, but there is no such team in tempest. This
directory is not actively developed:
https://github.com/openstack/tempest/commits/master/tempest/stress


Matt, Sean, David - what are the real goals of merging Rally into Tempest?
I see huge harm for OpenStack and for the companies that are using Rally, and
I don't actually see any benefits.
What I have heard so far is something like "this decision will make tempest
better"...
But do you care more about Tempest than about OpenStack?


Best regards,
Boris Pavlovic




On Tue, Aug 12, 2014 at 12:37 AM, David Kranz  wrote:

>  On 08/11/2014 04:21 PM, Matthew Treinish wrote:
>
> I apologize for the delay in my response to this thread, between travelling
> and having a stuck 'a' key on my laptop this is the earliest I could
> respond.
> I opted for a separate branch on this thread to summarize my views and I'll
> respond inline later on some of the previous discussion.
>
> On Wed, Aug 06, 2014 at 12:30:35PM +0200, Thierry Carrez wrote:
> > Hi everyone,
> >
> > At the TC meeting yesterday we discussed Rally program request and
> > incubation request. We quickly dismissed the incubation request, as
> > Rally appears to be able to live happily on top of OpenStack and would
> > benefit from having a release cycle decoupled from the OpenStack
> > "integrated release".

Re: [openstack-dev] Which program for Rally

2014-08-11 Thread David Kranz

On 08/11/2014 04:21 PM, Matthew Treinish wrote:


I apologize for the delay in my response to this thread, between travelling
and having a stuck 'a' key on my laptop this is the earliest I could respond.
I opted for a separate branch on this thread to summarize my views and I'll
respond inline later on some of the previous discussion.

On Wed, Aug 06, 2014 at 12:30:35PM +0200, Thierry Carrez wrote:
> Hi everyone,
>
> At the TC meeting yesterday we discussed Rally program request and
> incubation request. We quickly dismissed the incubation request, as
> Rally appears to be able to live happily on top of OpenStack and would
> benefit from having a release cycle decoupled from the OpenStack
> "integrated release".
>
> That leaves the question of the program. OpenStack programs are created
> by the Technical Committee, to bless existing efforts and teams that are
> considered *essential* to the production of the "OpenStack" integrated
> release and the completion of the OpenStack project mission. There are 3
> ways to look at Rally and official programs at this point:
>
> 1. Rally as an essential QA tool
> Performance testing (and especially performance regression testing) is
> an essential QA function, and a feature that Rally provides. If the QA
> team is happy to use Rally to fill that function, then Rally can
> obviously be adopted by the (already-existing) QA program. That said,
> that would put Rally under the authority of the QA PTL, and that raises
> a few questions due to the current architecture of Rally, which is more
> product-oriented. There needs to be further discussion between the QA
> core team and the Rally team to see how that could work and if that
> option would be acceptable for both sides.

So ideally this is where Rally would belong; the scope of what Rally is
attempting to do is definitely inside the scope of the QA program. I don't
see any reason why that isn't the case. The issue is with what Rally is in
its current form. Its scope is too large and monolithic, and it duplicates
much of the functionality we either already have or need in current QA or
Infra projects. But nothing in Rally is designed to be used outside of it.
I actually feel pretty strongly that in its current form Rally should *not*
be a part of any OpenStack program.

All of the points Sean was making in the other branch on this thread (which
I'll probably respond to later) are huge concerns I share about Rally. He
basically summarized most of my views on the topic, so I'll try not to
rewrite everything. But the fact that all of this duplicate functionality
was implemented in a completely separate, Rally-specific manner, and can't
really be used unless all of Rally is used, is a large concern. I think the
path forward here is to have both QA and Rally work together on getting
common functionality that is re-usable and shareable. Additionally, I have
some concerns over the methodology that Rally uses for its performance
measurement. But I'll table that discussion because I think it would
partially derail this one.

So one open question is where, long term, this would leave Rally if we want
to bring it in under the QA program (after splitting up the functionality
to be more conducive to all our existing tools and projects). The one thing
Rally does here which we don't have an analogous solution for is, for lack
of a better term, the post-processing layer: the part that performs the
analysis on the collected data and generates the graphs. That is something
we'll have an eventual need for, and that is something we can work on
turning Rally into as we migrate everything to actually work together.

There are probably also other parts of Rally which don't fit into an
existing QA program project (or the QA program in general), and in those
cases we should probably split them off as smaller projects to implement
that bit. For example, the SLA stuff Rally has should probably be a
separate entity as well, but I'm unsure if that fits under the QA program.

My primary concern is the timing for doing all of this work. We're
approaching J-3 and honestly this feels like it would take the better part
of an entire cycle to analyze, plan, and then implement. Starting an
analysis of how to do all of the work at this point I feel would just
distract everyone from completing our dev goals for the cycle. Probably the
Rally team, if they want to move forward here, should start the analysis of
this structural split and we can all pick this up together post-juno.

>
> 2. Rally as an essential operator tool
> Regular benchmarking of OpenStack deployments is a best practice for
> cloud operators, and a feature that Rally provides. With a bit of a
> stretch, we could consider that benchmarking is essential to the
> completion of the OpenStack project mission. That program could one day
> evolve to include more such "operations best practices" tools. In
> 

Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Matthew Treinish
I apologize for the delay in my response to this thread, between travelling
and having a stuck 'a' key on my laptop this is the earliest I could respond.
I opted for a separate branch on this thread to summarize my views and I'll
respond inline later on some of the previous discussion.

On Wed, Aug 06, 2014 at 12:30:35PM +0200, Thierry Carrez wrote:
> Hi everyone,
> 
> At the TC meeting yesterday we discussed Rally program request and
> incubation request. We quickly dismissed the incubation request, as
> Rally appears to be able to live happily on top of OpenStack and would
> benefit from having a release cycle decoupled from the OpenStack
> "integrated release".
> 
> That leaves the question of the program. OpenStack programs are created
> by the Technical Committee, to bless existing efforts and teams that are
> considered *essential* to the production of the "OpenStack" integrated
> release and the completion of the OpenStack project mission. There are 3
> ways to look at Rally and official programs at this point:
> 
> 1. Rally as an essential QA tool
> Performance testing (and especially performance regression testing) is
> an essential QA function, and a feature that Rally provides. If the QA
> team is happy to use Rally to fill that function, then Rally can
> obviously be adopted by the (already-existing) QA program. That said,
> that would put Rally under the authority of the QA PTL, and that raises
> a few questions due to the current architecture of Rally, which is more
> product-oriented. There needs to be further discussion between the QA
> core team and the Rally team to see how that could work and if that
> option would be acceptable for both sides.

So ideally this is where Rally would belong; the scope of what Rally is
attempting to do is definitely inside the scope of the QA program. I don't see
any reason why that isn't the case. The issue is with what Rally is in its
current form. Its scope is too large and monolithic, and it duplicates much of
the functionality we either already have or need in current QA or Infra
projects. But nothing in Rally is designed to be used outside of it. I actually
feel pretty strongly that in its current form Rally should *not* be a part of
any OpenStack program.

All of the points Sean was making in the other branch on this thread (which I'll
probably respond to later) are huge concerns I share about Rally. He basically
summarized most of my views on the topic, so I'll try not to rewrite everything.
But the fact that all of this duplicate functionality was implemented in a
completely separate, Rally-specific manner, and can't really be used unless all
of Rally is used, is a large concern. I think the path forward here is to have
both QA and Rally work together on getting common functionality that is
re-usable and shareable. Additionally, I have some concerns over the
methodology that Rally uses for its performance measurement. But I'll table
that discussion because I think it would partially derail this one.

So one open question is where, long term, this would leave Rally if we want to
bring it in under the QA program (after splitting up the functionality to be
more conducive to all our existing tools and projects). The one thing Rally does
here which we don't have an analogous solution for is, for lack of a better
term, the post-processing layer: the part that performs the analysis on the
collected data and generates the graphs. That is something we'll have an
eventual need for, and that is something we can work on turning Rally into as
we migrate everything to actually work together.

There are probably also other parts of Rally which don't fit into an existing
QA program project (or the QA program in general), and in those cases we should
probably split them off as smaller projects to implement that bit. For example,
the SLA stuff Rally has should probably be a separate entity as well, but I'm
unsure if that fits under the QA program.

My primary concern is the timing for doing all of this work. We're approaching
J-3 and honestly this feels like it would take the better part of an entire
cycle to analyze, plan, and then implement. Starting an analysis of how to do
all of the work at this point I feel would just distract everyone from
completing our dev goals for the cycle. Probably the Rally team, if they want
to move forward here, should start the analysis of this structural split and we
can all pick this up together post-juno.

> 
> 2. Rally as an essential operator tool
> Regular benchmarking of OpenStack deployments is a best practice for
> cloud operators, and a feature that Rally provides. With a bit of a
> stretch, we could consider that benchmarking is essential to the
> completion of the OpenStack project mission. That program could one day
> evolve to include more such "operations best practices" tools. In
> addition to the slight stretch already mentioned, one concern here is
> that we still w

Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Zane Bitter

On 08/08/14 10:41, Anne Gentle wrote:

- Would have to ensure Rally is what we want "first" as getting to be PTL
since you are first to propose seems to be the model.


I know that at one time it was popular in the trade/gutter press to cast 
aspersions on new projects by saying that someone getting to be a PTL 
was the major motivation behind them. And although, having been there, I 
can tell you that this was grossly unfair to the people concerned, at 
least you could see where the impression might have come from in the 
days where being a PTL guaranteed you a seat on the TC.


These days with a directly elected TC, the job of a PTL is confined to 
administrative busywork. To the extent that a PTL holds any real ex 
officio power, which is not a great extent, it's probably a mistake that 
will soon be rectified. If anyone is really motivated to become a PTL by 
their dreams of avarice then I can guarantee that they will be disappointed.


It seems pretty clear to me that projects want their own programs 
because they don't think it wise to hand over control of all changes to 
the thing they've been working on for the past year or more to a group 
of people who have barely glanced at it before and already have other 
priorities. I submit that this is sufficient to completely explain the 
proliferation of programs without attributing to anyone any untoward 
motivations.


Finally, *yes*, the model is indeed that the project working in the open 
with the community eventually gets incubated, and the proprietary 
project working behind closed doors with a plan to "'open source' it one 
day, when it's perfect" is doomed to perpetual irrelevance. You'll note 
that anyone who is unhappy about that still has an obvious course of 
action that doesn't involve punishing the people who are trying to do 
the Right Thing by the community.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-11 Thread David Kranz

On 08/06/2014 05:48 PM, John Griffith wrote:
I have to agree with Duncan here.  I also don't know if I fully 
understand the limit in options.  Stress test seems like it 
could/should be different (again overlap isn't a horrible thing) and I 
don't see it as siphoning off resources so not sure of the issue. 
 We've become quite wrapped up in projects, programs and the like 
lately and it seems to hinder forward progress more than anything else.


I'm also not convinced that Tempest is where all things belong, in 
fact I've been thinking more and more that a good bit of what Tempest 
does today should fall more on the responsibility of the projects 
themselves.  For example functional testing of features etc, ideally 
I'd love to have more of that fall on the projects and their 
respective teams.  That might even be something as simple to start as 
saying "if you contribute a new feature, you have to also provide a 
link to a contribution to the Tempest test-suite that checks it". 
 Sort of like we do for unit tests, cross-project tracking is 
difficult of course, but it's a start.  The other idea is maybe 
functional test harnesses live in their respective projects.


Honestly I think who better to write tests for a project than the 
folks building and contributing to the project.  At some point IMO the 
QA team isn't going to scale.  I wonder if maybe we should be thinking 
about proposals for delineating responsibility and goals in terms of 
functional testing?




All good points. Your last paragraph was discussed by the QA team
leading up to and at the Atlanta summit. The conclusion was that the
api/functional tests focused on a single project should be part of that
project. As Sean said, we can envision there being half as many (or some
other much smaller number of) such tests in tempest going forward.


Details are under discussion, but the way this is likely to play out is 
that individual projects will start by creating their own functional 
tests outside of tempest. Swift already does this and neutron seems to 
be moving in that direction. There is a spec to break out parts of 
tempest 
(https://github.com/openstack/qa-specs/blob/master/specs/tempest-library.rst) 
into a library that might be used by projects implementing functional 
tests.


Once a project has "sufficient" functional testing, we can consider 
removing its api tests from tempest. This is a bit tricky because 
tempest needs to cover *all* cross-project interactions. In this 
respect, there is no clear line in tempest between scenario tests which 
have this goal explicitly, and api tests which may also involve 
interactions that might not be covered in a scenario. So we will need a 
principled way to make sure there is complete cross-project coverage in 
tempest with a smaller number of api tests.


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-08 Thread Anne Gentle
On Wed, Aug 6, 2014 at 5:30 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> At the TC meeting yesterday we discussed Rally program request and
> incubation request. We quickly dismissed the incubation request, as
> Rally appears to be able to live happily on top of OpenStack and would
> benefit from having a release cycle decoupled from the OpenStack
> "integrated release".
>
> That leaves the question of the program. OpenStack programs are created
> by the Technical Committee, to bless existing efforts and teams that are
> considered *essential* to the production of the "OpenStack" integrated
> release and the completion of the OpenStack project mission. There are 3
> ways to look at Rally and official programs at this point:
>
> 1. Rally as an essential QA tool
> Performance testing (and especially performance regression testing) is
> an essential QA function, and a feature that Rally provides. If the QA
> team is happy to use Rally to fill that function, then Rally can
> obviously be adopted by the (already-existing) QA program. That said,
> that would put Rally under the authority of the QA PTL, and that raises
> a few questions due to the current architecture of Rally, which is more
> product-oriented. There needs to be further discussion between the QA
> core team and the Rally team to see how that could work and if that
> option would be acceptable for both sides.
>

Pros: Performance testing is great and we don't have it now that I know of.
Considerations:
- QA then takes on more scope in their mission. Do they want it?
- Is Rally actually splittable this way?
- How important is the PTL role - PTL cage match may ensue next election?


>
> 2. Rally as an essential operator tool
> Regular benchmarking of OpenStack deployments is a best practice for
> cloud operators, and a feature that Rally provides. With a bit of a
> stretch, we could consider that benchmarking is essential to the
> completion of the OpenStack project mission. That program could one day
> evolve to include more such "operations best practices" tools. In
> addition to the slight stretch already mentioned, one concern here is
> that we still want to have performance testing in QA (which is clearly
> essential to the production of "OpenStack"). Letting Rally primarily be
> an operational tool might make that outcome more difficult.
>

Pros: Great start to an operator program for tooling.

Considerations:
- Would have to ensure Rally is what we want "first" as getting to be PTL
since you are first to propose seems to be the model.
- Is benchmark testing and SLA-meeting a best first tool? Or monitoring? Or
deployment? Or some other tools?
- Is this program what operators want?


>
> 3. Let Rally be a product on top of OpenStack
> The last option is to not have Rally in any program, and not consider it
> *essential* to the production of the "OpenStack" integrated release or
> the completion of the OpenStack project mission. Rally can happily exist
> as an operator tool on top of OpenStack. It is built as a monolithic
> product: that approach works very well for external complementary
> solutions... Also, being more integrated in OpenStack or part of the
> OpenStack programs might come at a cost (slicing some functionality out
> of rally to make it more a framework and less a product) that might not
> be what its authors want.
>
>
Pros: Rally can set the standards for this path and lead on this pioneer
trail.

Considerations:
- Would this tool be applied against continuously-deployed clouds?
- Is there any preference or advantage to be outside of integrated releases?
- Will people believe it's official?

Hopefully that summarizes how I'm looking at this application --
Anne



> Let's explore each option to see which ones are viable, and the pros and
> cons of each.
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-08 Thread Neependra Kumar Khare

- Original Message -
From: "Thierry Carrez" 
To: "OpenStack Development Mailing List" 
Sent: Wednesday, August 6, 2014 4:00:35 PM
Subject: [openstack-dev] Which program for Rally


1. Rally as an essential QA tool
Performance testing (and especially performance regression testing) is
an essential QA function, and a feature that Rally provides. If the QA
team is happy to use Rally to fill that function, then Rally can
obviously be adopted by the (already-existing) QA program. That said,
that would put Rally under the authority of the QA PTL, and that raises
a few questions due to the current architecture of Rally, which is more
product-oriented. There needs to be further discussion between the QA
core team and the Rally team to see how that could work and if that
option would be acceptable for both sides.


I want to share a use case of Rally for performance benchmarking.
I use Rally to benchmark Keystone performance. I can easily get results
for comparison between different configurations, OpenStack distributions,
etc. Here is a sample result:

https://github.com/stackforge/rally/blob/master/doc/user_stories/keystone/authenticate.rst

IMO Rally can play an essential role in performance regression testing.
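
For illustration, here is a minimal, self-contained sketch (not Rally code;
the authenticate() stub below is just a stand-in for a real Keystone token
request) of the kind of per-iteration timing and summary numbers such a
benchmark run reports:

    import random
    import time


    def authenticate():
        # Stand-in for a real Keystone authentication call; here we only
        # sleep for a random amount of time to simulate request latency.
        time.sleep(random.uniform(0.05, 0.15))


    def run_benchmark(iterations=20):
        # Time each iteration and return simple summary statistics.
        durations = []
        for _ in range(iterations):
            start = time.time()
            authenticate()
            durations.append(time.time() - start)
        durations.sort()
        return {
            "min": durations[0],
            "avg": sum(durations) / len(durations),
            "max": durations[-1],
            # Rough 90th percentile by index into the sorted durations.
            "p90": durations[int(0.9 * (len(durations) - 1))],
        }


    if __name__ == "__main__":
        for name, value in sorted(run_benchmark().items()):
            print("%s: %.3fs" % (name, value))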


Regards,
Neependra Khare
Performance Engineering @ Red Hat


2. Rally as an essential operator tool
Regular benchmarking of OpenStack deployments is a best practice for
cloud operators, and a feature that Rally provides. With a bit of a
stretch, we could consider that benchmarking is essential to the
completion of the OpenStack project mission. That program could one day
evolve to include more such "operations best practices" tools. In
addition to the slight stretch already mentioned, one concern here is
that we still want to have performance testing in QA (which is clearly
essential to the production of "OpenStack"). Letting Rally primarily be
an operational tool might make that outcome more difficult.

3. Let Rally be a product on top of OpenStack
The last option is to not have Rally in any program, and not consider it
*essential* to the production of the "OpenStack" integrated release or
the completion of the OpenStack project mission. Rally can happily exist
as an operator tool on top of OpenStack. It is built as a monolithic
product: that approach works very well for external complementary
solutions... Also, being more integrated in OpenStack or part of the
OpenStack programs might come at a cost (slicing some functionality out
of rally to make it more a framework and less a product) that might not
be what its authors want.

Let's explore each option to see which ones are viable, and the pros and
cons of each.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 9:02 AM, John Griffith 
wrote:

>
>
>
> On Thu, Aug 7, 2014 at 6:20 AM, Sean Dague  wrote:
>
>> On 08/07/2014 07:58 AM, Angus Salkeld wrote:
>> > On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
>> >> I have to agree with Duncan here.  I also don't know if I fully
>> >> understand the limit in options.  Stress test seems like it
>> >> could/should be different (again overlap isn't a horrible thing) and I
>> >> don't see it as siphoning off resources so not sure of the issue.
>> >>  We've become quite wrapped up in projects, programs and the like
>> >> lately and it seems to hinder forward progress more than anything
>> >> else.
>> > h
>> >>
>> >> I'm also not convinced that Tempest is where all things belong, in
>> >> fact I've been thinking more and more that a good bit of what Tempest
>> >> does today should fall more on the responsibility of the projects
>> >> themselves.  For example functional testing of features etc, ideally
>> >> I'd love to have more of that fall on the projects and their
>> >> respective teams.  That might even be something as simple to start as
>> >> saying "if you contribute a new feature, you have to also provide a
>> >> link to a contribution to the Tempest test-suite that checks it".
>> >>  Sort of like we do for unit tests, cross-project tracking is
>> >> difficult of course, but it's a start.  The other idea is maybe
>> >> functional test harnesses live in their respective projects.
>> >>
>> >
>> > Couldn't we reduce the scope of tempest (and rally) : make tempest the
>> > API verification and rally the scenario/performance tester? Make each
>> > tool do less, but better. My point being to split the projects by
>> > functionality so there is less need to share code and stomp on each
>> > other's toes.
>>
>> Who is going to propose the split? Who is going to manage the
>> coordination of the split? What happens when there is disagreement about
>> the location of something like booting and listing a server -
>>
>> https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L44-L64
>>
>> Because today we've got fundamental disagreements between the teams on
>> scope, long standing (as seen in these threads), so this won't
>> organically solve itself.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Last paragraph regarding the "split" wasn't mine, but...  I think it's
> good for people to express ideas on the ML like this.  It may not be
> feasible, but I think the more people you have thinking about how to move
> forward and expressing their ideas (even if they don't work) is a good and
> healthy thing.
>
> As far as proposing a split, there's obviously a ton of detail that needs
> to be considered here and honestly it may just be a horrible idea right
> from the start.  That being said, to answer some of your questions, quite
> honestly IMO these are some of the things that I think it would be good for
> the TC to take an active role in.  Seems reasonable to have bodies like the
> TC work on governing and laying out technical process and direction.
>
> Anyway, I think the bottom line is that better collaboration is something
> we need to work on.  That in and of itself would have likely thwarted
> this thread to begin with (and I think that was one of the key points
> it tried to make).
>
> As far as the question at hand of Rally... I would surely hope that
> there's a way for the QA and Rally teams to actually collaborate and work
> together on this.  I also understand completely that lack of collaboration
> is probably what got us to this point in the first place.  It just seems to
> me that there's a middle ground somewhere but it's going to require some
> give and take from both sides.
>
> By the way, personally I feel that the movement over the last year that
> everybody needs to have their own program or project is a big problem.  The
> other thing that nobody wants to consider is why not just put some code on
> github independent of OpenStack?  Contribute things to the projects and
> build cool things for OpenStack outside of OpenStack.  Make sense?
>
> Questions about functional test responsibilities for projects etc should
> probably be a future discussion if there's interest and if it makes any
> sense at all (ie summit topic?).
>
> Just a note, I don't mean for the above to point fingers or even remotely
suggest that I think I have all the answers etc.  I just would like to spur
some serious thought on how we scale and grow going forward, and that
includes Tempest and its role.

Currently I have zero complaints (really... zero) about Tempest, the QA or
Infra teams.  I do see more snags like the one we currently have in our
future though, and I think we need to come up with some way of 

Re: [openstack-dev] Which program for Rally

2014-08-07 Thread John Griffith
On Thu, Aug 7, 2014 at 6:20 AM, Sean Dague  wrote:

> On 08/07/2014 07:58 AM, Angus Salkeld wrote:
> > On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
> >> I have to agree with Duncan here.  I also don't know if I fully
> >> understand the limit in options.  Stress test seems like it
> >> could/should be different (again overlap isn't a horrible thing) and I
> >> don't see it as siphoning off resources so not sure of the issue.
> >>  We've become quite wrapped up in projects, programs and the like
> >> lately and it seems to hinder forward progress more than anything
> >> else.
> > h
> >>
> >> I'm also not convinced that Tempest is where all things belong, in
> >> fact I've been thinking more and more that a good bit of what Tempest
> >> does today should fall more on the responsibility of the projects
> >> themselves.  For example functional testing of features etc, ideally
> >> I'd love to have more of that fall on the projects and their
> >> respective teams.  That might even be something as simple to start as
> >> saying "if you contribute a new feature, you have to also provide a
> >> link to a contribution to the Tempest test-suite that checks it".
> >>  Sort of like we do for unit tests, cross-project tracking is
> >> difficult of course, but it's a start.  The other idea is maybe
> >> functional test harnesses live in their respective projects.
> >>
> >
> > Couldn't we reduce the scope of tempest (and rally) : make tempest the
> > API verification and rally the scenario/performance tester? Make each
> > tool do less, but better. My point being to split the projects by
> > functionality so there is less need to share code and stomp on each
> > other's toes.
>
> Who is going to propose the split? Who is going to manage the
> coordination of the split? What happens when there is disagreement about
> the location of something like booting and listing a server -
>
> https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L44-L64
>
> Because today we've got fundamental disagreements between the teams on
> scope, long standing (as seen in these threads), so this won't
> organically solve itself.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Last paragraph regarding the "split" wasn't mine, but...  I think it's
good for people to express ideas on the ML like this.  It may not be
feasible, but I think the more people you have thinking about how to move
forward and expressing their ideas (even if they don't work) is a good and
healthy thing.

As far as proposing a split, there's obviously a ton of detail that needs
to be considered here and honestly it may just be a horrible idea right
from the start.  That being said, to answer some of your questions, quite
honestly IMO these are some of the things that I think it would be good for
the TC to take an active role in.  Seems reasonable to have bodies like the
TC work on governing and laying out technical process and direction.

Anyway, I think the bottom line is that better collaboration is something
we need to work on.  That in and of itself would have likely thwarted
this thread to begin with (and I think that was one of the key points
it tried to make).

As far as the question at hand of Rally... I would surely hope that there's
a way for the QA and Rally teams to actually collaborate and work together
on this.  I also understand completely that lack of collaboration is
probably what got us to this point in the first place.  It just seems to me
that there's a middle ground somewhere but it's going to require some give
and take from both sides.

By the way, personally I feel that the movement over the last year that
everybody needs to have their own program or project is a big problem.  The
other thing that nobody wants to consider is why not just put some code on
github independent of OpenStack?  Contribute things to the projects and
build cool things for OpenStack outside of OpenStack.  Make sense?

Questions about functional test responsibilities for projects etc should
probably be a future discussion if there's interest and if it makes any
sense at all (ie summit topic?).
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Sean Dague
On 08/07/2014 07:31 AM, Rohan Kanade wrote:
> Date: Wed, 06 Aug 2014 09:44:12 -0400
> From: Sean Dague <s...@dague.net>
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Which program for Rally
> Message-ID: <53e2312c.8000...@dague.net>
> Content-Type: text/plain; charset=utf-8
> 
> Like the fact that right now the rally team is proposing gate jobs which
> have some overlap to the existing largeops jobs. Did they start a
> conversation about it? Nope. They just went off to do their thing
> instead. https://review.openstack.org/#/c/112251/
> 
> 
> Hi Sean,
> 
> Appreciate your analysis
> Here is a comparison of the tempest largeops job and similar in Rally.
> 
> What large-ops job provides as of now:
> Running hard-coded, pre-configured benchmarks (in the gates) that are taken
> from the tempest repo,
> e.g. "run 100 VMs in one request". The end result is a +1 or -1, which
> doesn't really reflect much in terms of performance stats and performance
> regressions.

That's true, it's a very coarse grained benchmark. It's specifically
that way to catch and block regressions.

> What Rally job provides:
> (example in glance:
> https://github.com/openstack/glance/tree/master/rally-scenarios)
>  
> 1) Projects can specify which benchmarks to run:
> https://github.com/openstack/glance/blob/master/rally-scenarios/glance.yaml
>  
> 2) Projects can specify passing conditions and inputs for benchmarks
> (e.g. no benchmark iteration failed and the average duration of an
> iteration is less than X)
> https://github.com/stackforge/rally/blob/master/rally-scenarios/rally.yaml#L11-L12
>  
>  
> 3) Projects can create any number of benchmarks inside their source tree
> (so they don't need to merge anything to rally)
> https://github.com/openstack/glance/tree/master/rally-scenarios/plugins
>  
> 4) Users are getting automated reports of all benchmarks:
> http://logs.openstack.org/81/112181/2/check/gate-rally-dsvm-rally/78b1146/rally-plot/results.html.gz

This is pretty, but I'm not sure how it provides me with information
about whether or not that was a good change to merge. There is a reason
that we reduced this to a decision in largeops to block a change when we
knew the regression exceeded a threshold we were comfortable with in a
very specific context.

> 5) Users can easily install Rally (with this script
> https://github.com/stackforge/rally/blob/master/install_rally.sh)
> and test benchmark locally, using the same benchmark configuration as in
> gate.
>  
>  6) Rally jobs (benchmarks) give you the capability to check for SLAs in
> your gates themselves, which helps immensely to gauge the impact of your
> proposed change on the current code in terms of performance and SLA.
>  
> Basically, with the Rally job one can benchmark changes and compare them
> with master in the gates, using the approach below:
>  
> 1) Put up patch set 1 that changes the Rally benchmark configuration and
> probably adds some benchmarks.
> Get base results.
>  
> 2) Put up patch set 2 that includes point 1 plus the changes that fix the
> issue.
> Get new results.
>  
> 3) Compare the results, and if the new results are better, push patch set 3
> that removes the changes in the task, and merge it.

How does Rally account for node variability in the cloud (both between
nodes, between clouds, between times of day)?

How does it provide the user with comparison between local runs and gate
runs to know if things got better or worse?

Getting numbers is step 1, but making those numbers something which
actually can be believed to be impacted directly by your change, vs.
changed by unrelated items, is something which is really important.

We've had turbo-hipster providing feedback in the gate for a while on
benchmark data for the db migrations, and its false negative rate is
actually quite high for many of these same reasons. Adding more false
fails in the gate is something I think we should avoid.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Sean Dague
On 08/07/2014 07:58 AM, Angus Salkeld wrote:
> On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
>> I have to agree with Duncan here.  I also don't know if I fully
>> understand the limit in options.  Stress test seems like it
>> could/should be different (again overlap isn't a horrible thing) and I
>> don't see it as siphoning off resources so not sure of the issue.
>>  We've become quite wrapped up in projects, programs and the like
>> lately and it seems to hinder forward progress more than anything
>> else.
> h
>>
>> I'm also not convinced that Tempest is where all things belong, in
>> fact I've been thinking more and more that a good bit of what Tempest
>> does today should fall more on the responsibility of the projects
>> themselves.  For example functional testing of features etc, ideally
>> I'd love to have more of that fall on the projects and their
>> respective teams.  That might even be something as simple to start as
>> saying "if you contribute a new feature, you have to also provide a
>> link to a contribution to the Tempest test-suite that checks it".
>>  Sort of like we do for unit tests, cross-project tracking is
>> difficult of course, but it's a start.  The other idea is maybe
>> functional test harnesses live in their respective projects.
>>
> 
> Couldn't we reduce the scope of tempest (and rally) : make tempest the
> API verification and rally the scenario/performance tester? Make each
> tool do less, but better. My point being to split the projects by
> functionality so there is less need to share code and stomp on each
> other's toes.

Who is going to propose the split? Who is going to manage the
coordination of the split? What happens when there is disagreement about
the location of something like booting and listing a server -
https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L44-L64

Because today we've got fundamental disagreements between the teams on
scope, long standing (as seen in these threads), so this won't
organically solve itself.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Sean Dague
On 08/06/2014 05:48 PM, John Griffith wrote:
> I have to agree with Duncan here.  I also don't know if I fully
> understand the limit in options.  Stress test seems like it could/should
> be different (again overlap isn't a horrible thing) and I don't see it
> as siphoning off resources so not sure of the issue.  We've become quite
> wrapped up in projects, programs and the like lately and it seems to
> hinder forward progress more than anything else.

Today we have 2 debug domains that developers have to deal with when
tests fail:

 * project level domain (unit tests)
 * cross project (Tempest)

Even 2 debug domains is considered too much for most people, as we get
people that understand one or another, and just throw up their hands
when they are presented with a failure outside their familiar debug domain.

So if Rally was just taken in as a whole, as it exists now, it would
create a 3rd debug domain. It would include running a bunch of tests
that we already run in the cross-project and project-level domains, yet
again, written in a different way. And when it fails, this will be another
debug domain.

I think a 3rd debug domain isn't going to help any of the OpenStack
developers or Operators.

Moving the test payload into Tempest hopefully means getting a more
consistent model for all these tests so when things fail, there is some
common pattern people are familiar with to get to the bottom of things.
As opaque as Tempest runs feel to people, there has been substantial
effort in providing first failure dumps to get as much information about
what's wrong as possible. I agree things could be better, but you will
be starting that work all over from scratch with Rally again.

It also means we could potentially take advantage of the 20,000 Tempest
runs we do every week. We're actually generating a ton of data now that
is not being used for analysis. We're at a point in Tempest development
where, to make some data-based decisions on which tests need extra
attention and which probably need to get dropped, we need this anyway.

> I'm also not convinced that Tempest is where all things belong, in fact
> I've been thinking more and more that a good bit of what Tempest does
> today should fall more on the responsibility of the projects themselves.
>  For example functional testing of features etc, ideally I'd love to
> have more of that fall on the projects and their respective teams.  That
> might even be something as simple to start as saying "if you contribute
> a new feature, you have to also provide a link to a contribution to the
> Tempest test-suite that checks it".  Sort of like we do for unit tests,
> cross-project tracking is difficult of course, but it's a start.  The
> other idea is maybe functional test harnesses live in their respective
> projects.
> 
> Honestly I think who better to write tests for a project than the folks
> building and contributing to the project.  At some point IMO the QA team
> isn't going to scale.  I wonder if maybe we should be thinking about
> proposals for delineating responsibility and goals in terms of
> functional testing?

I 100% agree with getting some of Tempest's existing content out and into
functional tests. Honestly, I imagine a Tempest that's half the number of
tests a year away. Mostly it's going to be about ensuring that projects have
the coverage before we delete the safety nets.

And I 100% agree on getting a better idea of functional boundaries.
But I think that's something we need some practical experience with first.
Setting a policy without figuring out what works in practice is
something I expect wouldn't work so well. My expectation is this is
something we're going to take a few stabs at post-J3, and bring to the
summit for discussion.

...

So the question is do we think there should be 2 or 3 debug domains for
developers and operators on tests? My feeling is 2 puts us in a much
better place as a community.

The question is should Tempest provide data analysis on its test runs
or should that be done in completely another program. Doing so in
another program means that all the deficiencies of the existing data get
completely ignored (like variability per run, interactions between
tests, between tests and periodic jobs, difficulty in time accounting of
async ops) to produce some pretty pictures that miss the point, because
they aren't measuring a thing that's real.

And the final question is should Tempest have an easier to understand
starting point than a tox command, like an actual CLI for running
things. I think it's probably clear that it should. It would probably
actually make Tempest less big and scary for people.

Because I do think 'do one job and do it well' is completely consistent
with 'run tests across OpenStack projects and present that data in a
consumable way'.

The question basically is whether it's believed that collecting timing
analysis of test results is a separate concern from collecting
correctness results of test results. The Rally team would argue that
they are

Re: [openstack-dev] Which program for Rally

2014-08-07 Thread Angus Salkeld
On Wed, 2014-08-06 at 15:48 -0600, John Griffith wrote:
> I have to agree with Duncan here.  I also don't know if I fully
> understand the limit in options.  Stress test seems like it
> could/should be different (again overlap isn't a horrible thing) and I
> don't see it as siphoning off resources so not sure of the issue.
>  We've become quite wrapped up in projects, programs and the like
> lately and it seems to hinder forward progress more than anything
> else.
h
> 
> I'm also not convinced that Tempest is where all things belong, in
> fact I've been thinking more and more that a good bit of what Tempest
> does today should fall more on the responsibility of the projects
> themselves.  For example functional testing of features etc, ideally
> I'd love to have more of that fall on the projects and their
> respective teams.  That might even be something as simple to start as
> saying "if you contribute a new feature, you have to also provide a
> link to a contribution to the Tempest test-suite that checks it".
>  Sort of like we do for unit tests, cross-project tracking is
> difficult of course, but it's a start.  The other idea is maybe
> functional test harnesses live in their respective projects.
> 

Couldn't we reduce the scope of tempest (and rally) : make tempest the
API verification and rally the scenario/performance tester? Make each
tool do less, but better. My point being to split the projects by
functionality so there is less need to share code and stomp on each
other's toes.

> 
> 
> Honestly I think who better to write tests for a project than the
> folks building and contributing to the project.  At some point IMO the
> QA team isn't going to scale.  I wonder if maybe we should be thinking
> about proposals for delineating responsibility and goals in terms of
> functional testing?
> 

This is planned, I believe.

-Angus

> 
> 
> 
> 
> 
> On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas
>  wrote:
> I'm not following here - you complain about rally being
> monolithic,
> then suggest that parts of it should be baked into tempest - a
> tool
> that is already huge and difficult to get into. I'd rather see
> tools
> that do one thing well and some overlap than one tool to rule
> them
> all.

+1

> 
> On 6 August 2014 14:44, Sean Dague  wrote:
> > On 08/06/2014 09:11 AM, Russell Bryant wrote:
> >> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
> >>> Hi everyone,
> >>>
> >>> At the TC meeting yesterday we discussed Rally program
> request and
> >>> incubation request. We quickly dismissed the incubation
> request, as
> >>> Rally appears to be able to live happily on top of
> OpenStack and would
> >>> benefit from having a release cycle decoupled from the
> OpenStack
> >>> "integrated release".
> >>>
> >>> That leaves the question of the program. OpenStack
> programs are created
> >>> by the Technical Committee, to bless existing efforts and
> teams that are
> >>> considered *essential* to the production of the
> "OpenStack" integrated
> >>> release and the completion of the OpenStack project
> mission. There are 3
> >>> ways to look at Rally and official programs at this point:
> >>>
> >>> 1. Rally as an essential QA tool
> >>> Performance testing (and especially performance regression
> testing) is
> >>> an essential QA function, and a feature that Rally
> provides. If the QA
> >>> team is happy to use Rally to fill that function, then
> Rally can
> >>> obviously be adopted by the (already-existing) QA program.
> That said,
> >>> that would put Rally under the authority of the QA PTL,
> and that raises
> >>> a few questions due to the current architecture of Rally,
> which is more
> >>> product-oriented. There needs to be further discussion
> between the QA
> >>> core team and the Rally team to see how that could work
> and if that
> >>> option would be acceptable for both sides.
> >>>
> >>> 2. Rally as an essential operator tool
> >>> Regular benchmarking of OpenStack deployments is a best
> practice for
> >>> cloud operators, and a feature that Rally provides. With a
> bit of a
> >>> stretch, we could consider that benchmarking is essential
> to the
> >>> completion of the OpenStack project mission. That program
> could one day
> >>> evolve to include more such "operations best practices"
> tools. In
> >>> addition to the slight stretch already mentioned, one
> concern here is
> >>> that we still want to have performance testing in QA
> (which is clearly
> 

[openstack-dev] Which program for Rally

2014-08-07 Thread Rohan Kanade
>
> Date: Wed, 06 Aug 2014 09:44:12 -0400
> From: Sean Dague 
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Which program for Rally
> Message-ID: <53e2312c.8000...@dague.net>
> Content-Type: text/plain; charset=utf-8
>
> Like the fact that right now the rally team is proposing gate jobs which
> have some overlap to the existing largeops jobs. Did they start a
> conversation about it? Nope. They just went off to do their thing
> instead. https://review.openstack.org/#/c/112251/
>

Hi Sean,

Appreciate your analysis
Here is a comparison of the tempest largeops job and similar in Rally.

What large-ops job provides as of now:
Running hard-coded, pre-configured benchmarks (in the gates) that are taken
from the tempest repo,
e.g. "run 100 VMs in one request". The end result is a +1 or -1, which
doesn't really reflect much in terms of performance stats and performance
regressions.


What Rally job provides:
(example in glance:
https://github.com/openstack/glance/tree/master/rally-scenarios)

1) Projects can specify which benchmarks to run:
https://github.com/openstack/glance/blob/master/rally-scenarios/glance.yaml

2) Projects can specify passing conditions and inputs for benchmarks
(e.g. no benchmark iteration failed and the average duration of an
iteration is less than X)
https://github.com/stackforge/rally/blob/master/rally-scenarios/rally.yaml#L11-L12


3) Projects can create any number of benchmarks inside their source tree
(so they don't need to merge anything to rally)
https://github.com/openstack/glance/tree/master/rally-scenarios/plugins

4) Users are getting automated reports of all benchmarks:
http://logs.openstack.org/81/112181/2/check/gate-rally-dsvm-rally/78b1146/rally-plot/results.html.gz

5) Users can easily install Rally (with this script
https://github.com/stackforge/rally/blob/master/install_rally.sh)
and test benchmark locally, using the same benchmark configuration as in
gate.

 6) Rally jobs (benchmarks) give you the capability to check for SLAs in your
gates themselves, which helps immensely to gauge the impact of your proposed
change on the current code in terms of performance and SLA.
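
To make points 2 and 6 above concrete, here is a rough sketch (not Rally's
actual SLA code) of the kind of pass/fail check they describe: a run passes
only if no iteration failed and the average duration stays below a
threshold.

    def check_sla(iterations, max_avg_duration):
        # iterations: list of (succeeded, duration_in_seconds) tuples.
        if not iterations:
            return False, "no iterations were run"
        failed = [d for ok, d in iterations if not ok]
        if failed:
            return False, "%d iteration(s) failed" % len(failed)
        avg = sum(d for _, d in iterations) / len(iterations)
        if avg > max_avg_duration:
            return False, "average duration %.2fs exceeds %.2fs" % (
                avg, max_avg_duration)
        return True, "average duration %.2fs is within %.2fs" % (
            avg, max_avg_duration)


    # A gate job would post +1 only when the SLA holds.
    print(check_sla([(True, 1.2), (True, 1.4), (True, 1.1)], 2.0))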

Basically, with the Rally job one can benchmark changes and compare them with
master in the gates, using the approach below:

1) Put up patch set 1 that changes the Rally benchmark configuration and
probably adds some benchmarks.
Get base results.

2) Put up patch set 2 that includes point 1 plus the changes that fix the
issue.
Get new results.

3) Compare the results, and if the new results are better, push patch set 3
that removes the changes in the task, and merge it.
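
A rough sketch (illustrative only, not the actual Rally report comparison) of
step 3: compare the summary of the "base" run against the "new" run and
report whether the change improved the benchmark.

    def compare_runs(base_durations, new_durations, tolerance=0.05):
        # Compare average durations; a relative change beyond the tolerance
        # counts as an improvement or a regression.
        base_avg = sum(base_durations) / len(base_durations)
        new_avg = sum(new_durations) / len(new_durations)
        delta = (new_avg - base_avg) / base_avg
        if delta < -tolerance:
            verdict = "improved"
        elif delta > tolerance:
            verdict = "regressed"
        else:
            verdict = "unchanged (within %.0f%% tolerance)" % (tolerance * 100)
        return base_avg, new_avg, delta, verdict


    base_avg, new_avg, delta, verdict = compare_runs(
        [2.0, 2.1, 1.9], [1.6, 1.7, 1.5])
    print("base %.2fs -> new %.2fs (%+.1f%%): %s" % (
        base_avg, new_avg, delta * 100, verdict))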



>
> So now we're going to run 2 jobs that do very similar things, with
> different teams adjusting the test loads. Which I think is basically
> madness.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>

Rally jobs allow every project to choose which benchmarks to have (as
plugins in their source tree) and run in the gates.

Rally is trying to be as open as possible by helping projects define and
set their own benchmarks in their gates which they have full control over.

I think this is a very important point, as it simplifies a lot of the work on
performance issues. Hopefully we can discuss these issues on IRC or
someplace so that we are all on the same page in terms of the details of what
Rally does and what it doesn't do.

Rohan Kanade
Senior Software Engineer, Red Hat
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-07 Thread marc

Hi John,

see below.

Quoting John Griffith:


I have to agree with Duncan here.  I also don't know if I fully understand
the limit in options.  Stress test seems like it could/should be different


This is correct; Rally and the Tempest stress tests have a different focus. The
stress test framework doesn't do any measurement of performance. This was
done on purpose, since it is quite hard to measure performance with
asynchronous requests all over the place and with polling used to measure actions.
Anyway, I see that Rally already has an integration to run Tempest test
cases as a load profile, but it doesn't have a Jenkins job like the stress
tests have. In general I think in that area we could benefit from working
closer together and deciding together whether it makes sense to move this to
Tempest or leave it completely inside of Rally.

[snip]

Honestly I think who better to write tests for a project than the folks
building and contributing to the project.  At some point IMO the QA team
isn't going to scale.  I wonder if maybe we should be thinking about
proposals for delineating responsibility and goals in terms of functional
testing?


I think we are a bit off-topic now ;) Anyway I do think that moving test
cases closer to the project is a good idea.


Regards
Marc



On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas 
wrote:


I'm not following here - you complain about rally being monolithic,
then suggest that parts of it should be baked into tempest - a tool
that is already huge and difficult to get into. I'd rather see tools
that do one thing well and some overlap than one tool to rule them
all.

On 6 August 2014 14:44, Sean Dague  wrote:
> On 08/06/2014 09:11 AM, Russell Bryant wrote:
>> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
>>> Hi everyone,
>>>
>>> At the TC meeting yesterday we discussed Rally program request and
>>> incubation request. We quickly dismissed the incubation request, as
>>> Rally appears to be able to live happily on top of OpenStack and would
>>> benefit from having a release cycle decoupled from the OpenStack
>>> "integrated release".
>>>
>>> That leaves the question of the program. OpenStack programs are created
>>> by the Technical Committee, to bless existing efforts and teams that
are
>>> considered *essential* to the production of the "OpenStack" integrated
>>> release and the completion of the OpenStack project mission. There are
3
>>> ways to look at Rally and official programs at this point:
>>>
>>> 1. Rally as an essential QA tool
>>> Performance testing (and especially performance regression testing) is
>>> an essential QA function, and a feature that Rally provides. If the QA
>>> team is happy to use Rally to fill that function, then Rally can
>>> obviously be adopted by the (already-existing) QA program. That said,
>>> that would put Rally under the authority of the QA PTL, and that raises
>>> a few questions due to the current architecture of Rally, which is more
>>> product-oriented. There needs to be further discussion between the QA
>>> core team and the Rally team to see how that could work and if that
>>> option would be acceptable for both sides.
>>>
>>> 2. Rally as an essential operator tool
>>> Regular benchmarking of OpenStack deployments is a best practice for
>>> cloud operators, and a feature that Rally provides. With a bit of a
>>> stretch, we could consider that benchmarking is essential to the
>>> completion of the OpenStack project mission. That program could one day
>>> evolve to include more such "operations best practices" tools. In
>>> addition to the slight stretch already mentioned, one concern here is
>>> that we still want to have performance testing in QA (which is clearly
>>> essential to the production of "OpenStack"). Letting Rally primarily be
>>> an operational tool might make that outcome more difficult.
>>>
>>> 3. Let Rally be a product on top of OpenStack
>>> The last option is to not have Rally in any program, and not consider
it
>>> *essential* to the production of the "OpenStack" integrated release or
>>> the completion of the OpenStack project mission. Rally can happily
exist
>>> as an operator tool on top of OpenStack. It is built as a monolithic
>>> product: that approach works very well for external complementary
>>> solutions... Also, being more integrated in OpenStack or part of the
>>> OpenStack programs might come at a cost (slicing some functionality out
>>> of rally to make it more a framework and less a product) that might not
>>> be what its authors want.
>>>
>>> Let's explore each option to see which ones are viable, and the pros
and
>>> cons of each.
>>
>> My feeling right now is that Rally is trying to accomplish too much at
>> the start (both #1 and #2).  I would rather see the project focus on
>> doing one of them as best as it can before increasing scope.
>>
>> It's my opinion that #1 is the most important thing that Rally can be
>> doing to help ensure the success of OpenStack, so I'd like to explore
>> the "Rally as a QA tool" in m

Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Swapnil Kulkarni
I agree with Duncan and John here. I am not as old a contributor in OpenStack
as most of the people commenting here are, but I think we have done this
right throughout the OpenStack lifecycle: at the start we only had Nova, and
we could have always said "hey, let's have everything in Nova", but we went
ahead with a modularized approach, having specific projects concentrate on
specific needs for OpenStack as a whole. If we have a project that
concentrates on performance and benchmarking with the help of other tools,
we should encourage having such a tool.

Regarding having Rally integrated in OpenStack release cycles, I think it's
better to have this as an integrated tool in OpenStack which operators can
use at their deployments for benchmarking and performance analysis. It
could be very similar to the way branchless Tempest is integrated into the
OpenStack release.

Best Regards,
Swapnil Kulkarni
irc : coolsvap
cools...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap
*"It's better to SHARE"*


On Thu, Aug 7, 2014 at 6:38 AM, Yingjun Li  wrote:

> From a user's perspective I do think Rally is more suitable for a
> production-ready cloud, and that seems to be where it is focused. It's very
> easy to evaluate whether the performance of the cloud is better after we
> adjust some configs or do some other tuning. It also provides SLA support,
> which may not be so powerful currently, but it's a good start. So I think
> Rally is good enough to be in a separate program.
>
> I totally agree that Tempest shouldn't try to cover everything; keeping
> things simple makes them better.
>
>
> On Aug 7, 2014, at 5:48, John Griffith 
> wrote:
>
> I have to agree with Duncan here.  I also don't know if I fully understand
> the limit in options.  Stress test seems like it could/should be different
> (again overlap isn't a horrible thing) and I don't see it as siphoning off
> resources so not sure of the issue.  We've become quite wrapped up in
> projects, programs and the like lately and it seems to hinder forward
> progress more than anything else.
>
> I'm also not convinced that Tempest is where all things belong, in fact
> I've been thinking more and more that a good bit of what Tempest does today
> should fall more on the responsibility of the projects themselves.  For
> example functional testing of features etc, ideally I'd love to have more
> of that fall on the projects and their respective teams.  That might even
> be something as simple to start as saying "if you contribute a new feature,
> you have to also provide a link to a contribution to the Tempest test-suite
> that checks it".  Sort of like we do for unit tests, cross-project tracking
> is difficult of course, but it's a start.  The other idea is maybe
> functional test harnesses live in their respective projects.
>
> Honestly I think who better to write tests for a project than the folks
> building and contributing to the project.  At some point IMO the QA team
> isn't going to scale.  I wonder if maybe we should be thinking about
> proposals for delineating responsibility and goals in terms of functional
> testing?
>
>
>
>
> On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas 
> wrote:
>
>> I'm not following here - you complain about rally being monolithic,
>> then suggest that parts of it should be baked into tempest - a tool
>> that is already huge and difficult to get into. I'd rather see tools
>> that do one thing well and some overlap than one tool to rule them
>> all.
>>
>> On 6 August 2014 14:44, Sean Dague  wrote:
>> > On 08/06/2014 09:11 AM, Russell Bryant wrote:
>> >> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
>> >>> Hi everyone,
>> >>>
>> >>> At the TC meeting yesterday we discussed Rally program request and
>> >>> incubation request. We quickly dismissed the incubation request, as
>> >>> Rally appears to be able to live happily on top of OpenStack and would
>> >>> benefit from having a release cycle decoupled from the OpenStack
>> >>> "integrated release".
>> >>>
>> >>> That leaves the question of the program. OpenStack programs are created
>> >>> by the Technical Committee, to bless existing efforts and teams that are
>> >>> considered *essential* to the production of the "OpenStack" integrated
>> >>> release and the completion of the OpenStack project mission. There are 3
>> >>> ways to look at Rally and official programs at this point:
>> >>>
>> >>> 1. Rally as an essential QA tool
>> >>> Performance testing (and especially performance regression testing) is
>> >>> an essential QA function, and a feature that Rally provides. If the QA
>> >>> team is happy to use Rally to fill that function, then Rally can
>> >>> obviously be adopted by the (already-existing) QA program. That said,
>> >>> that would put Rally under the authority of the QA PTL, and that raises
>> >>> a few questions due to the current architecture of Rally, which is more
>> >>> product-oriented. There needs to be further discussion between the QA
>> >>> core team and the Rally team to see how that could work and if that
>> >>> option would be acceptable for both sides.

Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Yingjun Li
From a user's perspective I do think Rally is more suitable for a
production-ready cloud, and that seems to be where it is focused. It's very
easy to evaluate whether the performance of the cloud is better after we
adjust some configs or do some other tuning. It also provides SLA support,
which may not be so powerful currently, but it's a good start. So I think
Rally is good enough to be in a separate program.

I totally agree that Tempest shouldn't try to cover everything; keeping
things simple makes them better.

On Aug 7, 2014, at 5:48, John Griffith  wrote:

> I have to agree with Duncan here.  I also don't know if I fully understand 
> the limit in options.  Stress test seems like it could/should be different 
> (again overlap isn't a horrible thing) and I don't see it as siphoning off 
> resources so not sure of the issue.  We've become quite wrapped up in 
> projects, programs and the like lately and it seems to hinder forward 
> progress more than anything else.
> 
> I'm also not convinced that Tempest is where all things belong, in fact I've 
> been thinking more and more that a good bit of what Tempest does today should 
> fall more on the responsibility of the projects themselves.  For example 
> functional testing of features etc, ideally I'd love to have more of that 
> fall on the projects and their respective teams.  That might even be 
> something as simple to start as saying "if you contribute a new feature, you 
> have to also provide a link to a contribution to the Tempest test-suite that 
> checks it".  Sort of like we do for unit tests, cross-project tracking is 
> difficult of course, but it's a start.  The other idea is maybe functional 
> test harnesses live in their respective projects.
> 
> Honestly I think who better to write tests for a project than the folks 
> building and contributing to the project.  At some point IMO the QA team 
> isn't going to scale.  I wonder if maybe we should be thinking about 
> proposals for delineating responsibility and goals in terms of functional 
> testing?
> 
> 
> 
> 
> On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas  
> wrote:
> I'm not following here - you complain about rally being monolithic,
> then suggest that parts of it should be baked into tempest - a tool
> that is already huge and difficult to get into. I'd rather see tools
> that do one thing well and some overlap than one tool to rule them
> all.
> 
> On 6 August 2014 14:44, Sean Dague  wrote:
> > On 08/06/2014 09:11 AM, Russell Bryant wrote:
> >> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
> >>> Hi everyone,
> >>>
> >>> At the TC meeting yesterday we discussed Rally program request and
> >>> incubation request. We quickly dismissed the incubation request, as
> >>> Rally appears to be able to live happily on top of OpenStack and would
> >>> benefit from having a release cycle decoupled from the OpenStack
> >>> "integrated release".
> >>>
> >>> That leaves the question of the program. OpenStack programs are created
> >>> by the Technical Committee, to bless existing efforts and teams that are
> >>> considered *essential* to the production of the "OpenStack" integrated
> >>> release and the completion of the OpenStack project mission. There are 3
> >>> ways to look at Rally and official programs at this point:
> >>>
> >>> 1. Rally as an essential QA tool
> >>> Performance testing (and especially performance regression testing) is
> >>> an essential QA function, and a feature that Rally provides. If the QA
> >>> team is happy to use Rally to fill that function, then Rally can
> >>> obviously be adopted by the (already-existing) QA program. That said,
> >>> that would put Rally under the authority of the QA PTL, and that raises
> >>> a few questions due to the current architecture of Rally, which is more
> >>> product-oriented. There needs to be further discussion between the QA
> >>> core team and the Rally team to see how that could work and if that
> >>> option would be acceptable for both sides.
> >>>
> >>> 2. Rally as an essential operator tool
> >>> Regular benchmarking of OpenStack deployments is a best practice for
> >>> cloud operators, and a feature that Rally provides. With a bit of a
> >>> stretch, we could consider that benchmarking is essential to the
> >>> completion of the OpenStack project mission. That program could one day
> >>> evolve to include more such "operations best practices" tools. In
> >>> addition to the slight stretch already mentioned, one concern here is
> >>> that we still want to have performance testing in QA (which is clearly
> >>> essential to the production of "OpenStack"). Letting Rally primarily be
> >>> an operational tool might make that outcome more difficult.
> >>>
> >>> 3. Let Rally be a product on top of OpenStack
> >>> The last option is to not have Rally in any program, and not consider it
> >>> *essential* to the production of the "OpenStack" integrated release or
> >>> the completion of the OpenStack project mission. Rally can happily exist
> >>> as an operator tool on top of OpenStack.

Re: [openstack-dev] Which program for Rally

2014-08-06 Thread John Griffith
I have to agree with Duncan here.  I also don't know if I fully understand
the limit in options.  Stress test seems like it could/should be different
(again overlap isn't a horrible thing) and I don't see it as siphoning off
resources so not sure of the issue.  We've become quite wrapped up in
projects, programs and the like lately and it seems to hinder forward
progress more than anything else.

I'm also not convinced that Tempest is where all things belong, in fact
I've been thinking more and more that a good bit of what Tempest does today
should fall more on the responsibility of the projects themselves.  For
example functional testing of features etc, ideally I'd love to have more
of that fall on the projects and their respective teams.  That might even
be something as simple to start as saying "if you contribute a new feature,
you have to also provide a link to a contribution to the Tempest test-suite
that checks it".  Sort of like we do for unit tests, cross-project tracking
is difficult of course, but it's a start.  The other idea is maybe
functional test harnesses live in their respective projects.

Honestly I think who better to write tests for a project than the folks
building and contributing to the project.  At some point IMO the QA team
isn't going to scale.  I wonder if maybe we should be thinking about
proposals for delineating responsibility and goals in terms of functional
testing?




On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas 
wrote:

> I'm not following here - you complain about rally being monolithic,
> then suggest that parts of it should be baked into tempest - a tool
> that is already huge and difficult to get into. I'd rather see tools
> that do one thing well and some overlap than one tool to rule them
> all.
>
> On 6 August 2014 14:44, Sean Dague  wrote:
> > On 08/06/2014 09:11 AM, Russell Bryant wrote:
> >> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
> >>> Hi everyone,
> >>>
> >>> At the TC meeting yesterday we discussed Rally program request and
> >>> incubation request. We quickly dismissed the incubation request, as
> >>> Rally appears to be able to live happily on top of OpenStack and would
> >>> benefit from having a release cycle decoupled from the OpenStack
> >>> "integrated release".
> >>>
> >>> That leaves the question of the program. OpenStack programs are created
> >>> by the Technical Committee, to bless existing efforts and teams that are
> >>> considered *essential* to the production of the "OpenStack" integrated
> >>> release and the completion of the OpenStack project mission. There are 3
> >>> ways to look at Rally and official programs at this point:
> >>>
> >>> 1. Rally as an essential QA tool
> >>> Performance testing (and especially performance regression testing) is
> >>> an essential QA function, and a feature that Rally provides. If the QA
> >>> team is happy to use Rally to fill that function, then Rally can
> >>> obviously be adopted by the (already-existing) QA program. That said,
> >>> that would put Rally under the authority of the QA PTL, and that raises
> >>> a few questions due to the current architecture of Rally, which is more
> >>> product-oriented. There needs to be further discussion between the QA
> >>> core team and the Rally team to see how that could work and if that
> >>> option would be acceptable for both sides.
> >>>
> >>> 2. Rally as an essential operator tool
> >>> Regular benchmarking of OpenStack deployments is a best practice for
> >>> cloud operators, and a feature that Rally provides. With a bit of a
> >>> stretch, we could consider that benchmarking is essential to the
> >>> completion of the OpenStack project mission. That program could one day
> >>> evolve to include more such "operations best practices" tools. In
> >>> addition to the slight stretch already mentioned, one concern here is
> >>> that we still want to have performance testing in QA (which is clearly
> >>> essential to the production of "OpenStack"). Letting Rally primarily be
> >>> an operational tool might make that outcome more difficult.
> >>>
> >>> 3. Let Rally be a product on top of OpenStack
> >>> The last option is to not have Rally in any program, and not consider it
> >>> *essential* to the production of the "OpenStack" integrated release or
> >>> the completion of the OpenStack project mission. Rally can happily exist
> >>> as an operator tool on top of OpenStack. It is built as a monolithic
> >>> product: that approach works very well for external complementary
> >>> solutions... Also, being more integrated in OpenStack or becoming part of
> >>> the OpenStack programs might come at a cost (slicing some functionality out
> >>> of rally to make it more a framework and less a product) that might not
> >>> be what its authors want.
> >>>
> >>> Let's explore each option to see which ones are viable, and the pros and
> >>> cons of each.
> >>
> >> My feeling right now is that Rally is trying to accomplish too much at
> >> the start (both #1 and #2).  I would rather see the project focus on
> >> doing one of them as best as it can before increasing scope.

Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Duncan Thomas
I'm not following here - you complain about rally being monolithic,
then suggest that parts of it should be baked into tempest - a tool
that is already huge and difficult to get into. I'd rather see tools
that do one thing well and some overlap than one tool to rule them
all.

On 6 August 2014 14:44, Sean Dague  wrote:
> On 08/06/2014 09:11 AM, Russell Bryant wrote:
>> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
>>> Hi everyone,
>>>
>>> At the TC meeting yesterday we discussed Rally program request and
>>> incubation request. We quickly dismissed the incubation request, as
>>> Rally appears to be able to live happily on top of OpenStack and would
>>> benefit from having a release cycle decoupled from the OpenStack
>>> "integrated release".
>>>
>>> That leaves the question of the program. OpenStack programs are created
>>> by the Technical Committee, to bless existing efforts and teams that are
>>> considered *essential* to the production of the "OpenStack" integrated
>>> release and the completion of the OpenStack project mission. There are 3
>>> ways to look at Rally and official programs at this point:
>>>
>>> 1. Rally as an essential QA tool
>>> Performance testing (and especially performance regression testing) is
>>> an essential QA function, and a feature that Rally provides. If the QA
>>> team is happy to use Rally to fill that function, then Rally can
>>> obviously be adopted by the (already-existing) QA program. That said,
>>> that would put Rally under the authority of the QA PTL, and that raises
>>> a few questions due to the current architecture of Rally, which is more
>>> product-oriented. There needs to be further discussion between the QA
>>> core team and the Rally team to see how that could work and if that
>>> option would be acceptable for both sides.
>>>
>>> 2. Rally as an essential operator tool
>>> Regular benchmarking of OpenStack deployments is a best practice for
>>> cloud operators, and a feature that Rally provides. With a bit of a
>>> stretch, we could consider that benchmarking is essential to the
>>> completion of the OpenStack project mission. That program could one day
>>> evolve to include more such "operations best practices" tools. In
>>> addition to the slight stretch already mentioned, one concern here is
>>> that we still want to have performance testing in QA (which is clearly
>>> essential to the production of "OpenStack"). Letting Rally primarily be
>>> an operational tool might make that outcome more difficult.
>>>
>>> 3. Let Rally be a product on top of OpenStack
>>> The last option is to not have Rally in any program, and not consider it
>>> *essential* to the production of the "OpenStack" integrated release or
>>> the completion of the OpenStack project mission. Rally can happily exist
>>> as an operator tool on top of OpenStack. It is built as a monolithic
>>> product: that approach works very well for external complementary
> >>> solutions... Also, being more integrated in OpenStack or becoming part of
> >>> the OpenStack programs might come at a cost (slicing some functionality out
>>> of rally to make it more a framework and less a product) that might not
>>> be what its authors want.
>>>
>>> Let's explore each option to see which ones are viable, and the pros and
>>> cons of each.
>>
>> My feeling right now is that Rally is trying to accomplish too much at
>> the start (both #1 and #2).  I would rather see the project focus on
>> doing one of them as best as it can before increasing scope.
>>
>> It's my opinion that #1 is the most important thing that Rally can be
>> doing to help ensure the success of OpenStack, so I'd like to explore
>> the "Rally as a QA tool" in more detail to start with.
>
> I want to clarify some things. I don't think that rally in its current
> form belongs in any OpenStack project. It's a giant monolithic tool,
> which is apparently a design point. That's the wrong design point for an
> OpenStack project.
>
> For instance:
>
> https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios 
> should
> all be tests in Tempest (and actually today mostly are via API tests).
> There is an existing stress framework in Tempest which does the
> repetitive looping that rally does on these already. This fact has been
> brought up before.
>
> https://github.com/stackforge/rally/tree/master/rally/verification/verifiers
> - should be baked back into Tempest (at least on the results side,
> though diving in there now it looks largely duplicative of the existing
> subunit-to-HTML code).
>
> https://github.com/stackforge/rally/blob/master/rally/db/api.py - is
> largely (not entirely) what we'd like from a long term trending piece
> that subunit2sql is working on. Again this was just all thrown into the
> Rally db instead of thinking about how to split it off. Also, notable
> here is there are some fundamental testr bugs (like worker
> misallocation) which mean the data is massively dirty today. It would be
> good for people to actually work on fixing those things.

Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Russell Bryant
On 08/06/2014 09:44 AM, Sean Dague wrote:
> Something that we need to figure out is, given where we are in the
> release cycle, do we want to ask the QA team to go off and do a Rally
> deep dive now to try to pull it apart into the parts that make sense for
> other programs to take in. There are always trade-offs.
> 
> Like the fact that right now the rally team is proposing gate jobs which
> have some overlap with the existing largeops jobs. Did they start a
> conversation about it? Nope. They just went off to do their thing
> instead. https://review.openstack.org/#/c/112251/
> 
> So now we're going to run 2 jobs that do very similar things, with
> different teams adjusting the test loads. Which I think is basically
> madness.

You make a great point about the time needed to do this.  I think the
feedback you've provided in this post is a great start.  Perhaps the
burden should be squarely on the Rally team.  Using the feedback you've
provided thus far, they could go off and work on splitting things up and
making a better integration plan, and we could revisit post-Juno.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Sean Dague
On 08/06/2014 09:11 AM, Russell Bryant wrote:
> On 08/06/2014 06:30 AM, Thierry Carrez wrote:
>> Hi everyone,
>>
>> At the TC meeting yesterday we discussed Rally program request and
>> incubation request. We quickly dismissed the incubation request, as
>> Rally appears to be able to live happily on top of OpenStack and would
>> benefit from having a release cycle decoupled from the OpenStack
>> "integrated release".
>>
>> That leaves the question of the program. OpenStack programs are created
>> by the Technical Committee, to bless existing efforts and teams that are
>> considered *essential* to the production of the "OpenStack" integrated
>> release and the completion of the OpenStack project mission. There are 3
>> ways to look at Rally and official programs at this point:
>>
>> 1. Rally as an essential QA tool
>> Performance testing (and especially performance regression testing) is
>> an essential QA function, and a feature that Rally provides. If the QA
>> team is happy to use Rally to fill that function, then Rally can
>> obviously be adopted by the (already-existing) QA program. That said,
>> that would put Rally under the authority of the QA PTL, and that raises
>> a few questions due to the current architecture of Rally, which is more
>> product-oriented. There needs to be further discussion between the QA
>> core team and the Rally team to see how that could work and if that
>> option would be acceptable for both sides.
>>
>> 2. Rally as an essential operator tool
>> Regular benchmarking of OpenStack deployments is a best practice for
>> cloud operators, and a feature that Rally provides. With a bit of a
>> stretch, we could consider that benchmarking is essential to the
>> completion of the OpenStack project mission. That program could one day
>> evolve to include more such "operations best practices" tools. In
>> addition to the slight stretch already mentioned, one concern here is
>> that we still want to have performance testing in QA (which is clearly
>> essential to the production of "OpenStack"). Letting Rally primarily be
>> an operational tool might make that outcome more difficult.
>>
>> 3. Let Rally be a product on top of OpenStack
>> The last option is to not have Rally in any program, and not consider it
>> *essential* to the production of the "OpenStack" integrated release or
>> the completion of the OpenStack project mission. Rally can happily exist
>> as an operator tool on top of OpenStack. It is built as a monolithic
>> product: that approach works very well for external complementary
>> solutions... Also, being more integrated in OpenStack or becoming part of
>> the OpenStack programs might come at a cost (slicing some functionality out
>> of rally to make it more a framework and less a product) that might not
>> be what its authors want.
>>
>> Let's explore each option to see which ones are viable, and the pros and
>> cons of each.
> 
> My feeling right now is that Rally is trying to accomplish too much at
> the start (both #1 and #2).  I would rather see the project focus on
> doing one of them as best as it can before increasing scope.
> 
> It's my opinion that #1 is the most important thing that Rally can be
> doing to help ensure the success of OpenStack, so I'd like to explore
> the "Rally as a QA tool" in more detail to start with.

I want to clarify some things. I don't think that rally in its current
form belongs in any OpenStack project. It's a giant monolithic tool,
which is apparently a design point. That's the wrong design point for an
OpenStack project.

For instance:

https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios should
all be tests in Tempest (and actually today mostly are via API tests).
There is an existing stress framework in Tempest which does the
repetitive looping that rally does on these already. This fact has been
brought up before.
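
To make the overlap concrete, the pattern both tools implement boils down to
"run a scenario callable N times across a pool of workers and record
per-iteration timings." The sketch below is illustrative only: it uses no
Rally or Tempest APIs, and boot_and_delete_server() is a hypothetical
placeholder standing in for a real OpenStack operation.

# Minimal sketch of the "repeat a scenario under concurrency and record
# timings" pattern; not Rally's or Tempest's actual code or API.
import time
from concurrent.futures import ThreadPoolExecutor


def boot_and_delete_server():
    """Placeholder for a real scenario (e.g. boot a server, then delete it)."""
    time.sleep(0.01)  # simulate the API calls the scenario would make


def run_scenario(scenario, iterations=100, concurrency=10):
    """Run `scenario` `iterations` times across `concurrency` workers,
    returning the wall-clock duration of each iteration."""
    def timed_run(_):
        start = time.time()
        scenario()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_run, range(iterations)))


if __name__ == "__main__":
    durations = run_scenario(boot_and_delete_server)
    print("min/avg/max: %.3fs / %.3fs / %.3fs"
          % (min(durations), sum(durations) / len(durations), max(durations)))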

https://github.com/stackforge/rally/tree/master/rally/verification/verifiers
- should be baked back into Tempest (at least on the results side,
though diving in there now it looks largely duplicative of the existing
subunit-to-HTML code).

https://github.com/stackforge/rally/blob/master/rally/db/api.py - is
largely (not entirely) what we'd like from a long term trending piece
that subunit2sql is working on. Again this was just all thrown into the
Rally db instead of thinking about how to split it off. Also, notable
here is there are some fundamental testr bugs (like worker
misallocation) which mean the data is massively dirty today. It would be
good for people to actually work on fixing those things.
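
The trending idea here is simply to persist per-test results from every run
in SQL so durations and failure rates can be queried over time. The sketch
below uses sqlite3 and an invented schema purely for illustration; it does
not reflect subunit2sql's actual tables.

# Illustrative sketch of long-term result trending: store per-test results
# per run, then query aggregates across runs. Schema is invented.
import sqlite3

conn = sqlite3.connect("results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_runs (
        run_id    TEXT,
        test_id   TEXT,
        status    TEXT,      -- 'success' or 'fail'
        duration  REAL       -- seconds
    )
""")

# Record a couple of sample results (normally parsed from a subunit stream).
conn.executemany(
    "INSERT INTO test_runs VALUES (?, ?, ?, ?)",
    [("run-1", "tempest.api.compute.test_servers", "success", 12.4),
     ("run-2", "tempest.api.compute.test_servers", "success", 14.1)],
)
conn.commit()

# Trend query: average duration per test across all recorded runs.
for test_id, avg in conn.execute(
        "SELECT test_id, AVG(duration) FROM test_runs GROUP BY test_id"):
    print(test_id, round(avg, 2))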

The parts that should stay outside of Tempest are the setup tool
(separation of concerns is that Tempest is the load runner, not the
setup environment) and any of the SLA portions.
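
The SLA portion is essentially post-processing: apply pass/fail criteria to
the results the load runner collected. A minimal sketch, with invented
criteria names and thresholds:

# Simple post-run SLA evaluation over collected results; the criteria and
# thresholds are made up for illustration.
def check_sla(durations, failures, max_seconds=30.0, max_failure_rate=0.0):
    """Return (passed, reasons) for a simple post-run SLA evaluation."""
    reasons = []
    if durations and max(durations) > max_seconds:
        reasons.append("slowest iteration %.1fs exceeds %.1fs"
                       % (max(durations), max_seconds))
    total = len(durations) + failures
    rate = failures / total if total else 0.0
    if rate > max_failure_rate:
        reasons.append("failure rate %.1f%% exceeds %.1f%%"
                       % (rate * 100, max_failure_rate * 100))
    return (not reasons, reasons)


passed, reasons = check_sla([12.4, 14.1, 31.0], failures=0)
print("SLA passed" if passed else "SLA failed: " + "; ".join(reasons))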

I think rally brings forward a good point about making Tempest easier to
run. But I think that shouldn't be done outside Tempest. Making the test
tool easier to use should be done in the tool itself. If that means
adding a tempest cmd

Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Russell Bryant
On 08/06/2014 06:30 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> At the TC meeting yesterday we discussed Rally program request and
> incubation request. We quickly dismissed the incubation request, as
> Rally appears to be able to live happily on top of OpenStack and would
> benefit from having a release cycle decoupled from the OpenStack
> "integrated release".
> 
> That leaves the question of the program. OpenStack programs are created
> by the Technical Committee, to bless existing efforts and teams that are
> considered *essential* to the production of the "OpenStack" integrated
> release and the completion of the OpenStack project mission. There are 3
> ways to look at Rally and official programs at this point:
> 
> 1. Rally as an essential QA tool
> Performance testing (and especially performance regression testing) is
> an essential QA function, and a feature that Rally provides. If the QA
> team is happy to use Rally to fill that function, then Rally can
> obviously be adopted by the (already-existing) QA program. That said,
> that would put Rally under the authority of the QA PTL, and that raises
> a few questions due to the current architecture of Rally, which is more
> product-oriented. There needs to be further discussion between the QA
> core team and the Rally team to see how that could work and if that
> option would be acceptable for both sides.
> 
> 2. Rally as an essential operator tool
> Regular benchmarking of OpenStack deployments is a best practice for
> cloud operators, and a feature that Rally provides. With a bit of a
> stretch, we could consider that benchmarking is essential to the
> completion of the OpenStack project mission. That program could one day
> evolve to include more such "operations best practices" tools. In
> addition to the slight stretch already mentioned, one concern here is
> that we still want to have performance testing in QA (which is clearly
> essential to the production of "OpenStack"). Letting Rally primarily be
> an operational tool might make that outcome more difficult.
> 
> 3. Let Rally be a product on top of OpenStack
> The last option is to not have Rally in any program, and not consider it
> *essential* to the production of the "OpenStack" integrated release or
> the completion of the OpenStack project mission. Rally can happily exist
> as an operator tool on top of OpenStack. It is built as a monolithic
> product: that approach works very well for external complementary
> solutions... Also, being more integrated in OpenStack or becoming part of
> the OpenStack programs might come at a cost (slicing some functionality out
> of rally to make it more a framework and less a product) that might not
> be what its authors want.
> 
> Let's explore each option to see which ones are viable, and the pros and
> cons of each.

My feeling right now is that Rally is trying to accomplish too much at
the start (both #1 and #2).  I would rather see the project focus on
doing one of them as best as it can before increasing scope.

It's my opinion that #1 is the most important thing that Rally can be
doing to help ensure the success of OpenStack, so I'd like to explore
the "Rally as a QA tool" in more detail to start with.

From the TC meeting, it seems that the QA group (via sdague, at least)
has provided some feedback to Rally over the last several months.  I
would really like to see an analysis and write-up from the QA group on
the current state of Rally and how it may (or may not) be able to serve
the performance QA needs.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Which program for Rally

2014-08-06 Thread Thierry Carrez
Hi everyone,

At the TC meeting yesterday we discussed Rally program request and
incubation request. We quickly dismissed the incubation request, as
Rally appears to be able to live happily on top of OpenStack and would
benefit from having a release cycle decoupled from the OpenStack
"integrated release".

That leaves the question of the program. OpenStack programs are created
by the Technical Committee, to bless existing efforts and teams that are
considered *essential* to the production of the "OpenStack" integrated
release and the completion of the OpenStack project mission. There are 3
ways to look at Rally and official programs at this point:

1. Rally as an essential QA tool
Performance testing (and especially performance regression testing) is
an essential QA function, and a feature that Rally provides. If the QA
team is happy to use Rally to fill that function, then Rally can
obviously be adopted by the (already-existing) QA program. That said,
that would put Rally under the authority of the QA PTL, and that raises
a few questions due to the current architecture of Rally, which is more
product-oriented. There needs to be further discussion between the QA
core team and the Rally team to see how that could work and if that
option would be acceptable for both sides.

2. Rally as an essential operator tool
Regular benchmarking of OpenStack deployments is a best practice for
cloud operators, and a feature that Rally provides. With a bit of a
stretch, we could consider that benchmarking is essential to the
completion of the OpenStack project mission. That program could one day
evolve to include more such "operations best practices" tools. In
addition to the slight stretch already mentioned, one concern here is
that we still want to have performance testing in QA (which is clearly
essential to the production of "OpenStack"). Letting Rally primarily be
an operational tool might make that outcome more difficult.

3. Let Rally be a product on top of OpenStack
The last option is to not have Rally in any program, and not consider it
*essential* to the production of the "OpenStack" integrated release or
the completion of the OpenStack project mission. Rally can happily exist
as an operator tool on top of OpenStack. It is built as a monolithic
product: that approach works very well for external complementary
solutions... Also, being more integrated in OpenStack or becoming part of the
OpenStack programs might come at a cost (slicing some functionality out
of rally to make it more a framework and less a product) that might not
be what its authors want.

Let's explore each option to see which ones are viable, and the pros and
cons of each.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev