There are several issues involved in doing automated regression checking for
benchmarks:

- You need a stable platform. Right now all our CI runs on virtualized
instances, and I don't think there's any guarantee it'll be the same
underlying hardware; furthermore, virtualized systems tend to be very noisy
and won't give you the stability you need.
- You need your benchmarks to be very high precision if you really want to
rule out regressions of more than N% without a lot of false positives.
- You need more than just checks on individual builds; you need long-term
trend checking: a hundred 1% regressions are worse than a single 50%
regression.
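To make the above concrete, here is a rough sketch of what the two kinds of
checks might look like; everything here (function names, thresholds, the
noise heuristic) is illustrative, not taken from any existing CI tool:

```python
import statistics

def is_regression(baseline, current, threshold_pct=5.0, noise_factor=3.0):
    """Per-build check: flag a regression only if the slowdown exceeds
    both a fixed percentage threshold and the measured run-to-run noise
    of the baseline samples."""
    base_mean = statistics.mean(baseline)
    cur_mean = statistics.mean(current)
    slowdown_pct = (cur_mean - base_mean) / base_mean * 100
    # Require the delta to stand out from the baseline's own noise, too;
    # on a noisy virtualized platform this threshold grows accordingly.
    noise_pct = statistics.stdev(baseline) / base_mean * 100
    return slowdown_pct > max(threshold_pct, noise_factor * noise_pct)

def trend_regression(history, window=25, threshold_pct=5.0):
    """Long-term check: compare a recent window of results against older
    history, to catch slow drift made of many tiny regressions that no
    per-build check would flag individually."""
    old = history[:len(history) - window] or history[:1]
    recent = history[-window:]
    drift_pct = (statistics.mean(recent) - statistics.mean(old)) \
        / statistics.mean(old) * 100
    return drift_pct > threshold_pct
```

The per-build check only fires when a slowdown exceeds both a fixed threshold
and the baseline's own noise, which is what the precision point above is
about; the trend check is what catches the "100 1% regressions" case.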

Alex


On Sun, Oct 20, 2013 at 11:24 AM, Tim Bell <tim.b...@cern.ch> wrote:

>
> From a user perspective, I want a gate so that changes which significantly
> degrade performance are rejected.
>
> Tempest (and its associated CI) provides a current check on functionality.
> It is inline and understood.
>
> Let’s just add a set of benchmarks to Tempest which can validate that
> improved functionality does not significantly degrade performance. If we do
> not have this check, the users who deploy early will be the ones whose
> services are affected, which is not the long-term direction.
>
> Thus, I ask: are there blocking items that would not permit Tempest to be
> extended to cover benchmarking and ensure changes which impact performance
> are picked up early?
>
> Tim
>
> *From:* Boris Pavlovic [mailto:bpavlo...@mirantis.com]
> *Sent:* 20 October 2013 19:34
>
> *To:* OpenStack Development Mailing List
> *Subject:* Re: [openstack-dev] Announce of Rally - benchmarking system
> for OpenStack
>
> Hi Sean,
>
> >> Honestly, there has been so much work done in Tempest with the stress
> tests and the scenario tests in this release, and a large growing community
> around those, that it doesn't make any sense to me that you put item #3 as
> a non Tempest thing.
>
> I really don't like to copy-paste functionality that is already
> implemented, but I have some other ideas about the benchmark engine. I
> would like to implement and try them as soon as possible, to determine
> "what we actually need". It is much simpler to make such experiments in
> Rally, because there is a small community around it, and I don't care at
> this moment about backward compatibility.
>
> When I determine all the benchmark engine parameters (what I need), I
> will make one more deep investigation to see how complex it would be to
> implement the same in Tempest. If it is possible and not overly complex, we
> will start implementing the required functionality in Tempest, and when
> Tempest is ready we will switch to it.
>
> >> Are you guys doing a summit session somewhere on this?
>
> There is a slot, but it is not approved yet:
> http://summit.openstack.org/cfp/details/158
>
> Also there is a talk:
> http://openstacksummitnovember2013.sched.org/event/661ddc95f6b06ed3a634f12de09afa1d#.UmQITpTk9Z9
>
> >> It also feels like the efforts around #4 would be much better served in
> the OpenStack community if they were integrated around testr and subunit so
> they could be reused in many contexts.
>
> I already tried to use pytest as a base, and that was the biggest mistake.
>
> >> I also think 1.b.3 is probably better done in the way the coverage
> extension was done for nova, something which is baked in and can be
> administratively turned on, not something which requires a hot patch to the
> system.
>
> I hope these patches will be merged. But you see, there is a "community
> deadlock" here. You are not able to merge such patches upstream until you
> do two things:
>
> 1) Get real examples of usage (e.g. with hot patching in Rally)
>
> 2) Prove that such changes don't impact performance (with Rally)
>
> So we will prepare all the patches, show a live demonstration, and prove
> that they don't impact performance (especially when they are turned off) ;)
>
> >> It's cool to have performance analysis tooling, but if it arrives in a
> way that doesn't integrate well with the rest of OpenStack, the impact is
> going to be far less than it could be. I'd like us to get the full bang for
> our buck out of efforts like this, especially if there is hope for it to
> graduate from stackforge into one of our standard toolkits.
>
> I don't understand the phrase "doesn't integrate well with the rest of
> OpenStack":
>
> 1. The project has the classical OpenStack structure.
>
> 2. We use common code from oslo (db, localization, oslo.config; in the
> future we are planning to use oslo.messaging).
>
> 3. As the base and first deploy engine we decided to use DevStack.
>
> 4. To verify deployments we will use Tempest as soon as possible.
>
> 5. In the benchmark engine we use only the native OpenStack Python clients
> to make requests to the OpenStack API.
>
> 6. We follow the OpenStack development workflow:
>
> 6.a) the project is on stackforge and we use the stock Jenkins
>
> 6.b) Launchpad with blueprints, bugs, and questions
>
> 6.c) a wiki at http://wiki.openstack.org/wiki/Rally
>
> What "the rest of OpenStack" are you speaking?****
>
> Best regards,
>
> Boris Pavlovic
>
> ---
>
> Mirantis Inc.
>
> On Sun, Oct 20, 2013 at 2:38 AM, Sean Dague <s...@dague.net> wrote:
>
>  On 10/18/2013 12:17 PM, Boris Pavlovic wrote:
>
> John,
>
> Actually, this seems like a pretty good suggestion IMO, at least something
> worth some investigation and consideration before quickly discounting it.
> Rather than "that's not what Tempest is", maybe it's something Tempest
> "could do". Don't know, not saying one way or the other, just wondering if
> it's worth some investigation or thought.
>
>
> I made these investigations before starting work on Rally, about 3 months
> ago. It is not "quickly discounting"; I didn't have time yesterday to write
> a long response, so I will write it today:
>
> I really don't like to make copies of other projects, so I tried to reuse
> all the projects & libs that we already have.
>
> To explain why we shouldn't merge Rally & Tempest into one project (and
> should keep both), we should analyze their use cases.
>
>
> 1. DevStack - one "click" and get your OpenStack cloud from sources
>
> 2. Tempest - one "click" and get your OpenStack Cloud verified
>
> Both of these projects are great, because they are very useful and solve
> complicated tasks without "pain" for the end user (and I like them).
>
> 3. Rally is also a one "click" system that solves OpenStack benchmarking.
>
> To clarify the situation, we should analyze what I mean by one "click"
> benchmarking and what the use cases are.
>
> Use Cases:
> 1. Investigate how deployments influence OpenStack performance (find the
> set of good OpenStack deployment architectures)
> 2. Automatically get numbers & profiling info about how your changes
> influence OpenStack performance
> 3. Using the Rally profiler, detect scale & performance issues. For
> example, here, when we try to delete 3 VMs in one request, they are deleted
> one by one because of a DB lock on the quotas table:
> http://37.58.79.43:8080/traces/0011f252c9d98e31
> 4. Determine the maximal load that a production cloud can handle
>
> To cover these cases we should actually test OpenStack deployments by
> making simultaneous OpenStack API calls.
>
> So to get results we have to:
> 1. Deploy an OpenStack cloud somewhere (or use an existing cloud)
> 2. Verify it
> 3. Run benchmarks
> 4. Collect all results & present them in human-readable form
>
>
> Rally was designed to automate these steps:
> 1.a Use an existing cloud.
> 1.b.1 Automatically get (virtual) servers from SoftLayer, Amazon,
> Rackspace, your private/public cloud, or an OpenStack cloud
> 1.b.2 Deploy OpenStack on these servers from source (using DevStack,
> Anvil, Fuel or TripleO...).
> 1.b.3 Patch this OpenStack with Tomograph to get profiling information
> (I hope we will merge these patches into upstream).
> 2. Using Tempest, verify this cloud (we are going to switch from
> fuel-ostf-tests)
> 3. Run the specified parameterized benchmark scenarios (to be able to
> generate different loads)
> 4. Collect all information about the execution & present it in
> human-readable form (Tomograph, Zipkin, matplotlib...)
>
> Honestly, there has been so much work done in Tempest with the stress
> tests and the scenario tests in this release, and a large growing community
> around those, that it doesn't make any sense to me that you put item #3 as
> a non Tempest thing.
>
> It feels very "not invented here".
>
> Are you guys doing a summit session somewhere on this?
>
> It also feels like the efforts around #4 would be much better served in
> the OpenStack community if they were integrated around testr and subunit so
> they could be reused in many contexts.
>
> I also think 1.b.3 is probably better done in the way the coverage
> extension was done for nova, something which is baked in and can be
> administratively turned on, not something which requires a hot patch to the
> system.
>
> It's cool to have performance analysis tooling, but if it arrives in a way
> that doesn't integrate well with the rest of OpenStack, the impact is going
> to be far less than it could be. I'd like us to get the full bang for our
> buck out of efforts like this, especially if there is hope for it to
> graduate from stackforge into one of our standard toolkits.
>
>         -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
