Re: [openstack-dev] [keystone] [infra] Post PTG performance testing needs

2018-03-06 Thread Matthew Treinish
On Tue, Mar 06, 2018 at 03:28:57PM -0600, Lance Bragstad wrote:
> Hey all,
> 
> Last week during the PTG the keystone team sat down with a few of the
> infra folks to discuss performance testing. The major hurdle here has
> always been having dedicated hosts to use for performance testing,
> regardless of that being rally, tempest, or a home-grown script.
> Otherwise results vary wildly from run to run in the gate due to
> differences from providers or noisy neighbor problems.
> 
> Opening up the discussion here because it sounded like some providers
> (mnaser, mtreinish) had some thoughts on how we can reserve specific
> hardware for these cases.

While I like being called a provider, I'm not really one. I was more trying to
find a use case for my closet cloud [1], and was volunteering to open that up
to external/infra use to provide dedicated hardware for consistent performance
testing. That's still an option (I mean, the boxes are just sitting there not
doing anything) and I'd gladly work with infra and keystone to get that
working. But if mnaser and vexxhost have an alternative with their real
capacity and modern hardware, that's probably the better route to go.

-Matt Treinish

[1] https://blog.kortar.org/?p=380


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [infra] Post PTG performance testing needs

2018-03-06 Thread Clark Boylan
On Tue, Mar 6, 2018, at 1:28 PM, Lance Bragstad wrote:
> Hey all,
> 
> Last week during the PTG the keystone team sat down with a few of the
> infra folks to discuss performance testing. The major hurdle here has
> always been having dedicated hosts to use for performance testing,
> regardless of that being rally, tempest, or a home-grown script.
> Otherwise results vary wildly from run to run in the gate due to
> differences from providers or noisy neighbor problems.
> 
> Opening up the discussion here because it sounded like some providers
> (mnaser, mtreinish) had some thoughts on how we can reserve specific
> hardware for these cases.
> 
> Thoughts?

Currently the Infra team has access to a variety of clouds, but due to how 
scheduling works we can't rule out noisy neighbors (or even being our own noisy 
neighbor). mtreinish also has data showing that runtimes are too noisy to do 
statistical analysis on, even within a single cloud region. So this is indeed 
an issue in the current setup.
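As a rough illustration of the noise problem (the runtimes below are made-up
numbers for the sketch, not real gate data), even simple summary statistics
show why run-to-run variance swamps the small regressions performance testing
is meant to catch:

```python
import statistics

# Hypothetical gate job runtimes in seconds (illustrative values only).
runtimes = [612.4, 580.1, 701.9, 645.3, 598.7, 688.2]

mean = statistics.mean(runtimes)
stdev = statistics.stdev(runtimes)
cv = stdev / mean  # coefficient of variation

# A CV of several percent hides the 1-2% regressions we actually
# want to detect, so comparisons between runs are meaningless.
print(f"mean={mean:.1f}s stdev={stdev:.1f}s cv={cv:.1%}")
```

With a spread like that, a change would have to slow things down by well over
the coefficient of variation before it stood out from provider noise.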

One approach that has been talked about in the past is to measure performance 
impacting operations using metrics other than execution time. For example 
number of sql queries or rabbit requests. I think this would also be valuable 
but won't give you proper performance measurements.
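To sketch what counting operations could look like, here is a hypothetical
DB-API cursor wrapper over sqlite3 that tallies execute() calls (keystone
itself goes through SQLAlchemy, so a real implementation would more likely
hook engine events there; this is just the shape of the idea):

```python
import sqlite3


class CountingCursor:
    """Thin DB-API cursor wrapper that counts execute() calls."""

    def __init__(self, cursor):
        self._cursor = cursor
        self.query_count = 0

    def execute(self, sql, params=()):
        # Count every statement sent to the database, regardless of
        # how long it takes to run.
        self.query_count += 1
        return self._cursor.execute(sql, params)

    def fetchall(self):
        return self._cursor.fetchall()


conn = sqlite3.connect(":memory:")
cur = CountingCursor(conn.cursor())
cur.execute("CREATE TABLE t (id INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SELECT id FROM t")
print(cur.query_count)  # 3
```

The appeal is that the count is deterministic: it stays the same on a noisy
host, so a jump in queries per request is a reliable regression signal even
though it isn't a timing measurement.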

That brought us back to the idea of possibly working with some cloud providers 
like mnaser and/or mtreinish to have a small number of dedicated instances to 
run performance tests on. We could then avoid the noisy neighbor problem as 
well.

For the infra team we would likely need to have at least two providers
providing these resources so that we could handle the loss of one without
backing up job queues. I don't think the hardware needs to have any other
special properties, as we don't care about performance on specific hardware so
much as comparing performance of the project over time on known hardware.

Curious to hear what others may have to say.

Thanks,
Clark



[openstack-dev] [keystone] [infra] Post PTG performance testing needs

2018-03-06 Thread Lance Bragstad
Hey all,

Last week during the PTG the keystone team sat down with a few of the
infra folks to discuss performance testing. The major hurdle here has
always been having dedicated hosts to use for performance testing,
regardless of that being rally, tempest, or a home-grown script.
Otherwise results vary wildly from run to run in the gate due to
differences from providers or noisy neighbor problems.

Opening up the discussion here because it sounded like some providers
(mnaser, mtreinish) had some thoughts on how we can reserve specific
hardware for these cases.

Thoughts?
