On 9/11/14, 2:11 PM, "Devananda van der Veen" <devananda....@gmail.com> wrote:

>OK - those resource usages sound better. At least you generated enough
>load to saturate the uWSGI process CPU, which is a good point to look
>at performance of the system.
>At that peak, what was the:
>- average msgs/sec
>- min/max/avg/stdev time to [post|get|delete] a message

To be honest, it was a quick test and I didn’t record the exact metrics;
I only eyeballed them to confirm they were similar to the results I
published for the scenarios that used the same load options (i.e., I
simply re-ran some of the same test scenarios).

Some of the metrics you mention aren’t currently reported by zaqar-bench,
but they could be added easily enough. In any case, I think zaqar-bench is
going to end up being most useful for tracking relative performance gains
or losses on a patch-by-patch basis, and as an easy way to smoke-test
both python-marconiclient and the service. For large-scale testing and
detailed metrics, other tools (e.g., Tsung, JMeter) are better suited to
the job, so I’ve been considering using them in future rounds.
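For what it’s worth, the per-request stats in question are cheap to compute
client-side. A minimal sketch of how a bench harness could collect them (this
is not zaqar-bench’s actual code; the function names here are hypothetical):

```python
import statistics
import time


def timed_request(fn, *args, **kwargs):
    """Run one request and return its latency in seconds.

    fn stands in for any client call (post/get/delete a message).
    """
    start = time.perf_counter()
    fn(*args, **kwargs)
    return time.perf_counter() - start


def summarize(latencies, duration_sec):
    """Aggregate the stats asked about above: rate plus min/max/avg/stdev."""
    return {
        "msgs_per_sec": len(latencies) / duration_sec,
        "min": min(latencies),
        "max": max(latencies),
        "avg": statistics.mean(latencies),
        "stdev": statistics.stdev(latencies) if len(latencies) > 1 else 0.0,
    }
```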

>Is that 2,181 msg/sec total, or per-producer?

That metric was a combined average rate for all producers.
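In other words, the per-producer share is just the combined figure divided by
the producer count. A trivial illustration (the producer count below is
hypothetical; only the 2,181 total comes from the earlier results):

```python
def per_producer_rate(total_msgs_per_sec, num_producers):
    """Split a combined throughput figure evenly across producers."""
    return total_msgs_per_sec / num_producers


# e.g., with 10 producers, 2,181 msg/sec combined works out to
# roughly 218 msg/sec per producer.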

>I'd really like to see the total throughput and latency graphed as #
>of clients increases. Or if graphing isn't your thing, even just post
>a .csv of the raw numbers and I will be happy to graph it.
>It would also be great to see how that scales as you add more Redis
>instances until all the available CPU cores on your Redis host are in
>use.

Yep, I’ve got a long list of things like this that I’d like to see in
future rounds of performance testing (and I welcome anyone in the
community who’s interested to join in), but I have to balance that effort
with a lot of other things on my plate right now.
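As a starting point for that kind of graphing, a harness could simply dump
one CSV row per client count. A rough sketch (the numbers below are
placeholders to show the shape of the file, not real measurements):

```python
import csv

# Hypothetical scaling results: (concurrent clients, total msg/sec,
# avg latency in ms). Placeholder values only.
RESULTS = [
    (10, 900, 11.0),
    (25, 1600, 15.5),
    (50, 2181, 22.9),
]


def write_results_csv(path, rows):
    """Dump scaling results so anyone can graph throughput vs. client count."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["clients", "msgs_per_sec", "avg_latency_ms"])
        writer.writerows(rows)
```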

Speaking generally, I’d like to see the project bake this in over time as
part of the CI process. It’s definitely useful information not just for
the developers but also for operators in terms of capacity planning. We’ve
talked as a team about doing this with Rally (and in fact, some work has
been started there), but it may also be useful to run a large-scale test
on a regular basis (at least once per milestone). Regardless, I think it
would be great for the Zaqar team to connect with other projects (at the
summit?) that are working on perf testing to swap ideas, collaborate on
code/tools, etc.


OpenStack-dev mailing list
