Thanks for the reminder! I’ll make a note of that. In these tests the
clients are hitting Nginx (which is acting as a load balancer), so I could
try disabling keep-alive there and see what happens. So far I have just used
the default that was written into the conf when the package was installed
("keepalive_timeout 65;").

FWIW, I started an etherpad to track a shortlist of ideas:

On 9/16/14, 11:58 AM, "Jay Pipes" <> wrote:

>On 09/16/2014 11:23 AM, Kurt Griffiths wrote:
>> Hi crew, as promised I’ve continued to work through the performance test
>> plan. I’ve started a wiki page for the next batch of tests and results:
>> I am currently running the same tests again with 2x web heads, and will
>> update the wiki page with them when they finish (it takes a couple hours
>> to run each batch of tests). Then, I plan to add an additional Redis
>> and run everything again. After that, there are a few other things that
>> I could do, depending on what everyone wants to see next.
>> * Run all these tests for producer-consumer (task distribution)
>> * Tune Redis and uWSGI and see if we can't improve latency, stdev, etc.
>> * Do a few runs with varying message sizes
>> * Continue increasing load and adding additional web heads
>> * Continue increasing load and adding additional redis procs
>> * Vary number of queues
>> * Vary number of project-ids
>> * Vary message batch sizes on post/get/claim
>Don't forget my request to identify the effect of keepalive settings in
>uWSGI :)
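On the uWSGI side, keep-alive is only in play when clients speak HTTP directly to uWSGI's built-in HTTP router (it doesn't apply when Nginx proxies over the uwsgi protocol). A minimal sketch of toggling it, assuming the HTTP router is in use and a hypothetical port:

```ini
; uwsgi.ini -- sketch only; port is a placeholder
[uwsgi]
http = :8080
; enable HTTP/1.1 keep-alive on the built-in HTTP router
; (remove or set to 0 to measure the no-keep-alive case)
http-keepalive = 1
```

Comparing runs with and without this option should isolate the effect Jay is asking about.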
>OpenStack-dev mailing list
