Hi,
Finally, the issue is solved. It was caused by the low connection limit
set on the memcached servers: 1024. I noticed "too many open files" and
"too many sockets" errors in my logs. Raising the memcached max allowed
connections to 2048 or higher solved my problem.
You can check your memcached server's current connection count with the
stats command.
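To make the check above concrete, here is a minimal sketch that parses a captured "stats" response (e.g. from `echo stats | nc 127.0.0.1 11211`) and reports how much connection headroom is left. The helper name and the sample numbers are illustrative, not from the original thread:

```python
# Hypothetical helper: parse memcached "stats" output and report how
# many connection slots remain before the configured maximum is hit.
def connection_headroom(stats_text, max_connections=1024):
    stats = {}
    for line in stats_text.splitlines():
        parts = line.split()
        # stats lines look like: "STAT curr_connections 1019"
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    curr = int(stats.get("curr_connections", 0))
    return max_connections - curr

# Illustrative sample of a server close to the default 1024 limit.
sample = """STAT curr_connections 1019
STAT total_connections 482133
END"""
print(connection_headroom(sample))  # prints 5: only 5 slots left
```

If the headroom is near zero, raising the limit (memcached's -c option) is the fix described above.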
Hi,
I found that my controller nodes were a bit overloaded with 16 uwsgi
nova-api-os-compute processes. I reduced the nova-api-os-compute uwsgi
processes to 10, and the timeouts and slowdowns were eliminated. My cloud
became stable and response times dropped. I have 20 vCPUs on a Xeon(R)
CPU E5-2630 v4 @ 2.20GHz.
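For reference, the worker count is set in the service's uwsgi ini file; a minimal sketch of the relevant setting (the exact file path and other options vary per deployment, so treat this as an assumption to adapt):

```ini
[uwsgi]
; reduced from 16 to 10 workers on a 20-vCPU controller node
processes = 10
threads = 1
```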
Hi,
After a few tempest runs I noticed slowdowns in the nova-api-os-compute
uwsgi processes. I checked the processes with py-spy and found that a lot
of them were blocked on read(). Here is the py-spy output from one of my
nova-api-os-compute uwsgi processes: http://paste.openstack.org/show/731677/
And
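py-spy samples a live process from the outside, which is what the paste above shows. As a rough self-contained illustration of the pattern it revealed (a worker stuck in a blocking read), one can reproduce the same stack shape in-process; this is an analogy using the standard library, not py-spy itself:

```python
import socket
import sys
import threading
import time
import traceback

def blocked_worker(sock):
    # This recv() blocks until the peer sends data, mirroring the
    # read() calls the uwsgi workers were stuck in.
    sock.recv(4096)

# A connected socket pair where one end never sends anything.
a, b = socket.socketpair()
t = threading.Thread(target=blocked_worker, args=(a,), daemon=True)
t.start()
time.sleep(0.2)  # give the thread time to enter recv()

# Sample the blocked thread's current stack; py-spy applies the same
# idea to every thread of a whole process from the outside.
frames = sys._current_frames()
stack = "".join(traceback.format_stack(frames[t.ident]))
print("blocked in recv" if "recv" in stack else "not blocked")
b.close()  # unblocks the worker: recv() returns b""
```

Seeing many workers in such read()/recv() frames at sample time is what pointed to a stalled backend (here, the overloaded memcached) rather than slow Python code.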