Great, Juraj. We merged a fix:
Will make a note to look into it upstream.
Red Hat SDN Team
----- Original Message -----
From: "Juraj Linkes -X (jlinkes - PANTHEON TECHNOLOGIES at Cisco)"
To: "Tim Rozet"
There were indeed many more than 12 processes per OpenStack service. I'm
running functest with the configuration and it seems to run smoothly.
Could you find out why it hasn't been fixed upstream?
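For reference, one quick way to see the per-service worker counts is to count matching processes; a minimal sketch (the service names below are illustrative, adjust to your deployment):

```shell
#!/bin/sh
# Count worker processes per OpenStack service (names are examples only).
for svc in nova-api neutron-server glance-api; do
  # pgrep -f matches against the full command line; -c prints the count.
  n=$(pgrep -fc "$svc")
  echo "$svc: ${n:-0} processes"
done
```

On an overloaded deployment you would expect some of these counts to be well above the configured number of workers.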
I also looked at the CI results. After Danube we made a number of refactoring
changes. Now I'm thinking this is the same problem as before. I see we had this in
which we are not using in master, because this problem was supposed to be fixed
upstream. Can you try including this file
The box is not OOM – it has more than 200GB spare. One thing I forgot to
mention is that I left the setup as is for a day or two (without doing anything
with it) and then the issue appeared.
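To rule out memory pressure on a box like this, a quick sketch of the usual checks (log paths vary by distro, so both common locations are tried):

```shell
#!/bin/sh
# Show current memory headroom in human-readable units.
free -h

# Look for recent OOM-killer activity in the kernel ring buffer and syslog.
# Both lookups are best-effort: dmesg may need privileges, and the log path
# differs between distros (/var/log/messages vs /var/log/syslog).
dmesg 2>/dev/null | grep -iE 'out of memory|oom-killer' | tail -5
grep -iE 'out of memory|oom-killer' /var/log/messages /var/log/syslog 2>/dev/null | tail -5
```

If nothing shows up here, the "can't spawn thread" error is more likely a per-process limit (threads, file descriptors, or virtual memory) than the host actually running out of RAM.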
From: Tim Rozet [mailto:tro...@redhat.com]
Sent: Friday, 04 August, 2017 16:09
To: Juraj Linkes
Hi Juraj,

This error looks different from the open files limit problem. It actually
looks like an error in the python sqlalchemy call, complaining that there
isn't enough RAM to spawn a session thread. Is your box OOM?
You can check the daily job's functest results. I believe it deployed that