What I don't understand is why the OOM killer is being invoked when there
is almost no swap space being used at all. Check out the memory output when
it's killed:
http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_
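For reference, the numbers in question (free memory vs. swap actually in use) can be read directly from /proc/meminfo. A minimal sketch, not from the job itself, just the standard /proc format on Linux:

```python
def meminfo():
    """Return /proc/meminfo fields as a {name: kB} dict."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # first field is the value in kB
    return info

m = meminfo()
swap_used = m["SwapTotal"] - m["SwapFree"]
print("MemAvailable: %d kB" % m.get("MemAvailable", 0))
print("Swap used:    %d kB" % swap_used)
```

If swap usage here is near zero while the OOM killer fires, the pressure is elsewhere (e.g. unreclaimable allocations), which is exactly the puzzle above.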
On Thu, Jan 19, 2017 at 10:27 AM, Matt Riedemann wrote:
> On 1/18/2017 4:53 AM, Jens Rosenboom wrote:
>
>> To me it looks like the times of 2G are long gone, Nova is using
>> almost 2G all by itself. And 8G may be getting tight if additional
>> stuff like Ceph is being added.
>>
> I'm not really surprised at all about Nova being a memory hog with the
> versioned objects.
On 01/14/2017 02:48 AM, Jakub Libosvar wrote:
> recently I noticed we got oom-killer in action in one of our jobs [1].
> Any other ideas?
I spent quite a while chasing down similar things with centos a while
ago. I do have some ideas :)
The symptom is probably that mysql gets chosen by the OOM killer.
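For anyone digging into this: the kernel picks the victim with the highest badness score, which is exposed per process in /proc/<pid>/oom_score. A rough sketch (assumes a Linux /proc; not OpenStack code) to see why a large mysqld tops the list:

```python
import os

def oom_ranking(top=5):
    """Rank running processes by /proc/<pid>/oom_score.

    Higher score = more likely to be picked as the OOM victim; the score
    is driven largely by resident memory, so mysqld ranks high.
    """
    scores = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/oom_score" % pid) as f:
                score = int(f.read())
            with open("/proc/%s/comm" % pid) as f:
                comm = f.read().strip()
        except (OSError, IOError, ValueError):
            continue  # process exited while we were scanning
        scores.append((score, int(pid), comm))
    scores.sort(reverse=True)
    return scores[:top]

for score, pid, comm in oom_ranking():
    print("%6d %6d %s" % (score, pid, comm))
```

Lowering mysqld's /proc/<pid>/oom_score_adj is one way to steer the killer toward a less critical process, though that only moves the problem.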
On 1/18/2017 4:53 AM, Jens Rosenboom wrote:
> To me it looks like the times of 2G are long gone, Nova is using
> almost 2G all by itself. And 8G may be getting tight if additional
> stuff like Ceph is being added.

I'm not really surprised at all about Nova being a memory hog with the
versioned objects.
On 1/13/2017 9:48 AM, Jakub Libosvar wrote:
> Hi,
> recently I noticed we got oom-killer in action in one of our jobs [1]. I
> saw it several times, so far only with linux bridge job. The consequence
> is that usually mysqld gets killed as the process that consumes most of
> the memory, sometimes even nova-api gets killed.
2017-01-13 17:56 GMT+01:00 Clark Boylan:
> On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
>> Does anybody know whether we can bump memory on nodes in the gate
>> without losing resources for running other jobs?
>> Has anybody experience with memory consumption being higher when using
>> linux bridge agents?
2017-01-13 11:13 GMT-06:00 Kevin Benton:
> Sounds like we must have a memory leak in the Linux bridge agent if that's
> the only difference between the Linux bridge job and the ovs ones. Is there
> a bug tracking this?
Just created one [1]. For now, this issue was observed in two cases
(mentioned
Sounds like we must have a memory leak in the Linux bridge agent if that's
the only difference between the Linux bridge job and the ovs ones. Is there
a bug tracking this?
On Jan 13, 2017 08:58, "Clark Boylan" wrote:
> On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
> > Does anybody know whether we can bump memory on nodes in the gate
On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
> Does anybody know whether we can bump memory on nodes in the gate
> without losing resources for running other jobs?
> Has anybody experience with memory consumption being higher when using
> linux bridge agents?
>
> Any other ideas?
Id
On 2017-01-13 16:48:26 +0100 (+0100), Jakub Libosvar wrote:
[...]
> Does anybody know whether we can bump memory on nodes in the gate without
> losing resources for running other jobs?
[...]
We picked 8gb back when typical devstack-gate jobs only used around
2gb of memory, to make sure there was a
Hi,
recently I noticed we got oom-killer in action in one of our jobs [1]. I
saw it several times, so far only with linux bridge job. The consequence
is that usually mysqld gets killed as the process that consumes most of
the memory, sometimes even nova-api gets killed.
Does anybody know whether we can bump memory on nodes in the gate
without losing resources for running other jobs?
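The kill records referenced in [1] end up in syslog; a small sketch that pulls the victims out of such a log, assuming the usual "Out of memory: Kill process" wording (the exact phrasing can vary between kernel versions):

```python
import re

# Kernel OOM records in syslog typically look like:
#   ... kernel: Out of memory: Kill process 1234 (mysqld) score 500 ...
OOM_RE = re.compile(r"Out of memory: Kill process (\d+) \((\S+)\)")

def oom_victims(path):
    """Return (pid, name) for every OOM kill recorded in a syslog file."""
    victims = []
    with open(path) as f:
        for line in f:
            m = OOM_RE.search(line)
            if m:
                victims.append((int(m.group(1)), m.group(2)))
    return victims
```

Running this over the syslog.txt.gz files from several failed jobs would quickly show whether mysqld or nova-api is the usual victim.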