On 09/22/2016 09:36 AM, Gabriele Cerami wrote:
Hi,

As reported on this bug

https://bugs.launchpad.net/tripleo/+bug/1626483

HA gate and periodic jobs for master, and sometimes newton, have started
failing with errors related to memory shortage. Memory on the undercloud
instance was increased to 8 GB less than a month ago, so the problem
needs a different approach to be solved.

That was a pretty significant jump from the 6 GB we had before. Part of the motivation for going to 8 GB was to bring us in line with the rest of the infra Jenkins instances, so it would not be ideal to change it again.


We have some solutions in store. However, with the release date so
close, I don't think it's the time for this kind of change. So I thought
it could be a good compromise to temporarily increase the undercloud
instance memory to 12 GB, just for this week, unless there's a quick way
to reduce the memory footprint of heat-engine (usually the biggest
memory consumer on the undercloud instance).
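On the heat-engine point: the engine's worker count is already tunable in the undercloud's /etc/heat/heat.conf, so for anyone who wants to experiment this week, an untested sketch of that mitigation would be:

    [DEFAULT]
    # Each heat-engine worker is a separate Python process; the default
    # scales with the host CPU count (with a minimum of 4, IIRC), so a
    # single worker trades stack-processing throughput for a much
    # smaller footprint.
    num_engine_workers = 1

Whether one worker can keep up with an HA stack create in CI is exactly the part that would need validating before we rely on it.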

A temporary bump to 12 GB is fine for CI and the handful of us who have beefy development machines, but are we really at a point now where our memory usage _requires_ 12 GB on the undercloud and somewhere north of 6 GB on the overcloud nodes (we're also getting quite a few OOMs on overcloud nodes in HA deployments lately, with 6 GB instances)? For an HA deployment, that means 40 GB of memory just for the VMs, assuming 7 GB overcloud nodes (a 12 GB undercloud plus four of them). And _that's_ without ceph or the ability to test scale-up or... you get the idea.

Our developer hardware situation is bad enough as it is. Requiring a 64 GB box just to do one of the most common deploy types feels untenable to me. Would providing a worker config that reduces the number of worker processes be sufficient to keep us at 8 GB? We just added a similar thing to tripleo-heat-templates for the overcloud, so I think that would be reasonable.
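To make that concrete: the overcloud knob is a set of per-service worker parameters, so a low-memory environment file might look something like the sketch below (the file name is invented here, and the exact parameter names and coverage should be double-checked against current tripleo-heat-templates):

    # low-memory-ci.yaml -- hypothetical environment file; parameter
    # names assumed from tripleo-heat-templates, verify before use.
    # One worker per service instead of one per CPU.
    parameter_defaults:
      HeatWorkers: 1
      KeystoneWorkers: 1
      NovaWorkers: 1
      NeutronWorkers: 1
      GlanceWorkers: 1
      SwiftWorkers: 1

That would get passed at deploy time with something like 'openstack overcloud deploy --templates -e low-memory-ci.yaml'. An equivalent set of defaults on the undercloud is what would let us stay at 8 GB there.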

Mostly, though, we have to stop the steady bloat in the memory usage of even basic deployments. It took us less than a month to use up the extra 2 GB we gave ourselves last time. That's not a good trend. :-/


Any other ideas?

Thanks.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

