Also CC-ing os-ops, as someone else may have encountered this before and may have further/better advice...
On 27 September 2017 at 18:40, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> On 27 September 2017 at 18:14, Stephen Finucane <sfinu...@redhat.com> wrote:
>> What you're probably looking for is the 'reserved_host_memory_mb' option.
>> This defaults to 512 (at least in the latest master), so if you raise it
>> to 4192 or similar you should resolve the issue.
>
> I don't see how this would help given the problem description:
> reserved_host_memory_mb would only help avoid OOM when launching the last
> guest that would otherwise fit on a host according to Nova's simplified
> notion of memory capacity. It sounds like both CPU and NUMA pinning are in
> play here; otherwise the host would have no problem allocating RAM on a
> different NUMA node and OOM would be avoided.
>
> Jakub, your numbers sound reasonable to me, i.e., use 60 out of 64GB when
> only considering QEMU overhead. However, I would expect that might be a
> problem on NUMA node0, where there will be extra reserved memory regions
> for the kernel and devices. In a configuration where you want to pin
> multiple guests into each of multiple NUMA nodes, I think you may end up
> needing different flavor/instance-type configs (using less RAM) for node0
> versus the other NUMA nodes. I suggest freshly booting one of your
> hypervisors and then, with no guests running, taking a look at e.g.
> /proc/buddyinfo and /proc/zoneinfo to see what memory is used/available
> and where.
>
> --
> Cheers,
> ~Blairo

--
Cheers,
~Blairo

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
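To make the "freshly boot and inspect" suggestion concrete, here is one way to look at per-NUMA-node memory on an idle hypervisor. This is just a sketch using standard Linux /proc and /sys interfaces; exact fields vary by kernel version, and node0 is the only node guaranteed to exist on a single-socket box:

```shell
# Free pages per order, per zone. node0's DMA/DMA32 zones carry the
# kernel/device reservations that the other NUMA nodes do not have.
cat /proc/buddyinfo

# Per-zone detail: 'present' vs 'managed' page counts show how much of
# each node's RAM the kernel can actually hand out to userspace.
grep -E '^Node|present|managed' /proc/zoneinfo

# Per-NUMA-node totals from sysfs (MemTotal/MemFree per node), useful
# for sizing per-node flavors before pinning guests.
grep -E 'MemTotal|MemFree' /sys/devices/system/node/node*/meminfo
```

Comparing MemFree on node0 against the other nodes right after boot should show roughly how much smaller the node0 flavors need to be.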