On 09/07/2017 02:27 PM, Sahid Orentino Ferdjaoui wrote:
On Wed, Sep 06, 2017 at 11:57:25PM -0400, Jay Pipes wrote:
Sahid, Stephen, what are your thoughts on this?

On 09/06/2017 10:17 PM, Yaguang Tang wrote:
I think the fact that RamFilter can't deal with huge pages is a bug.
Due to this limitation, we have to strike a balance between normal
memory and huge pages in order to use RamFilter and NUMATopologyFilter
together. What do you think, Jay?

Huge pages support is built on top of the NUMA topology
implementation. You should consider isolating all hosts which are
going to handle NUMA instances into a specific host aggregate.

We don't want RAMFilter to handle any NUMA-related features (in the
Nova world: hugepages, pinning, realtime...), but we also don't want
it to block scheduling, and that should not be the case.

I'm surprised to see this bug; I think libvirt reports the total
amount of physical RAM available on the host, and that does not depend
on the size of the pages.

So if it is true that libvirt is reporting only the amount of
small-page memory available, we will probably have to fix that point
instead of the RAMFilter.

From what I can see, in Mitaka the RAMFilter uses "free_ram_mb", which is calculated from "self.compute_node.memory_mb" and "self.compute_node.memory_mb_used".
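
Roughly, as a sketch (the attribute names follow the Mitaka objects as I remember them, so treat the details as approximate):

    # How the scheduler's view of free RAM is derived, more or less:
    def free_ram_mb(compute_node):
        # memory_mb comes from the virt driver (for libvirt, via
        # virNodeGetInfo); memory_mb_used is derived from
        # /proc/meminfo (see below).
        return compute_node.memory_mb - compute_node.memory_mb_used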

memory_mb is calculated for libvirt as self._host.get_memory_mb_total(), which just calls libvirtmod.virNodeGetInfo() to get the memory available. As far as I can tell, this is the total amount of memory available on the system, without paying any attention to hugepages. (I ran "virsh nodeinfo" to check.)
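
If you want to check what libvirt reports without going through Nova, something like this should do it (assuming the libvirt Python bindings are installed; getInfo()[1] is the host's total memory, the same total "virsh nodeinfo" shows, just in MiB rather than KiB):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # getInfo() returns [model, memory_mb, cpus, mhz, nodes,
        # sockets, cores, threads]; index 1 is total memory in MiB.
        print('total memory: %d MiB' % conn.getInfo()[1])
    finally:
        conn.close()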

memory_mb_used is kind of messed up; it's calculated by running get_memory_mb_total() and subtracting the memory shown as available in /proc/meminfo.
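
If memory serves, "available" there is the sum of the MemFree, Buffers and Cached fields rather than the kernel's MemAvailable figure. A sketch of that calculation (a hypothetical helper, not the actual driver code):

    def get_memory_mb_used(total_mb):
        # Sum the fields /proc/meminfo reports in kB and subtract
        # the result from the libvirt total.
        avail_kb = 0
        with open('/proc/meminfo') as f:
            for line in f:
                fields = line.split()
                if fields[0] in ('MemFree:', 'Buffers:', 'Cached:'):
                    avail_kb += int(fields[1])
        return total_mb - avail_kb // 1024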

So from what I can see, RAMFilter should be working with all the memory and ignoring hugepages.

(There is some additional complexity involving limits and hypervisor overhead that I've skipped over, which might possibly affect what's going on in the original problem scenario.)
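
For completeness, the limits part is roughly the following; this is a paraphrase of the Mitaka RamFilter from memory, with the reserved-memory and overhead handling left out:

    def host_passes(host_state, requested_ram_mb):
        # ram_allocation_ratio turns the usable total into an
        # oversubscribed ceiling; a host passes if the request fits
        # under that ceiling after subtracting what is already used.
        total = host_state.total_usable_ram_mb
        limit_mb = total * host_state.ram_allocation_ratio
        used_mb = total - host_state.free_ram_mb
        return limit_mb - used_mb >= requested_ram_mb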

Chris
