Hi Toni,

The short answer is that you cannot do that today: each Nova compute node is either "all in" for NUMA and CPU pinning or it's not.

This means that for resource-constrained environments like "The Edge!", there is no good way to finely divide up a compute node and make the most efficient use of its resources.

There is currently no way to say "On this dual-Xeon compute node, put all workloads that don't care about dedicated CPUs on this socket and all workloads that DO care about dedicated CPUs on the other socket."
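
What you can do today is split at the host level instead of the socket level: keep the compute nodes you've set aside for pinned workloads in their own host aggregate and point flavors with hw:cpu_policy=dedicated at it. A rough sketch using python-novaclient follows; the flavor, aggregate, host names, and credentials are just placeholders, and it assumes the AggregateInstanceExtraSpecsFilter is enabled in the scheduler:

    from keystoneauth1 import loading, session
    from novaclient import client

    # Placeholder credentials; substitute your own cloud's values.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                    username='admin', password='secret',
                                    project_name='admin',
                                    user_domain_id='default',
                                    project_domain_id='default')
    sess = session.Session(auth=auth)
    nova = client.Client('2', session=sess)

    # Flavor whose instances get dedicated (pinned) host CPUs.
    pinned = nova.flavors.create(name='pinned.medium', ram=4096,
                                 vcpus=4, disk=40)
    pinned.set_keys({'hw:cpu_policy': 'dedicated'})

    # Host aggregate holding the compute nodes reserved for pinning.
    agg = nova.aggregates.create('pinned-hosts', None)
    nova.aggregates.add_host(agg, 'compute-pinned-01')
    nova.aggregates.set_metadata(agg, {'pinned': 'true'})

    # Tie the flavor to the aggregate so pinned instances land only
    # on those hosts (requires AggregateInstanceExtraSpecsFilter).
    pinned.set_keys({'aggregate_instance_extra_specs:pinned': 'true'})

The downside is exactly what I described above: the split is per-host, not per-socket, so the idle cores on a lightly-used pinned host can't be lent out to shared workloads.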

That said, we have had lengthy discussions about tracking dedicated guest CPU resources and dividing the available logical host processors into buckets for "shared CPU" and "dedicated CPU" workloads in the following spec:

https://review.openstack.org/#/c/555081/
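
The gist of that spec is that the operator would carve a host's logical processors into two sets in nova.conf, something along these lines (the option names are still being hashed out in the spec review, so treat this as illustrative only):

    [compute]
    # Host CPUs usable by "shared CPU" (non-pinned) guests
    cpu_shared_set = 0-7
    # Host CPUs reserved for "dedicated CPU" (pinned) guests
    cpu_dedicated_set = 8-15

On your dual-Xeon example, you would point one set at each socket's processors, which is exactly the split you're after.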

It is not going to land in Rocky. However, we should be able to make good progress towards the goals in that spec in early Stein.

Best,
-jay

On 07/04/2018 11:08 AM, Toni Mueller wrote:

Hi,

I am still trying to figure out how to best utilise the small set of
hardware, and I discovered the NUMA configuration mechanism. It allows
me to configure reserved cores for certain VMs, but it does not seem to
allow me to say "you can share these cores, but VMs of, say, an
appropriate flavour take precedence and will throw you off these cores
in case they need more power".

How can I achieve that, dynamically?

TIA!


Thanks,
Toni

