Daniel P. Berrange wrote:
The only problem is the default for the host side, as libnuma requires
the nodes to be named explicitly. Maybe make the pin: part _not_
optional? I would at least want to pin the memory; the VCPUs are more
debatable...
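
For context, a minimal sketch (not from any patch in this thread) of what
explicit node naming looks like on the host side with libnuma; the node
string "0,1" is purely illustrative:

    /* host-side membind sketch; build with: gcc membind.c -lnuma */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this host\n");
            return EXIT_FAILURE;
        }

        /* libnuma wants the nodes named explicitly, i.e. the pin: part */
        struct bitmask *nodes = numa_parse_nodestring("0,1");
        if (!nodes) {
            fprintf(stderr, "invalid node string\n");
            return EXIT_FAILURE;
        }

        /* restrict future allocations of this process to those nodes */
        numa_set_membind(nodes);
        numa_free_nodemask(nodes);
        return EXIT_SUCCESS;
    }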
I think keeping it optional makes things more flexible for people
invoking KVM. If omitted, then query current CPU pinning to determine
which host NUMA nodes to allocate from.
Well, -numa itself is optional. But yes, we could use the default CPU
affinity mask to derive the default host NUMA nodes.
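
A rough sketch of how that fallback could work, assuming nothing more than
sched_getaffinity plus libnuma's numa_node_of_cpu (the function name and
structure here are my own illustration, not code from the patch):

    /* derive the host NUMA node set covered by the current affinity mask */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <numa.h>

    static struct bitmask *nodes_from_affinity(void)
    {
        cpu_set_t cpus;
        struct bitmask *nodes;
        int cpu, node;

        if (sched_getaffinity(0, sizeof(cpus), &cpus) < 0)
            return NULL;

        nodes = numa_allocate_nodemask();
        for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
            if (!CPU_ISSET(cpu, &cpus))
                continue;
            node = numa_node_of_cpu(cpu);
            if (node >= 0)
                numa_bitmask_setbit(nodes, node);
        }
        return nodes;   /* caller frees with numa_free_nodemask() */
    }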
The topology exposed to a guest will likely be the same every time
you launch a particular VM, while the guest <-> host pinning is a
point-in-time decision based on currently available resources.
Thus some apps / users may find it more convenient to have a fixed set
of args they always use to invoke the KVM process, and instead control
placement during the fork/exec'ing of KVM by explicitly calling
sched_setaffinity or using numactl to launch. It should be easy enough
to use sched_getaffinity to query the current pinning and from that
determine appropriate NUMA nodes, if they leave out the pin=XXXX arg.
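
For illustration, a minimal launcher along those lines, assuming a plain
fork + sched_setaffinity + exec; the binary name and the -numa arguments
are placeholders rather than the exact syntax under discussion:

    /* launcher sketch: pin first, then exec an unmodified KVM command line */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        cpu_set_t cpus;
        pid_t pid;
        int i;

        /* pin to CPUs 0-3 of, say, host node 0 (purely illustrative) */
        CPU_ZERO(&cpus);
        for (i = 0; i < 4; i++)
            CPU_SET(i, &cpus);

        pid = fork();
        if (pid == 0) {
            if (sched_setaffinity(0, sizeof(cpus), &cpus) < 0) {
                perror("sched_setaffinity");
                _exit(EXIT_FAILURE);
            }
            /* binary name and -numa args are placeholders */
            execlp("qemu-system-x86_64", "qemu-system-x86_64",
                   "-smp", "4", "-m", "4096",
                   "-numa", "node,mem=2048,cpus=0-1",
                   "-numa", "node,mem=2048,cpus=2-3",
                   (char *)NULL);
            perror("execlp");
            _exit(EXIT_FAILURE);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

The numactl route is the same idea without writing any code, e.g.
"numactl --cpunodebind=0 --membind=0 <kvm command line>".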
I agree, nice idea.
--
error compiling committee.c: too many arguments to function