André Przywara wrote:
>>> But I wouldn't load the admin with the burden of pinning, but let this be done by QEMU/KVM. Maybe one could introduce a way to tell QEMU/KVM to not pin the threads.
>> This is where things start to get ugly...
> Why? qemu-system-x86_64 -numa 2,pin:none and then use whatever method you prefer (taskset, monitor) to pin the VCPUs (or leave them unpinned).

I agree that for a plain -numa 2, for example, no host binding should occur. Pinning memory or cpus to nodes should only happen if the user explicitly requests it; otherwise we risk breaking the host scheduler's load balancing.

If the user chooses to pin, the responsibility is on them. If not, we should allow the host to do its thing.
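
For illustration, the explicit-pinning path could look like this. This is a sketch only: "pin:none" is the syntax proposed above, not an existing QEMU option, and the thread IDs and CPU ranges are invented (in practice the vCPU thread IDs would come from the monitor or from /proc/<pid>/task):

 # launch with two guest nodes and no host binding
 qemu-system-x86_64 -numa 2,pin:none [...]

 # only if the admin explicitly wants pinning: bind the vCPU threads
 taskset -pc 0-3 12345    # vCPU 0 -> host CPUs 0-3
 taskset -pc 4-7 12346    # vCPU 1 -> host CPUs 4-7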

// similar to numactl --hardware; * means all nodes (no pinning)

// static pinning: guest node 0 -> host node 0, guest node 1 -> host node 3
 > numa pin:0;3

// guest node 0 -> all nodes, guest node 1: keep as it is
// or maybe: numa pin:0-3;
 > numa pin:*;

// live re-pinning, using the same per-guest-node fields (see below)
 > numa migrate:1;2

I suggest using exactly the same syntax as the command-line option. QEMU would compute the difference between the current configuration and the desired configuration and migrate VCPUs and memory as needed.
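
To make that concrete, here is a rough sketch of what moving from "pin:0;3" to "migrate:1;2" would amount to, expressed with existing userspace tools rather than QEMU internals (all PIDs, thread IDs, and CPU ranges are invented; internally QEMU would use the corresponding syscalls, e.g. sched_setaffinity() and mbind()/move_pages(), per guest node rather than per process):

 # guest node 0: host node 0 -> host node 1
 taskset -pc 4-7 12345      # re-bind node 0's vCPU threads to node 1's CPUs
 migratepages 12340 0 1     # move the pages sitting on host node 0

 # guest node 1: host node 3 -> host node 2
 taskset -pc 8-11 12346
 migratepages 12340 3 2

Note that migratepages(8) from the numactl package moves a whole process's pages off the given node, which only matches the per-guest-node semantics here because the two guest nodes start on disjoint host nodes; QEMU itself would migrate each guest node's memory region individually.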

--
error compiling committee.c: too many arguments to function
