Hi Andre,
This patch series needs to be posted to qemu-devel. I know qemu doesn't
do true SMP yet, but it will in the relatively near future. Either way,
some of the design points need review from a larger audience than is
present on kvm-devel.
I'm not a big fan of the libnuma dependency. I'm willing to concede
this if there's wide agreement that we should support this directly in
QEMU.
I don't think there's such a thing as a casual NUMA user. The default
NUMA policy in Linux is node-local memory. As long as a VM is smaller
than a single node, everything will work out fine.
In the event that the VM is larger than a single node, a user creating
it via qemu-system-x86_64 is either not going to care at all about
NUMA, or will be familiar enough with the numactl tools that they'll
probably just want to use those. Once you've got your head around the
fact that VCPUs are just threads and the memory is just a shared memory
segment, any knowledgeable sysadmin will have no problem doing whatever
sort of NUMA layout they want.
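For example (just a sketch; the memory size, VCPU count and disk image
are made up), confining an entire guest to host node 0 is as simple as:

    numactl --cpunodebind=0 --membind=0 \
        qemu-system-x86_64 -m 2048 -smp 2 guest.img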
The other case is where management tools are creating VMs. In this
case, it's probably better to use numactl as an external tool because
it keeps things consistent wrt CPU pinning.
There's also a good argument for not introducing CPU pinning directly
into QEMU. There are multiple ways to do CPU pinning effectively: you
can use taskset, cpusets, or even something like libcgroup.
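For instance (the TID and CPU list here are made up), once you know the
TID of a VCPU thread (they show up under /proc/<pid>/task), you can pin
it from the outside with:

    taskset -p -c 0,1 12345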
If you refactor the series so that the libnuma patch is the very last
one and submit to qemu-devel, I'll review and apply the first three
patches. We can continue to discuss the last patch independently of the
first three if needed.
Regards,
Anthony Liguori
Andre Przywara wrote:
Hi,
this patch series introduces support for multiple NUMA nodes within
KVM guests.
This is the second try, incorporating several requests from the list:
- use the QEMU firmware configuration interface instead of CMOS-RAM
- detect presence of libnuma automatically, can be disabled with
./configure --disable-numa
This only applies to the host side; the command line and guest (BIOS)
side are always built and functional, although that configuration
is only useful for research and debugging
- use a more flexible command line interface allowing:
- specifying the distribution of memory across the guest nodes:
mem:1536M;512M
- specifying the distribution of the CPUs:
cpu:0-2;3
- specifying the host nodes the guest nodes should be pinned to:
pin:3;2
All of these options are optional; if mem: or cpu: is omitted, the
resources are split equally across all guest nodes. Please note that,
at least in Linux, SRAT takes precedence over E820, so the total usable
memory will be the sum specified with the mem: option (although QEMU
will still allocate the amount given with -m).
If pin: is omitted, the guest nodes will be pinned to whichever host
nodes the threads happen to be scheduled on at start-up time. This
requires the getcpu (v)syscall to be usable, which is the case for
kernels from 2.6.19 on and glibc >= 2.6 (sched_getcpu()). I have a hack
for the case where glibc doesn't support this; tell me if you are
interested.
The only non-optional argument is the number of guest nodes; a
possible command line looks like:
-numa 3,mem:1024M;512M;512M,cpu:0-1;2;3
Please note that you have to quote the semicolons on the shell.
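For example (the memory size and disk image are just placeholders),
putting the whole argument in double quotes does the job:

    qemu-system-x86_64 -m 2048 \
        -numa "3,mem:1024M;512M;512M,cpu:0-1;2;3" disk.img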
The monitor command is left out for now and will be sent later.
Please apply.
Regards,
Andre.
Signed-off-by: Andre Przywara <[EMAIL PROTECTED]>