Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
Richard W.M. Jones wrote:
> beth kon wrote:
>> Patch for accessing available memory.
>>
>> --- libvirt.danielpatch/src/driver.h	2007-09-11 15:29:43.000000000 -0400
>> +++ libvirt.cellsMemory/src/driver.h	2007-09-27 18:39:52.000000000 -0400
>> @@ -258,8 +258,9 @@ typedef virDriver *virDriverPtr;
>>  typedef int
>>      (*virDrvNodeGetCellsFreeMemory) (virConnectPtr conn,
>> -                                     unsigned long *freeMems,
>> -                                     int nbCells);
>> +                                     long long *freeMems,
>
> This needs to be declared unsigned long long. If you configure with
> --enable-compile-warnings=error then the compiler will catch these
> sorts of errors.
>
>> --- libvirt.danielpatch/src/xend_internal.c	2007-09-10 17:35:39.000000000 -0400
>> +++ libvirt.cellsMemory/src/xend_internal.c	2007-09-27 18:39:52.000000000 -0400
>> @@ -1954,6 +1954,8 @@ xenDaemonOpen(virConnectPtr conn, const
>>  {
>>      xmlURIPtr uri = NULL;
>>      int ret;
>> +
>> +    virNodeInfo nodeInfo;
>
> This variable is never used.
>
> [ And from part 2/2 of the patch ]
>
>> + * getNumber: sscanf?

The reason I created getNumber is that I also wanted to find the length
of the segment, so I could add it to the parsing offset to check what
was next in the string. That level of checking may be unnecessary
(overkill), and in any case could be achieved more easily using
something like sscanf on some token portion of the string. As I said, I
am *certain* there is a prettier way to do this!

> [ And in general ]
>
> I compiled this version and was hoping to test it, but I don't seem to
> have the right combination of Xen to make it work. At least I don't see
> any <topology> section in the XML capabilities. What patches do I need
> for Xen to make this work? I have a 2-socket AMD machine which I assume
> should work with this.

Daniel has built the kernel and xen rpms with the needed patches.

> Rich.

-- 
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: [EMAIL PROTECTED]

--
Libvir-list mailing list
Libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
beth kon wrote:
> Patch for accessing available memory.
>
> --- libvirt.danielpatch/src/driver.h	2007-09-11 15:29:43.000000000 -0400
> +++ libvirt.cellsMemory/src/driver.h	2007-09-27 18:39:52.000000000 -0400
> @@ -258,8 +258,9 @@ typedef virDriver *virDriverPtr;
>  typedef int
>      (*virDrvNodeGetCellsFreeMemory) (virConnectPtr conn,
> -                                     unsigned long *freeMems,
> -                                     int nbCells);
> +                                     long long *freeMems,

This needs to be declared unsigned long long. If you configure with
--enable-compile-warnings=error then the compiler will catch these
sorts of errors.

> --- libvirt.danielpatch/src/xend_internal.c	2007-09-10 17:35:39.000000000 -0400
> +++ libvirt.cellsMemory/src/xend_internal.c	2007-09-27 18:39:52.000000000 -0400
> @@ -1954,6 +1954,8 @@ xenDaemonOpen(virConnectPtr conn, const
>  {
>      xmlURIPtr uri = NULL;
>      int ret;
> +
> +    virNodeInfo nodeInfo;

This variable is never used.

[ And from part 2/2 of the patch ]

> + * getNumber: sscanf?

[ And in general ]

I compiled this version and was hoping to test it, but I don't seem to
have the right combination of Xen to make it work. At least I don't see
any <topology> section in the XML capabilities. What patches do I need
for Xen to make this work? I have a 2-socket AMD machine which I assume
should work with this.

Rich.

-- 
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod
Street, Windsor, Berkshire, SL4 1TE, United Kingdom.  Registered in
England and Wales under Company Registration No. 03798903
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
Daniel P. Berrange wrote:
> On Fri, Sep 28, 2007 at 02:52:51PM +0100, Richard W.M. Jones wrote:
>> # src/virsh capabilities
>> [...]
>>   <topology>
>>     <cells num='1'>
>>       <cell id='0'>
>>         <cpus num='4'>
>>           <cpu id='0'/>
>>           <cpu id='1'/>
>>           <cpu id='2'/>
>>           <cpu id='3'/>
>>         </cpus>
>>       </cell>
>>     </cells>
>>   </topology>
>
> Do we really need such verbose XML? At the very least the 'num'
> attribute is redundant, since you can trivially do
> count(/topology/cells/cell) or
> count(/topology/cells/cell[@id='0']/cpus/cpu) XPath exprs in both
> cases.
>
> The addition of extra tags every time we have a list is not the style
> we have normally used in libvirt. e.g., we don't use
>
>   <disks>
>     <disk>..</disk>
>     <disk>..</disk>
>   </disks>
>
> to surround the list of disks in a domain. I'd prefer to see it
> looking more like this:
>
>   <topology>
>     <cell id='0'>
>       <cpu id='0'/>
>       <cpu id='1'/>
>       <cpu id='2'/>
>       <cpu id='3'/>
>     </cell>
>   </topology>
>
> Regards,
> Dan.

That would simplify the code, since the counts wouldn't need to be
known up front. This was the format suggested by Daniel V., and I used
it, assuming he knows more about libvirt's desired/required xml
structure.

-- 
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: [EMAIL PROTECTED]
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
My results are a bit inconclusive.

I have a machine here which supposedly supports NUMA (2-socket, 2-core
AMD with HyperTransport and two separate banks of RAM). The BIOS is
_not_ configured to interleave memory. Other BIOS settings lead me to
suppose that NUMA is enabled (or at least not disabled). Booting with
Daniel's Xen kernel does not give any messages about NUMA being
enabled or disabled. (See attached messages.)

  # numactl --show
  physcpubind: 0 1 2 3
  No NUMA support available on this system.

  $ grep -i numa /boot/config-2.6.18-numa.52.el5xen
  [ no configuration lines shown ]

Nevertheless, with Beth's patch I see:

  # src/virsh freecell 0
  0: 133292032 kB
  # src/virsh freecell 1
  libvir: Xen error : invalid argument in
  xenHypervisorNodeGetCellsFreeMemory: invalid argument
  # src/virsh freecell
  Total: 133292032 kB
  # src/virsh freecell -2
  -2: 0 kB

(Negative numbers should be returning an error.)

  # src/virsh capabilities
  [...]
  <topology>
    <cells num='1'>
      <cell id='0'>
        <cpus num='4'>
          <cpu id='0'/>
          <cpu id='1'/>
          <cpu id='2'/>
          <cpu id='3'/>
        </cpus>
      </cell>
    </cells>
  </topology>
  [...]

Rich.

-- 
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod
Street, Windsor, Berkshire, SL4 1TE, United Kingdom.  Registered in
England and Wales under Company Registration No.
03798903

[Xen ASCII-art boot banner]
 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.1.0-numa.52.el5 (veillard@) (gcc version 4.1.2 20070626
 (Red Hat 4.1.2-14)) Thu Sep 27 14:51:13 CEST 2007
 Latest ChangeSet: unavailable

(XEN) Command line: (hd0,5)/xen.gz-2.6.18-numa.52.el5 noreboot
(XEN)  - 0009a000 (usable)
(XEN) 0009ac00 - 000a (reserved)
(XEN) 000d2000 - 0010 (reserved)
(XEN) 0010 - adf0 (usable)
(XEN) adf0 - adf0d000 (ACPI data)
(XEN) adf0d000 - adf8 (ACPI NVS)
(XEN) adf8 - ae00 (reserved)
(XEN) e000 - f000 (reserved)
(XEN) fec0 - fec1 (reserved)
(XEN) fee0 - fee01000 (reserved)
(XEN) fff0 - 0001 (reserved)
(XEN) 0001 - 00015200 (usable)
(XEN) System RAM: 4094MB (4192872kB)
(XEN) Xen heap: 13MB (14104kB)
(XEN) Domain heap initialised: DMA width 32 bits
(XEN) Processor #0 15:1 APIC version 16
(XEN) Processor #1 15:1 APIC version 16
(XEN) Processor #2 15:1 APIC version 16
(XEN) Processor #3 15:1 APIC version 16
(XEN) IOAPIC[0]: apic_id 4, version 17, address 0xfec0, GSI 0-23
(XEN) IOAPIC[1]: apic_id 5, version 17, address 0xd000, GSI 24-47
(XEN) Enabling APIC mode: Flat. Using 2 I/O APICs
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2814.461 MHz processor.
(XEN) AMD SVM: ASIDs disabled.
(XEN) HVM: SVM enabled
(XEN) CPU0: AMD Dual-Core AMD Opteron(tm) Processor 2220 stepping 03
(XEN) Mapping cpu 0 to node 255
(XEN) Booting processor 1/1 eip 9
(XEN) Mapping cpu 1 to node 255
(XEN) AMD: Disabling C1 Clock Ramping Node #0
(XEN) AMD: Disabling C1 Clock Ramping Node #1
(XEN) AMD SVM: ASIDs disabled.
(XEN) CPU1: AMD Dual-Core AMD Opteron(tm) Processor 2220 stepping 03
(XEN) Booting processor 2/2 eip 9
(XEN) Mapping cpu 2 to node 255
(XEN) AMD SVM: ASIDs disabled.
(XEN) CPU2: AMD Dual-Core AMD Opteron(tm) Processor 2220 stepping 03
(XEN) Booting processor 3/3 eip 9
(XEN) Mapping cpu 3 to node 255
(XEN) AMD SVM: ASIDs disabled.
(XEN) CPU3: AMD Dual-Core AMD Opteron(tm) Processor 2220 stepping 03
(XEN) Total of 4 processors activated.
(XEN) ENABLING IO-APIC IRQs
(XEN)  - Using new ACK method
(XEN) Platform timer is 25.000MHz HPET
(XEN) Brought up 4 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x8020 memsz=0x2c3c58
(XEN) elf_parse_binary: phdr: paddr=0x804c3c80 memsz=0x113270
(XEN) elf_parse_binary: phdr: paddr=0x805d7000 memsz=0xc08
(XEN) elf_parse_binary: phdr: paddr=0x805d8000 memsz=0x10af04
(XEN) elf_parse_binary: memory: 0x8020 - 0x806e2f04
(XEN) elf_xen_parse_note: GUEST_OS = linux
(XEN) elf_xen_parse_note: GUEST_VERSION = 2.6
(XEN) elf_xen_parse_note: XEN_VERSION = xen-3.0
(XEN) elf_xen_parse_note: VIRT_BASE =
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
Richard W.M. Jones wrote:
> beth kon wrote:
>> Patch for accessing available memory.
>>
>> --- libvirt.danielpatch/src/driver.h	2007-09-11 15:29:43.000000000 -0400
>> +++ libvirt.cellsMemory/src/driver.h	2007-09-27 18:39:52.000000000 -0400
>> @@ -258,8 +258,9 @@ typedef virDriver *virDriverPtr;
>>  typedef int
>>      (*virDrvNodeGetCellsFreeMemory) (virConnectPtr conn,
>> -                                     unsigned long *freeMems,
>> -                                     int nbCells);
>> +                                     long long *freeMems,
>
> This needs to be declared unsigned long long. If you configure with
> --enable-compile-warnings=error then the compiler will catch these
> sorts of errors.
>
>> --- libvirt.danielpatch/src/xend_internal.c	2007-09-10 17:35:39.000000000 -0400
>> +++ libvirt.cellsMemory/src/xend_internal.c	2007-09-27 18:39:52.000000000 -0400
>> @@ -1954,6 +1954,8 @@ xenDaemonOpen(virConnectPtr conn, const
>>  {
>>      xmlURIPtr uri = NULL;
>>      int ret;
>> +
>> +    virNodeInfo nodeInfo;
>
> This variable is never used.

Somehow I missed this part of the note last time. Thanks for the
catches.

> [ And from part 2/2 of the patch ]
>
>> + * getNumber: sscanf?
>
> [ And in general ]
>
> I compiled this version and was hoping to test it, but I don't seem to
> have the right combination of Xen to make it work. At least I don't see
> any <topology> section in the XML capabilities. What patches do I need
> for Xen to make this work? I have a 2-socket AMD machine which I assume
> should work with this.

Daniel's RPMs are at http://veillard.com/NUMA/

> Rich.

-- 
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: [EMAIL PROTECTED]
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
* Richard W.M. Jones <[EMAIL PROTECTED]> [2007-09-28 11:20]:
> beth kon wrote:
>> Richard W.M. Jones wrote:
>>> My results are a bit inconclusive.
>>>
>>> I have a machine here which supposedly supports NUMA (2-socket,
>>> 2-core AMD with HyperTransport and two separate banks of RAM). The
>>> BIOS is _not_ configured to interleave memory. Other BIOS settings
>>> lead me to suppose that NUMA is enabled (or at least not disabled).
>>> Booting with Daniel's Xen kernel does not give any messages about
>>> NUMA being enabled or disabled. (See attached messages.)
>>>
>>>   # numactl --show
>>>   physcpubind: 0 1 2 3
>>>   No NUMA support available on this system.
>>
>> Are you setting numa=on dom0_mem=512m on the kernel line in grub? I'm
>> not sure if the dom0_mem=512m should be required, but we were having
>> problems when trying to boot numa without it.
>
> Aha, the results are quite a bit better now :-)
>
> virsh shows the correct topology:
>
>   <topology>
>     <cells num='2'>
>       <cell id='0'>
>         <cpus num='2'>
>           <cpu id='0'/>
>           <cpu id='1'/>
>         </cpus>
>       </cell>
>       <cell id='1'>
>         <cpus num='2'>
>           <cpu id='2'/>
>           <cpu id='3'/>
>         </cpus>
>       </cell>
>     </cells>
>   </topology>
>
> numactl --show still doesn't work (missing support in the dom0
> kernel, or is this just completely incompatible with Xen?)

Currently Xen doesn't export any per-domain topology (say a virtual
SRAT table), nor the entire system topology; the goal of the current
Xen NUMA code is to ensure that domains have local resources within a
numa node.

> 'virsh freecell 0' and 'virsh freecell 1' show numbers which are
> plausible (I have no idea if they're actually correct, though).
>
> Can I pin a domain or vCPU to memory to see if that works?
>
> Rich.
>
> -- 
> Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod
> Street, Windsor, Berkshire, SL4 1TE, United Kingdom.  Registered in
> England and Wales under Company Registration No.
> 03798903

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
[EMAIL PROTECTED]
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
beth kon wrote:
> Richard W.M. Jones wrote:
>> My results are a bit inconclusive.
>>
>> I have a machine here which supposedly supports NUMA (2-socket,
>> 2-core AMD with HyperTransport and two separate banks of RAM). The
>> BIOS is _not_ configured to interleave memory. Other BIOS settings
>> lead me to suppose that NUMA is enabled (or at least not disabled).
>> Booting with Daniel's Xen kernel does not give any messages about
>> NUMA being enabled or disabled. (See attached messages.)
>>
>>   # numactl --show
>>   physcpubind: 0 1 2 3
>>   No NUMA support available on this system.
>
> Are you setting numa=on dom0_mem=512m on the kernel line in grub? I'm
> not sure if the dom0_mem=512m should be required, but we were having
> problems when trying to boot numa without it.

Aha, the results are quite a bit better now :-)

virsh shows the correct topology:

  <topology>
    <cells num='2'>
      <cell id='0'>
        <cpus num='2'>
          <cpu id='0'/>
          <cpu id='1'/>
        </cpus>
      </cell>
      <cell id='1'>
        <cpus num='2'>
          <cpu id='2'/>
          <cpu id='3'/>
        </cpus>
      </cell>
    </cells>
  </topology>

numactl --show still doesn't work (missing support in the dom0 kernel,
or is this just completely incompatible with Xen?)

'virsh freecell 0' and 'virsh freecell 1' show numbers which are
plausible (I have no idea if they're actually correct, though).

Can I pin a domain or vCPU to memory to see if that works?

Rich.

-- 
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
Richard W.M. Jones wrote:
> My results are a bit inconclusive.
>
> I have a machine here which supposedly supports NUMA (2-socket,
> 2-core AMD with HyperTransport and two separate banks of RAM). The
> BIOS is _not_ configured to interleave memory. Other BIOS settings
> lead me to suppose that NUMA is enabled (or at least not disabled).
> Booting with Daniel's Xen kernel does not give any messages about
> NUMA being enabled or disabled. (See attached messages.)
>
>   # numactl --show
>   physcpubind: 0 1 2 3
>   No NUMA support available on this system.

Are you setting numa=on dom0_mem=512m on the kernel line in grub? I'm
not sure if the dom0_mem=512m should be required, but we were having
problems when trying to boot numa without it. It appears from the
results you show that numa is not on.

>   $ grep -i numa /boot/config-2.6.18-numa.52.el5xen
>   [ no configuration lines shown ]
>
> Nevertheless, with Beth's patch I see:
>
>   # src/virsh freecell 0
>   0: 133292032 kB
>   # src/virsh freecell 1
>   libvir: Xen error : invalid argument in
>   xenHypervisorNodeGetCellsFreeMemory: invalid argument
>   # src/virsh freecell
>   Total: 133292032 kB
>   # src/virsh freecell -2
>   -2: 0 kB
>
> (Negative numbers should be returning an error.)

Yes, a check for negative numbers would make sense.

>   # src/virsh capabilities
>   [...]
>   <topology>
>     <cells num='1'>
>       <cell id='0'>
>         <cpus num='4'>
>           <cpu id='0'/>
>           <cpu id='1'/>
>           <cpu id='2'/>
>           <cpu id='3'/>
>         </cpus>
>       </cell>
>     </cells>
>   </topology>
>   [...]

This is how the information would be interpreted for a non-numa box:
all cpus on one node. You can compare the results with those in xm
info. So if you didn't have numa=on set in grub, this makes sense.
Otherwise, something is wrong.

> Rich.
[Attached Xen boot log, identical to the one in the earlier message]
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
On Fri, Sep 28, 2007 at 02:52:51PM +0100, Richard W.M. Jones wrote:
> # src/virsh capabilities
> [...]
>   <topology>
>     <cells num='1'>
>       <cell id='0'>
>         <cpus num='4'>
>           <cpu id='0'/>
>           <cpu id='1'/>
>           <cpu id='2'/>
>           <cpu id='3'/>
>         </cpus>
>       </cell>
>     </cells>
>   </topology>

Do we really need such verbose XML? At the very least the 'num'
attribute is redundant, since you can trivially do
count(/topology/cells/cell) or
count(/topology/cells/cell[@id='0']/cpus/cpu) XPath exprs in both
cases.

The addition of extra tags every time we have a list is not the style
we have normally used in libvirt. e.g., we don't use

  <disks>
    <disk>..</disk>
    <disk>..</disk>
  </disks>

to surround the list of disks in a domain. I'd prefer to see it
looking more like this:

  <topology>
    <cell id='0'>
      <cpu id='0'/>
      <cpu id='1'/>
      <cpu id='2'/>
      <cpu id='3'/>
    </cell>
  </topology>

Regards,
Dan.

-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=|
Re: [Libvir] [RFC][PATCH 1/2] Tested NUMA patches for available memory and topology
On Fri, Sep 28, 2007 at 06:42:10AM -0400, beth kon wrote:
> Richard W.M. Jones wrote:
>> beth kon wrote:
>>> Patch for accessing available memory.
>>>
>>> --- libvirt.danielpatch/src/driver.h	2007-09-11 15:29:43.000000000 -0400
>>> +++ libvirt.cellsMemory/src/driver.h	2007-09-27 18:39:52.000000000 -0400
>>> @@ -258,8 +258,9 @@ typedef virDriver *virDriverPtr;
>>>  typedef int
>>>      (*virDrvNodeGetCellsFreeMemory) (virConnectPtr conn,
>>> -                                     unsigned long *freeMems,
>>> -                                     int nbCells);
>>> +                                     long long *freeMems,
>>
>> This needs to be declared unsigned long long. If you configure with
>> --enable-compile-warnings=error then the compiler will catch these
>> sorts of errors.
>>
>>> --- libvirt.danielpatch/src/xend_internal.c	2007-09-10 17:35:39.000000000 -0400
>>> +++ libvirt.cellsMemory/src/xend_internal.c	2007-09-27 18:39:52.000000000 -0400
>>> @@ -1954,6 +1954,8 @@ xenDaemonOpen(virConnectPtr conn, const
>>>  {
>>>      xmlURIPtr uri = NULL;
>>>      int ret;
>>> +
>>> +    virNodeInfo nodeInfo;
>>
>> This variable is never used.
>>
>> [ And from part 2/2 of the patch ]
>>
>>> + * getNumber: sscanf?
>
> The reason I created this is because I also wanted to find the length
> of the segment so I could add it to the parsing offset to check what
> was next in the string. That level of checking may be unnecessary
> (overkill), and in any case could be more easily achieved using
> something like sscanf for some token portion of the string. As I
> said, I am *certain* there is a prettier way to do this!
>
>> [ And in general ]
>>
>> I compiled this version and was hoping to test it, but I don't seem
>> to have the right combination of Xen to make it work. At least I
>> don't see any <topology> section in the XML capabilities. What
>> patches do I need for Xen to make this work? I have a 2-socket AMD
>> machine which I assume should work with this.
>
> Daniel has built the kernel and xen rpms with the needed patches.

People can fetch those (based on the RHEL-5.1 rpm base) from

  http://veillard.com/NUMA/

The kernel-xen and xen(-devel) packages should be sufficient.
The server is on my ADSL line, so please do not DoS it or I will be
even slower than usual :-)

Daniel

-- 
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard      | virtualization library  http://libvirt.org/
[EMAIL PROTECTED]    | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine  http://rpmfind.net/