Re: [PATCH 1/4] hmat acpi: Don't require initiator value in -numa
On 28/06/2022 at 16:19, Igor Mammedov wrote:
> On Thu, 23 Jun 2022 16:58:28 +0200 Brice Goglin wrote:
>
> [...]
>
>> Corresponding changes in the HMAT MPDA structure:
>>
>> -[084h 0132 4] Attached Initiator Proximity Domain : 0001
>> +[084h 0132 4] Attached Initiator Proximity Domain : 0080
>
> where does this value come from?

This is "#define MAX_NODES 128", the default value of the initiator
field in QEMU (128 is 0x80). But it is meaningless here because the
"Processor Proximity Domain Valid" flag above is 0.

Brice
Re: [PATCH 1/4] hmat acpi: Don't require initiator value in -numa
On Thu, 23 Jun 2022 16:58:28 +0200 Brice Goglin wrote:

> The "Memory Proximity Domain Attributes" structure of the ACPI HMAT
> has a "Processor Proximity Domain Valid" flag that is currently
> always set because QEMU -numa requires an initiator=X value
> when hmat=on. Unsetting this flag makes it possible to create more
> complex memory topologies by having multiple best initiators for a
> single memory target.
>
> [...]
>
> Corresponding changes in the HMAT MPDA structure:
>
> @@ -49,10 +49,10 @@
>  [078h 0120 2]                  Structure Type : [Memory Proximity Domain Attributes]
>  [07Ah 0122 2]                        Reserved :
>  [07Ch 0124 4]                          Length : 0028
> -[080h 0128 2]           Flags (decoded below) : 0001
> -              Processor Proximity Domain Valid : 1
> +[080h 0128 2]           Flags (decoded below) :
> +              Processor Proximity Domain Valid : 0
>  [082h 0130 2]                       Reserved1 :
> -[084h 0132 4] Attached Initiator Proximity Domain : 0001
> +[084h 0132 4] Attached Initiator Proximity Domain : 0080

where does this value come from?

> [088h 0136 4]         Memory Proximity Domain : 0002
> [08Ch 0140 4]                       Reserved2 :
> [090h 0144 8]                       Reserved3 :
>
> Final HMAT SLLB structures:
>
> [0A0h 0160 2]                  Structure Type : 0001 [System Locality Latency and Bandwidth Information]
> [0A2h 0162 2]                        Reserved :
> [0A4h 0164 4]                          Length : 0040
> [0A8h 0168 1]           Flags (decoded below) : 00
>                               Memory Hierarchy : 0
> [0A9h 0169 1]                       Data Type : 00
> [0AAh 0170 2]                       Reserved1 :
> [0ACh 0172 4]   Initiator Proximity Domains # : 0002
> [0B0h 0176 4]      Target Proximity Domains # : 0003
> [0B4h 0180 4]                       Reserved2 :
> [0B8h 0184 8]                 Entry Base Unit : 2710
> [0C0h 0192 4] Initiator Proximity Domain List :
> [0C4h 0196 4] Initiator Proximity Domain
[PATCH 1/4] hmat acpi: Don't require initiator value in -numa
The "Memory Proximity Domain Attributes" structure of the ACPI HMAT
has a "Processor Proximity Domain Valid" flag that is currently
always set because QEMU -numa requires an initiator=X value
when hmat=on. Unsetting this flag makes it possible to create more
complex memory topologies by having multiple best initiators for a
single memory target.

This patch allows -numa without initiator=X when hmat=on by keeping
the default value MAX_NODES in numa_state->nodes[i].initiator.
All places reading numa_state->nodes[i].initiator already check
whether it is different from MAX_NODES before using it.

Tested with

qemu-system-x86_64 -accel kvm \
 -machine pc,hmat=on \
 -drive if=pflash,format=raw,file=./OVMF.fd \
 -drive media=disk,format=qcow2,file=efi.qcow2 \
 -smp 4 \
 -m 3G \
 -object memory-backend-ram,size=1G,id=ram0 \
 -object memory-backend-ram,size=1G,id=ram1 \
 -object memory-backend-ram,size=1G,id=ram2 \
 -numa node,nodeid=0,memdev=ram0,cpus=0-1 \
 -numa node,nodeid=1,memdev=ram1,cpus=2-3 \
 -numa node,nodeid=2,memdev=ram2 \
 -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=10 \
 -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=10485760 \
 -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=20 \
 -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=5242880 \
 -numa hmat-lb,initiator=0,target=2,hierarchy=memory,data-type=access-latency,latency=30 \
 -numa hmat-lb,initiator=0,target=2,hierarchy=memory,data-type=access-bandwidth,bandwidth=1048576 \
 -numa hmat-lb,initiator=1,target=0,hierarchy=memory,data-type=access-latency,latency=20 \
 -numa hmat-lb,initiator=1,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=5242880 \
 -numa hmat-lb,initiator=1,target=1,hierarchy=memory,data-type=access-latency,latency=10 \
 -numa hmat-lb,initiator=1,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=10485760 \
 -numa hmat-lb,initiator=1,target=2,hierarchy=memory,data-type=access-latency,latency=30 \
 -numa hmat-lb,initiator=1,target=2,hierarchy=memory,data-type=access-bandwidth,bandwidth=1048576

which reports NUMA node2 at the same distance from both node0 and node1,
as seen in lstopo:

Machine (2966MB total) + Package P#0
  NUMANode P#2 (979MB)
  Group0
    NUMANode P#0 (980MB)
    Core P#0 + PU P#0
    Core P#1 + PU P#1
  Group0
    NUMANode P#1 (1007MB)
    Core P#2 + PU P#2
    Core P#3 + PU P#3

Before this patch, we had to add ",initiator=X" to
"-numa node,nodeid=2,memdev=ram2".
The lstopo output difference between initiator=1 and no initiator is:

@@ -1,10 +1,10 @@
 Machine (2966MB total) + Package P#0
+  NUMANode P#2 (979MB)
   Group0
     NUMANode P#0 (980MB)
     Core P#0 + PU P#0
     Core P#1 + PU P#1
   Group0
     NUMANode P#1 (1007MB)
-    NUMANode P#2 (979MB)
     Core P#2 + PU P#2
     Core P#3 + PU P#3

Corresponding changes in the HMAT MPDA structure:

@@ -49,10 +49,10 @@
 [078h 0120 2]                  Structure Type : [Memory Proximity Domain Attributes]
 [07Ah 0122 2]                        Reserved :
 [07Ch 0124 4]                          Length : 0028
-[080h 0128 2]           Flags (decoded below) : 0001
-              Processor Proximity Domain Valid : 1
+[080h 0128 2]           Flags (decoded below) :
+              Processor Proximity Domain Valid : 0
 [082h 0130 2]                       Reserved1 :
-[084h 0132 4] Attached Initiator Proximity Domain : 0001
+[084h 0132 4] Attached Initiator Proximity Domain : 0080
 [088h 0136 4]         Memory Proximity Domain : 0002
 [08Ch 0140 4]                       Reserved2 :
 [090h 0144 8]                       Reserved3 :

Final HMAT SLLB structures:

[0A0h 0160 2]                  Structure Type : 0001 [System Locality Latency and Bandwidth Information]
[0A2h 0162 2]                        Reserved :
[0A4h 0164 4]                          Length : 0040
[0A8h 0168 1]           Flags (decoded below) : 00
                              Memory Hierarchy : 0
[0A9h 0169 1]                       Data Type : 00
[0AAh 0170 2]                       Reserved1 :
[0ACh 0172 4]   Initiator Proximity Domains # : 0002
[0B0h 0176 4]      Target Proximity Domains # : 0003
[0B4h 0180 4]                       Reserved2 :
[0B8h 0184 8]                 Entry Base Unit : 2710
[0C0h 0192 4] Initiator Proximity Domain List :
[0C4h 0196 4] Initiator Proximity Domain List : 0001
[0C8h 0200 4]    Target Proximity Domain List :
[0CCh 0204 4]    Target Proximity Domain List : 0001
[0D0h 0208 4]    Target Proximity Domain List : 0002
[0D4h 0212 2]                           Entry : 0001
[0D6h 0214 2]                           Entry : 0002
[0D8h 0216 2]                           Entry : 0003
[0DAh 0218 2]                           Entry : 0002
[0DCh 0220 2]                           Entry : 0001