Re: [hwloc-devel] understanding PCI device to NUMA node connection

2011-11-28 Thread Guy Streeter
On 11/28/2011 03:45 PM, Brice Goglin wrote:
...
> Current Intel platforms have 2 QPI links going to I/O hubs. Most servers
> with many sockets (4 or more) thus have each I/O hub connected to only 2
> processors directly, so their distance is "equal" as you say.
>
> However, some BIOS report
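
As a minimal sketch, assuming the hwloc 1.x distances API (which exposes the
BIOS-provided ACPI SLIT matrix when the BIOS reports one), the NUMA node
distances in question can be printed like this:

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;
        const struct hwloc_distances_s *distances;
        unsigned i, j;

        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        /* Distance matrix between NUMA nodes, as reported by the BIOS
         * (ACPI SLIT) when available.  The matrix is owned by the
         * topology and must not be freed by the caller. */
        distances = hwloc_get_whole_distance_matrix_by_type(topology,
                                                            HWLOC_OBJ_NODE);
        if (distances) {
            for (i = 0; i < distances->nbobjs; i++) {
                for (j = 0; j < distances->nbobjs; j++)
                    printf(" %.1f",
                           distances->latency[i * distances->nbobjs + j]);
                printf("\n");
            }
        }

        hwloc_topology_destroy(topology);
        return 0;
    }

Equal off-diagonal entries for a pair of nodes would match the "equal
distance" case described above; a BIOS that reports something else shows up
directly in this matrix.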

Re: [hwloc-devel] understanding PCI device to NUMA node connection

2011-11-28 Thread Brice Goglin
On 28/11/2011 22:34, Guy Streeter wrote:
> This question may be more about understanding NUMA (which I barely do) than
> about hwloc, but perhaps you can help anyway.
>
> I have a customer with some HP ProLiant DL580 G7 servers. HP supplied them
> with a block diagram of their system, and it

[hwloc-devel] understanding PCI device to NUMA node connection

2011-11-28 Thread Guy Streeter
This question may be more about understanding NUMA (which I barely do) than
about hwloc, but perhaps you can help anyway.

I have a customer with some HP ProLiant DL580 G7 servers. HP supplied them
with a block diagram of their system, and it shows two of the NUMA nodes
connected to the PCI
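
As a minimal sketch of that lookup, assuming hwloc 1.3 or later built with
PCI support (the bus ID 0000:05:00.0 below is purely illustrative): locate
the PCI object, then walk up to its first non-I/O ancestor, whose nodeset
names the NUMA node(s) the device is attached to.

    #include <stdio.h>
    #include <stdlib.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;
        hwloc_obj_t obj;
        char *s;

        hwloc_topology_init(&topology);
        /* Ask hwloc to include PCI objects in the topology (1.x flag). */
        hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
        hwloc_topology_load(topology);

        /* Illustrative bus ID; replace with the device in question. */
        obj = hwloc_get_pcidev_by_busid(topology, 0, 0x05, 0x00, 0);
        if (!obj) {
            fprintf(stderr, "PCI device not found\n");
            hwloc_topology_destroy(topology);
            return 1;
        }

        /* Walk up until we leave the I/O subtree; the first ancestor
         * with a cpuset also carries the nodeset of the local NUMA
         * node(s). */
        while (obj && !obj->cpuset)
            obj = obj->parent;
        if (obj && obj->nodeset) {
            hwloc_bitmap_asprintf(&s, obj->nodeset);
            printf("local NUMA node(s): %s\n", s);
            free(s);
        }

        hwloc_topology_destroy(topology);
        return 0;
    }

The same answer can be read off an lstopo diagram, where PCI devices are
drawn under the NUMA node they are attached to.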