Hi Brice,
thanks a lot for the quick response!
I have tested the patch and it works just fine :-) [1]
> I am trying to release hwloc 2.5 "soon". If that's too slow, please let me
> know, I'll see if I can do a 2.4.1 earlier.
There is no rush, 2.5 sounds great.
Merci beaucoup!
Jirka
[1]
$ ./uti
This patch should fix the issue. We had to fix the same issue for CPU#0
being offline recently, but I didn't know it could be needed for NUMA
node#0 being offline too.
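
Here is a minimal sketch of the caller-side pattern involved (not the
actual patch, and assuming the fix is applied so the topology loads at
all): looking a NUMA node up by OS index is fragile when node#0 is
offline, while the lookup by hwloc's logical index keeps working.

#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Fragile: assumes a NUMA node with OS index 0 exists. On a
     * machine where only node#2 is online, this returns NULL. */
    hwloc_obj_t by_os = hwloc_get_numanode_obj_by_os_index(topology, 0);

    /* Robust: the first NUMA node by logical index exists whenever
     * the topology contains any NUMA node at all. */
    hwloc_obj_t first = hwloc_get_obj_by_type(topology,
                                              HWLOC_OBJ_NUMANODE, 0);

    printf("lookup by OS index 0: %s\n", by_os ? "found" : "NULL");
    if (first)
        printf("first node by logical index is P#%u\n", first->os_index);

    hwloc_topology_destroy(topology);
    return 0;
}

On the LPAR described below, the first lookup should print NULL and the
second should print P#2.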
I am trying to release hwloc 2.5 "soon". If that's too slow, please let
me know, I'll see if I can do a 2.4.1 earlier.
Brice
Hello,
Maybe we have something that assumes that the first NUMA node on Linux
is #0. And something is wrong in the disallowed case anyway, since the
NUMA node physical number is reported as 0 instead of 2 there.
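
A quick way to inspect those numbers is a sketch along these lines
(standard hwloc 2.x API only; the INCLUDE_DISALLOWED flag matches what
lstopo --disallowed shows):

#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_obj_t node = NULL;

    hwloc_topology_init(&topology);
    /* Keep disallowed resources visible, like lstopo --disallowed. */
    hwloc_topology_set_flags(topology,
                             HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED);
    hwloc_topology_load(topology);

    /* Print the logical (L#) and physical/OS (P#) number of each
     * NUMA node, to compare them with what lstopo prints. */
    while ((node = hwloc_get_next_obj_by_type(topology,
                                              HWLOC_OBJ_NUMANODE,
                                              node)) != NULL)
        printf("NUMANode L#%u P#%u\n",
               node->logical_index, node->os_index);

    hwloc_topology_destroy(topology);
    return 0;
}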
Can you run "hwloc-gather-topology lpar" and send the resulting
lpar.tar.bz2? (send it only to me)
Hi Brice,
how are you doing? I hope you are fine. We are all well and safe.
I have been running hwloc on an IBM Power LPAR VM with only 1 CPU core
and 8 PUs [1]. There is only one NUMA node. The numbering is, however,
quite strange: the NUMA node number is "2". See [2].
hwloc reports "Topology does not contain any NUMA node, aborting!"