Dear Brice,
thank you for your answer. Neither upgrading the BIOS nor using the latest
hwloc helped. Finally we contacted AMD and they fixed a bug in the kernel which
caused problems with 12-core AMD processors. They should upstream the
changes to kernel.org soon, so that all the distros
Hello
Good to know. Did you see/test the kernel patch yet? If possible, could
you send a link to the kernel commit when it appears upstream?
Thanks
Brice
On 27/10/2015 09:21, Ondřej Vlček wrote:
> Dear Brice,
> thank you for your answer. Neither upgrading the BIOS nor using the latest
> hwloc
Here's the fixed XML. For the record, for each NUMA node, I extended the
cpusets of the L3 to match the container NUMA node, and moved all L2
objects as children of that L3.
Now you may load that XML instead of the native discovery by setting
HWLOC_XMLFILE=leo2.xml in your environment.
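For example, from a shell (a sketch; lstopo is hwloc's topology viewer, and this assumes leo2.xml sits in the current directory):

```shell
# Point hwloc (and anything linked against it) at the fixed topology
# instead of native discovery.
export HWLOC_XMLFILE=leo2.xml
# The tree printed here should now show the L2 caches nested under
# the extended L3 objects.
lstopo
```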
Brice
Thank you very much for the file.
When I try with PETSc, compiled with Open MPI and icc, I get
--
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
I guess the problem is that your OMPI uses an old hwloc internally. That
one may be too old to understand recent XML exports.
Try replacing "Package" with "Socket" everywhere in the XML file.
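A quick way to do that replacement from the command line (a sketch using GNU sed's in-place mode; keep a backup of the file, since this rewrites it):

```shell
# Rewrite the newer "Package" object type to the older "Socket"
# spelling that pre-2.0 hwloc XML parsers expect.
sed -i 's/\bPackage\b/Socket/g' leo2.xml
```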
Brice
On 27/10/2015 15:31, Fabian Wein wrote:
> Thank you very much for the file.
>
> When I try
On 10/27/2015 03:42 PM, Brice Goglin wrote:
> I guess the problem is that your OMPI uses an old hwloc internally. That
> one may be too old to understand recent XML exports.
> Try replacing "Package" with "Socket" everywhere in the XML file.
Thanks! That was it.
I now get almost perfectly
I guess the next step would be to look at how these tasks are placed on
the machine. There are 8 NUMA nodes on the machine. Maybe 9 is where it
starts placing a second task per NUMA node?
For OMPI, --report-bindings may help. I am not sure about MPICH.
Brice
Le 27/10/2015 15:52, Fabian Wein a