Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?

2020-02-03 Thread George Bosilca via users
If I'm not mistaken, hcoll is playing with the opal_progress in a way that conflicts with the blessed usage of progress in OMPI and prevents other components from advancing and completing requests in a timely manner. The impact is minimal for sequential applications using only blocking calls, but is
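A minimal sketch of the usual way to test that theory (assuming Open MPI's standard MCA parameter syntax; the application name and process count below are placeholders) is to exclude the hcoll collective component for one run and see whether the multi-threaded behaviour changes:

  # run once without the hcoll collective component (application name is hypothetical)
  $ mpirun --mca coll ^hcoll -np 4 ./my_mpi_app

The same exclusion can usually be done by exporting OMPI_MCA_coll='^hcoll' in the environment before launching.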

Re: [hwloc-users] PCI to NUMA node mapping.

2020-02-03 Thread Brice Goglin
Hello Liam, dmidecode is usually reserved to root only because it uses SMBIOS or whatever hardware/ACPI/... tables. Those tables are read by the Linux kernel and exported to non-root users in sysfs: $ cat /sys/bus/pci/devices/0000:ae:0c.6/numa_node 1 However this file isn't that good because
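For completeness, the same attribute can be dumped for every PCI device in one go (a minimal sketch; a value of -1 means the kernel did not associate that device with any NUMA node):

  # print the NUMA node reported by the kernel for each PCI device
  $ for d in /sys/bus/pci/devices/*; do printf '%s -> %s\n' "$d" "$(cat "$d"/numa_node)"; done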

Re: [OMPI users] OpenFabrics

2020-02-03 Thread Jeff Squyres (jsquyres) via users
> On Feb 3, 2020, at 12:35 PM, Bennet Fauber wrote: > > This is what CentOS installed. > > $ yum list installed hwloc\* > Loaded plugins: langpacks > Installed Packages > hwloc.x86_64 1.11.8-4.el7 > @os > hwloc-devel.x86_64

Re: [OMPI users] OpenFabrics

2020-02-03 Thread Bennet Fauber via users
This is what CentOS installed. $ yum list installed hwloc\* Loaded plugins: langpacks Installed Packages hwloc.x86_64 1.11.8-4.el7 @os hwloc-devel.x86_64 1.11.8-4.el7 @os hwloc-libs.x86_64

Re: [OMPI users] OpenFabrics

2020-02-03 Thread Jeff Squyres (jsquyres) via users
On Feb 3, 2020, at 10:03 AM, Bennet Fauber wrote: > > Ah, ha! > > Yes, that seems to be it. Thanks. Ok, good. I understand that UCX is the "preferred" mechanism for IB these days. > If I might, on a configure related note ask, whether, if we have > these installed with the CentOS 7.6 we
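As a sketch of what that looks like at build time (the install paths below are placeholders, not values from this thread), pointing Open MPI's configure at an existing UCX installation is done with --with-ucx:

  # build Open MPI against an existing UCX install (paths are hypothetical)
  $ ./configure --prefix=/opt/openmpi --with-ucx=/opt/ucx
  $ make -j8 && make install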

[hwloc-users] PCI to NUMA node mapping.

2020-02-03 Thread Murphy, Liam
Newbie question. I know that dmidecode uses the numa_node files under /sys/devices/pcie..., but hwloc does not seem to use the same mechanism to determine which PCI devices are on which NUMA node. From which file is it deriving the information? Regards, Liam
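One quick cross-check is to ask hwloc itself to print the topology with all I/O objects included, so each PCI device shows up under the NUMA node it is attached to (a sketch assuming an hwloc 1.x lstopo, which accepts --whole-io):

  # text-mode topology dump including all PCI devices (hwloc 1.x option)
  $ lstopo-no-graphics --whole-io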

[OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?

2020-02-03 Thread Angel de Vicente via users
Hi, in one of our codes, we want to create a log of events that happen in the MPI processes, where the number of these events and their timing are unpredictable. So I implemented a simple test code, where process 0 creates a thread that is just busy-waiting for messages from any process, and
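Before digging into the code itself, it can be worth confirming that the Open MPI build in use was actually compiled with MPI_THREAD_MULTIPLE support (a sketch; the exact wording of ompi_info's output differs between releases):

  # look for the "Thread support" line, which states whether MPI_THREAD_MULTIPLE is available
  $ ompi_info | grep -i thread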

Re: [OMPI users] OpenFabrics

2020-02-03 Thread Bennet Fauber via users
Ah, ha! Yes, that seems to be it. Thanks. If I might ask, on a configure-related note, whether, if we have these installed with the CentOS 7.6 we are running $ yum list installed libevent\* Loaded plugins: langpacks Installed Packages libevent.x86_64 2.0.21-4.el7
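If the goal is to have Open MPI use those distribution-provided libraries rather than its bundled copies, recent Open MPI configure scripts accept an 'external' keyword for them (a sketch; whether these options apply depends on the Open MPI release being built):

  # prefer the system-installed libevent and hwloc over the bundled ones
  $ ./configure --with-libevent=external --with-hwloc=external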