Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?

2020-02-04 Thread Joshua Ladd via users
We cannot reproduce this. On four nodes, 20 PPN (80 ranks), it takes exactly the same 19 secs with and without hcoll. What version of HCOLL are you using? Command line? Josh On Tue, Feb 4, 2020 at 8:44 AM George Bosilca via users <users@lists.open-mpi.org> wrote: > Hcoll will be present in many
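
For anyone trying to reproduce the comparison: a common way to run the same job with and without hcoll, and to report the installed component version, is via Open MPI's MCA machinery (a sketch, assuming a stock Open MPI build where hcoll appears as a coll component, with ./app standing in for the actual benchmark):

    # report whether the hcoll coll component is present, and its version
    ompi_info | grep hcoll

    # run with hcoll excluded from collective component selection
    mpirun -np 80 --mca coll ^hcoll ./app

    # run with the default selection (hcoll eligible)
    mpirun -np 80 ./app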

Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?

2020-02-04 Thread George Bosilca via users
Hcoll will be present in many cases; you don't really want to skip them all. I foresee two problems with the approach you propose: - collective components are selected per communicator, so even if they will not be used they are still loaded. - from outside the MPI library you have little access to
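
For context, the MPI_THREAD_MULTIPLE support the thread title refers to is negotiated through the standard MPI_Init_thread handshake. A minimal sketch; the check on provided is what tells you whether the library, with whatever components it loaded, actually granted full thread support:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for full multi-threaded support; the library may grant less. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got %d)\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... application code making concurrent MPI calls ... */

        MPI_Finalize();
        return 0;
    }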

Re: [OMPI users] Trouble with Mellanox's hcoll component and MPI_THREAD_MULTIPLE support?

2020-02-04 Thread Angel de Vicente via users
Hi, George Bosilca writes: > If I'm not mistaken, hcoll is playing with opal_progress in a way that conflicts with the blessed usage of progress in OMPI and prevents other components from advancing and timely completing requests. The impact is minimal for sequential applications using
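
The failure mode George describes can be sketched with a small, purely illustrative two-thread program (hypothetical code, not hcoll internals): if one component monopolizes the progress engine, the MPI_Wait below may be starved even though the matching send has been posted. Compile with mpicc -pthread:

    /* Hypothetical illustration of the symptom described above: thread A
     * blocks in MPI_Wait on a nonblocking self-receive while the main
     * thread posts the matching send. Completion of the Wait relies on
     * the progress engine being driven fairly. Assumes the library
     * granted MPI_THREAD_MULTIPLE at init. */
    #include <mpi.h>
    #include <pthread.h>

    static void *recv_thread(void *arg)
    {
        int buf;
        MPI_Request req;
        /* Self-communication on MPI_COMM_SELF, for illustration only. */
        MPI_Irecv(&buf, 1, MPI_INT, 0, 42, MPI_COMM_SELF, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* relies on timely progress */
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, payload = 7;
        pthread_t t;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        pthread_create(&t, NULL, recv_thread, NULL);
        MPI_Send(&payload, 1, MPI_INT, 0, 42, MPI_COMM_SELF);
        pthread_join(&t, NULL);

        MPI_Finalize();
        return 0;
    }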