Re: [OMPI devel] Memory performance with Bcast

2019-03-22 Thread Valentin Petrov
Hi, one more comment regarding IB Mcast and its usage. The key advantage of IB Mcast (enabled with hpcx+hcoll when the user calls the MPI_Bcast collective) is its nearly constant scaling, so it gives the most benefit when many nodes participate in a collective at the same time. By default ...
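For context, the hardware multicast path is transparent to application code: the same MPI_Bcast call is made whether or not hcoll services it via IB Mcast. A minimal C sketch (buffer size and root rank are illustrative choices, not values from the thread):

    /* Minimal MPI_Bcast example. Under HPC-X with hcoll enabled, this
     * same call can be serviced by IB hardware multicast; the
     * application code does not change. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;                 /* illustrative size */
        double *buf = malloc(n * sizeof(double));
        if (rank == 0)                         /* root fills the buffer */
            for (int i = 0; i < n; i++) buf[i] = (double)i;

        /* Every rank in the communicator must make this call. */
        MPI_Bcast(buf, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        free(buf);
        MPI_Finalize();
        return 0;
    }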

Re: [OMPI devel] Memory performance with Bcast

2019-03-22 Thread marcin.krotkiewski
On 3/21/19 5:31 PM, George Bosilca wrote: I am not sure I understand your question. A bcast is a collective operation that must be posted by all participants. Independently of the level at which the bcast is serviced, if some of the participants have not posted their participation to the collective, o...
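To illustrate the point being quoted, here is a small C sketch (mine, not from the thread): rank 1 posts its MPI_Bcast late. Every rank must eventually make the matching call; whether the other ranks block until then or complete early depends on the algorithm and message size, which is exactly the implementation-level question under discussion.

    /* MPI_Bcast is collective: all ranks of the communicator must post
     * the matching call. The sleep is illustrative. */
    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = (rank == 0) ? 42 : 0;

        if (rank == 1)
            sleep(5);  /* late participant: the collective as a whole
                          cannot finish until this rank joins it */

        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }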

Re: [OMPI devel] Memory performance with Bcast

2019-03-22 Thread marcin.krotkiewski
Josh and Valentin, thanks a lot for your answers! Your understanding of my case is essentially correct, but let me briefly refine it. In the case at hand I am running a 3D solver, which computes on grids with at least 26 neighbors. But that also depends on the grid refinement etc., so there could ...
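As a sketch of that communication pattern (the periodic-grid assumption and all names are mine, not from the thread): in a 3D domain decomposition each subdomain has up to 26 neighbors (6 faces, 12 edges, 8 corners), and a Cartesian communicator makes it easy to enumerate them.

    /* Enumerate the 26 neighbor ranks of each subdomain in a 3D
     * Cartesian decomposition. Periodic boundaries are assumed so
     * every rank really has 26 neighbors. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nprocs;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int dims[3] = {0, 0, 0};
        int periods[3] = {1, 1, 1};    /* periodic in all dimensions */
        MPI_Dims_create(nprocs, 3, dims);

        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

        int me, coords[3];
        MPI_Comm_rank(cart, &me);
        MPI_Cart_coords(cart, me, 3, coords);

        int neighbors[26], n = 0;
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                for (int dz = -1; dz <= 1; dz++) {
                    if (dx == 0 && dy == 0 && dz == 0) continue;
                    int c[3] = {coords[0]+dx, coords[1]+dy, coords[2]+dz};
                    MPI_Cart_rank(cart, c, &neighbors[n++]);
                }
        /* neighbors[0..25] now hold the ranks for a halo exchange. */

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }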

[OMPI devel] Open MPI 4.0.1rc3 available for testing

2019-03-22 Thread Howard Pritchard
A third release candidate for the Open MPI v4.0.1 release is posted at https://www.open-mpi.org/software/ompi/v4.0/

Fixes since 4.0.1rc2 include:

- Add acquire semantics to an Open MPI internal lock acquire function.

Our goal is to release 4.0.1 by the end of March, so any testing is appreciated.
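For readers wondering what that changelog entry means conceptually, below is an illustrative C11-atomics sketch of acquire/release ordering on a spinlock. This is not Open MPI's internal code, just the general idiom the fix refers to: acquire ordering prevents loads and stores inside the critical section from being reordered before the lock is taken.

    #include <stdatomic.h>

    typedef struct { atomic_flag flag; } spinlock_t;
    #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

    static inline void spinlock_acquire(spinlock_t *l)
    {
        /* memory_order_acquire: the critical section observes all
         * writes made before the previous holder's release. */
        while (atomic_flag_test_and_set_explicit(&l->flag,
                                                 memory_order_acquire))
            ;  /* spin */
    }

    static inline void spinlock_release(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->flag, memory_order_release);
    }

Without the acquire ordering on the test-and-set, a compiler or CPU could legally hoist reads out of the critical section, which is the class of bug such a fix guards against.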