Hi, I tested this on our cluster (UTK). I will give it a thumbs up, but I have some comments.
What I understand with 4.0:

- openib btl is disabled by default (can be turned on by MCA).
- pml ucx will be the default for InfiniBand hardware.
- btl uct is for one-sided, but can be used for two-sided as well (needs explicit MCA).

My question is: what if the user does not have UCX installed but does have InfiniBand hardware? That user will have no fast transport for their hardware. In my testing, this release falls back to btl/tcp if I don't set the MCA parameters to use uct or to force openib (example command lines at the bottom of this mail). Will this be a problem?

Arm

> On Sep 16, 2018, at 3:31 PM, Geoffrey Paulsen <gpaul...@us.ibm.com> wrote:
>
> The first release candidate for the Open MPI v4.0.0 release is posted at
> https://www.open-mpi.org/software/ompi/v4.0/
>
> Major changes include:
>
> 4.0.0 -- September, 2018
> ------------------------
>
> - OSHMEM updated to the OpenSHMEM 1.4 API.
> - Do not build the Open SHMEM layer when there are no SPMLs available.
>   Currently, this means the Open SHMEM layer will only build if
>   an MXM or UCX library is found.
> - A UCX BTL was added for enhanced MPI RMA support using UCX.
> - With this release, the OpenIB BTL now only supports iWARP and RoCE by default.
> - Updated internal HWLOC to 2.0.1.
> - Updated internal PMIx to 3.0.1.
> - Change the priority for selecting external versus internal HWLOC
>   and PMIx packages to build. Starting with this release, configure
>   by default selects available external HWLOC and PMIx packages over
>   the internal ones.
> - Updated internal ROMIO to 3.2.1.
> - Removed support for the MXM MTL.
> - Improved CUDA support when using UCX.
> - Improved support for two-phase MPI I/O operations when using OMPIO.
> - Added support for Software-based Performance Counters, see
>   https://github.com/davideberius/ompi/wiki/How-to-Use-Software-Based-Performance-Counters-(SPCs)-in-Open-MPI
> - Various improvements to MPI RMA performance when using RDMA-capable
>   interconnects.
> - Update memkind component to use the memkind 1.6 public API.
> - Fix problems with use of newer map-by mpirun options. Thanks to
>   Tony Reina for reporting.
> - Fix rank-by algorithms to properly rank by object and span.
> - Allow for running as root if two environment variables are set.
>   Requested by Axel Huebl.
> - Fix a problem with building the Java bindings when using Java 10.
>   Thanks to Bryce Glover for reporting.
>
> Our goal is to release 4.0.0 by mid Oct, so any testing is appreciated.
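For reference, the kind of explicit MCA settings I am talking about look roughly like this. This is only a sketch from memory: please double check the parameter names (e.g. btl_openib_allow_ib, btl_uct_memory_domains) and the device name against ompi_info on your own system; ./a.out is just a placeholder application.

  # Force the UCX PML (only an option if UCX is installed):
  mpirun --mca pml ucx -np 2 ./a.out

  # Force openib over plain InfiniBand, which 4.0 disables by default:
  mpirun --mca pml ob1 --mca btl self,vader,openib \
         --mca btl_openib_allow_ib true -np 2 ./a.out

  # Use btl/uct for two-sided traffic (needs an explicit memory domain):
  mpirun --mca pml ob1 --mca btl self,vader,uct \
         --mca btl_uct_memory_domains ib/mlx5_0 -np 2 ./a.out

With none of these set, and no UCX installed, my runs on our IB nodes end up on btl/tcp, which is the fallback behavior I am asking about.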