> On 14 Jan 2019, at 11:27, Kenneth Hoste wrote:
>
> Dear Damian,
>
> On 14/01/2019 10:59, Alvarez, Damian wrote:
>> Hi Kenneth,
>> Wouldn't compiling OpenMPI 4.0 with --enable-mpi1-compatibility be an
>> option? (See https://www.open-mpi.org/faq/?category=mpi-removed)
>
> Maybe, but I would like to avoid i) using a new major release, ii) using
> a non-default
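For anyone who does want to try that route: --enable-mpi1-compatibility is a plain configure-time option of Open MPI 4.0 (per the FAQ page linked above). A rough sketch of such a build, where the install prefix and make parallelism are placeholders:

```shell
# Build OpenMPI 4.0 with the MPI-1 symbols that were removed in 4.0
# (MPI_Address, MPI_Type_struct, etc. -- see the linked FAQ) re-enabled.
# The prefix below is a placeholder; adjust for your site.
./configure --prefix=/opt/openmpi-4.0.0 --enable-mpi1-compatibility
make -j 8
make install
```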
Hi Kenneth,
Wouldn't compiling OpenMPI 4.0 with --enable-mpi1-compatibility be an option?
(See https://www.open-mpi.org/faq/?category=mpi-removed)
Best,
Damian
On 12.01.19, 19:15, "easybuild-requ...@lists.ugent.be on behalf of Kenneth Hoste" wrote:
Dear EasyBuilders,
Based on the problems that several people are seeing with Intel MPI 2019
update 1 (see also notes of the last EasyBuild conf call [1]), we
concluded that it's better to stick with Intel MPI 2018 update 4 for the
intel/2019a toolchain.
I will change the pull request
Same experience on our systems: we encountered multiple issues, specifically
with libfabric (which is the default with Intel MPI 2019).
-Pramod
On Mon, Jan 7, 2019 at 4:45 PM Alvarez, Damian wrote:
> A word of caution regarding Intel MPI 2019: They changed a lot of things
> under the hood, and we
Hi Damian,
I have no idea if it helps, but a new version of libfabric has just been
released:
https://github.com/ofiwg/libfabric/releases/tag/v1.7.0
whereas, according to
https://software.intel.com/en-us/articles/intel-mpi-library-release-notes-linux
Intel MPI 2019.1 ships with a customized 1.7.0.
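If someone wants to test the stock 1.7.0 release against Intel's bundled copy, Intel MPI 2019 can be pointed at an external libfabric. A sketch, assuming the I_MPI_OFI_LIBRARY_INTERNAL and FI_PROVIDER knobs from Intel's documentation; the install path and application name are placeholders:

```shell
# Use an external libfabric build instead of the one bundled with
# Intel MPI 2019 (variable names per Intel's docs; path is a placeholder).
export I_MPI_OFI_LIBRARY_INTERNAL=0
export LD_LIBRARY_PATH=/opt/libfabric-1.7.0/lib:$LD_LIBRARY_PATH
export FI_PROVIDER=verbs     # e.g. the verbs provider on InfiniBand
export I_MPI_DEBUG=4         # reports the selected provider at startup
mpirun -np 4 ./my_app        # ./my_app is a placeholder
```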
A word of caution regarding Intel MPI 2019: they changed a lot of things under
the hood, and we have seen lots of issues on our systems with relatively large
jobs (1.5K+ MPI processes). Basically, most collective algorithms don't make it
through. We have seen that on 2 different InfiniBand
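For the failing collectives, one experiment worth trying is Intel MPI's per-collective algorithm selection via the I_MPI_ADJUST_* family. This is a hedged sketch, not a verified fix: the algorithm IDs below are illustrative, and the application name is a placeholder.

```shell
# Experiment: pin specific collective algorithms instead of letting
# Intel MPI 2019 auto-select (I_MPI_ADJUST_* is documented by Intel;
# the IDs chosen here are illustrative, not a known-good setting).
export I_MPI_ADJUST_ALLREDUCE=1
export I_MPI_ADJUST_BCAST=1
mpirun -np 1536 ./my_app     # ./my_app is a placeholder
```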