A word of caution regarding Intel MPI 2019: they changed a lot of things under
the hood, and we have seen many issues on our systems with relatively large
jobs (1.5K+ MPI processes). Basically, most collective algorithms don't make it
through. We have seen that on 2 different InfiniBand
Dear EasyBuilders,
As is tradition at the start of the year, I have started looking into
updating the 'foss' and 'intel' common toolchains, this time for the
2019a update.
The plan is to include these in the upcoming EasyBuild v3.8.1 release,
which I hope to release in a couple of weeks.
Hi Damian,
I have no idea if it helps but a new version of libfabric has just been
released:
https://github.com/ofiwg/libfabric/releases/tag/v1.7.0
while, according to
https://software.intel.com/en-us/articles/intel-mpi-library-release-notes-linux
Intel MPI 2019.1 ships with a customized libfabric 1.7.0.
Same experience on our systems: we encountered multiple issues,
specifically with libfabric (which is the default with Intel MPI 2019).
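For anyone debugging this, a minimal sketch of how to see (and override) which
libfabric provider Intel MPI 2019 picks, using standard Intel MPI and libfabric
environment variables; the choice of 'verbs' as the forced provider is just an
example for an InfiniBand system, not a recommendation:

```shell
# List the providers libfabric can see on a node (fi_info ships with
# libfabric; uncomment on a system where it is installed):
#   fi_info -l

# Make Intel MPI print which OFI provider it selected at startup
# (I_MPI_DEBUG=5 includes provider/fabric details in the debug output).
export I_MPI_DEBUG=5

# Pin a specific libfabric provider instead of the default selection,
# e.g. 'verbs' for InfiniBand (FI_PROVIDER is a standard libfabric variable).
export FI_PROVIDER=verbs
```

Running a small job with these set should show in the debug output whether the
problems follow the provider or the MPI library itself.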
-Pramod
On Mon, Jan 7, 2019 at 4:45 PM Alvarez, Damian
wrote:
> A word of caution regarding Intel MPI 2019: They changed a lot of things
> under the hood, and we