On 01/02/2017 15:32, Pablo Escobar Lopez wrote:
BTW, I have iomkl/2017.01 with OpenMPI/2.0.1-iccifort-2017.1.132-GCC-5.4.0-2.26, and I had to apply this patch to be able to compile it with Slurm support:
https://www.mail-archive.com/[email protected]/msg30048.html

I suppose this is fixed in 2.0.2

Seems to be, yes; see https://github.com/open-mpi/ompi/pull/2131
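For reference, building Open MPI with Slurm support from source would look roughly like the sketch below. This is not the recipe from the patched easyconfig; the install prefix and PMI path are placeholders that need to be adjusted to the local Slurm installation.

```shell
# Sketch only: configure an unpacked Open MPI 2.0.x source tree with
# Slurm support. PREFIX and the PMI location are site-specific assumptions.
PREFIX=$HOME/opt/openmpi-2.0.2
./configure --prefix="$PREFIX" \
            --with-slurm \
            --with-pmi=/usr   # adjust to where Slurm's PMI headers/libs live
make -j"$(nproc)"
make install
```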


2017-02-01 15:01 GMT+01:00 Kenneth Hoste <[email protected]>:

    Hi Joachim,

    On 01/02/2017 14:59, Joachim Hein wrote:
    Hi Kenneth et al.,

    Can’t make it for today’s telcon, but I got the below this morning. You might want to consider it for foss/2017a, which is on the
    agenda.  We’ll live with foss/2017a until August or so.

    Thanks for the heads-up, I was already aware of it.

    The PR for foss/2017a has already been updated accordingly, see
    https://github.com/hpcugent/easybuild-easyconfigs/pull/3968 .

    I also bumped FFTW to 3.3.6 which was released two weeks ago.


    regards,

    Kenneth


    Joachim

    Begin forwarded message:

    *From: *"Jeff Squyres (jsquyres)" <[email protected]>
    *Subject: *[Open MPI Announce] Open MPI v2.0.2 released
    *Date: *1 February 2017 at 01:40:35 GMT+1
    *To: *Open MPI Announcements <[email protected]>
    *Reply-To: *[email protected]

    The Open MPI Team, representing a consortium of research,
    academic, and industry partners, is pleased to announce the
    release of Open MPI version 2.0.2.

    v2.0.2 is a bug fix release that includes a variety of bug fixes
    and some performance fixes.  All users are encouraged to upgrade
    to v2.0.2 when possible.

    Version 2.0.2 can be downloaded from the main Open MPI web site:

    https://www.open-mpi.org/software/ompi/v2.0/

    NEWS

    2.0.2 -- 31 January 2017
    -------------------------

    Bug fixes/minor improvements:

    - Fix a problem with MPI_FILE_WRITE_SHARED when using MPI_MODE_APPEND
      and Open MPI's native MPI-IO implementation. Thanks to Nicolas Joly
      for reporting.
    - Fix a typo in the MPI_WIN_GET_NAME man page.  Thanks to Nicolas Joly
      for reporting.
    - Fix a race condition with ORTE's session directory setup. Thanks to
      @tbj900 for reporting this issue.
    - Fix a deadlock issue arising from Open MPI's approach to catching
      calls to munmap. Thanks to Paul Hargrove for reporting and helping
      to analyze this problem.
    - Fix a problem with PPC atomics which caused "make check" to fail
      unless the builtin-atomics configure option was enabled. Thanks to
      Orion Poplawski for reporting.
    - Fix a problem with use of the x86_64 cpuid instruction which led to
      segmentation faults when Open MPI was configured with -O3
      optimization. Thanks to Mark Santcroos for reporting this problem.
    - Fix a problem when using the builtin-atomics configure options on
      PPC platforms when building 32-bit applications. Thanks to Paul
      Hargrove for reporting.
    - Fix a problem with building Open MPI against an external hwloc
      installation.  Thanks to Orion Poplawski for reporting this issue.
    - Remove use of DATE in the message queue version string reported to
      debuggers to ensure bit-wise reproducibility of binaries.  Thanks to
      Alastair McKinstry for help in fixing this problem.
    - Fix a problem with early exit of an MPI process without calling
      MPI_FINALIZE or MPI_ABORT that could lead to job hangs.  Thanks to
      Christof Koehler for reporting.
    - Fix a problem with forwarding of the SIGTERM signal from mpirun to
      MPI processes in a job.  Thanks to Noel Rycroft for reporting this
      problem.
    - Plug some memory leaks in MPI_WIN_FREE discovered using Valgrind.
      Thanks to Joseph Schuchart for reporting.
    - Fix a problem with MPI_NEIGHBOR_ALLTOALL when using a communicator
      with an empty topology graph.  Thanks to Daniel Ibanez for reporting.
    - Fix a typo in a PMIx component help file.  Thanks to @njoly for
      reporting this.
    - Fix a problem with Valgrind false positives when using Open MPI's
      internal memchecker.  Thanks to Yvan Fournier for reporting.
    - Fix a problem with MPI_FILE_DELETE returning MPI_SUCCESS when
      deleting a non-existent file. Thanks to Wei-keng Liao for reporting.
    - Fix a problem with MPI_IMPROBE that could lead to hangs in
      subsequent MPI point-to-point or collective calls. Thanks to Chris
      Pattison for reporting.
    - Fix a problem when configuring Open MPI for PowerPC with
      --enable-mpi-cxx enabled.  Thanks to Alastair McKinstry for reporting.
    - Fix a problem using MPI_IALLTOALL with the MPI_IN_PLACE argument.
      Thanks to Chris Ward for reporting.
    - Fix a problem using MPI_RACCUMULATE with the Portals4 transport.
      Thanks to @PDeveze for reporting.
    - Fix an issue with static linking and duplicate symbols arising from
      PMIx Slurm components.  Thanks to Limin Gu for reporting.
    - Fix a problem when using MPI dynamic memory windows.  Thanks to
      Christoph Niethammer for reporting.
    - Fix a problem with Open MPI's pkgconfig files.  Thanks to Alastair
      McKinstry for reporting.
    - Fix a problem with MPI_IREDUCE when the same buffer is supplied for
      the send and receive buffer arguments.  Thanks to Valentin Petrov
      for reporting.
    - Fix a problem with atomic operations on PowerPC.  Thanks to Paul
      Hargrove for reporting.

    Known issues (to be addressed in v2.0.3):

    - See the list of fixes slated for v2.0.3 here:
      https://github.com/open-mpi/ompi/milestone/23

    --
    Jeff Squyres
    [email protected]

    _______________________________________________
    announce mailing list
    [email protected]
    https://rfd.newmexicoconsortium.org/mailman/listinfo/announce





--
Pablo Escobar López
HPC systems engineer
sciCORE, University of Basel
SIB Swiss Institute of Bioinformatics
http://scicore.unibas.ch
