It would have to be for 1.8.5, as there is no way to change that configure.m4 
without re-releasing.
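
For context, the check in question has roughly the following shape. This is a hedged sketch from memory, not the verbatim contents of ompi/mca/btl/vader/configure.m4, but it shows the flavor of the 1-line change Paul proposes below (dropping sn/xpmem.h from the header probe):

```m4
# Sketch only -- the surrounding macro context in
# ompi/mca/btl/vader/configure.m4 is assumed, not quoted verbatim.
#
#   Before: AC_CHECK_HEADERS([xpmem.h sn/xpmem.h])
#   After:
AC_CHECK_HEADERS([xpmem.h])
```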

We still apparently have a thread-related performance issue, as reported by 
Intel. It appears that we didn't completely manage to fix the blasted thread 
locks: some are still on by default, causing a roughly 10-15% loss of 
performance relative to earlier releases in the 1.8 series. So a 1.8.5 is 
going to be required fairly soon anyway.

Sigh


> On Dec 22, 2014, at 9:46 AM, Howard Pritchard <hpprit...@gmail.com> wrote:
> 
> I opened issue 322 <https://github.com/open-mpi/ompi/issues/322> about 
> this and put it on the 1.8.5 milestone.
> I'll submit a PR to remove the sn/xpmem.h usage in the vader
> config file.
> 
> I think to do justice to SGI UV, someone would have to put in time
> to figure out how to use their GRU API.  I'm pretty sure that's how
> SGI MPI delivers small messages efficiently.
> 
> Howard
> 
> 
> 
> 2014-12-22 8:43 GMT-07:00 Nathan Hjelm <hje...@lanl.gov>:
> 
> Yeah, I figured out why XPMEM is failing on SGI UV but have not figured
> out a fix yet. If possible, can we remove the check for sn/xpmem.h in
> ompi/mca/btl/vader/configure.m4? I will hopefully have a better fix for
> 1.8.5.
> 
> -Nathan
> 
> On Fri, Dec 19, 2014 at 11:59:29PM -0800, Paul Hargrove wrote:
> >    Sorry to rain on the parade, but SGI UV is still broken by default.
> >    I reported this as present in 1.8.4rc5 and Nathan had claimed to be
> >    working on it.
> >    A reminder that all it takes is a 1-line change in
> >    ompi/mca/btl/vader/configure.m4 to not search for sn/xpmem.h
> >    -Paul
> >    On Fri, Dec 19, 2014 at 7:26 PM, Ralph Castain <r...@open-mpi.org> wrote:
> >
> >      The Open MPI Team, representing a consortium of research, academic, and
> >      industry partners, is pleased to announce the release of Open MPI
> >      version 1.8.4.
> >
> >      v1.8.4 is a bug fix release.  All users are encouraged to upgrade to
> >      v1.8.4 when possible.
> >
> >      Version 1.8.4 can be downloaded from the main Open MPI web site or any
> >      of its mirrors  (mirrors will be updating shortly).
> >
> >      Here is a list of changes in v1.8.4 as compared to v1.8.3:
> >
> >      - Fix MPI_SIZEOF; now available in mpif.h for modern Fortran compilers
> >        (see README for more details).  Also fixed various compiler/linker
> >        errors.
> >      - Fixed inadvertent Fortran ABI break between v1.8.1 and v1.8.2 in the
> >        mpi interface module when compiled with gfortran >= v4.9.
> >      - Fix various MPI_THREAD_MULTIPLE issues in the TCP BTL.
> >      - mpirun no longer requires the --hetero-nodes switch; it will
> >        automatically detect when running in heterogeneous scenarios.
> >      - Update LSF support, to include revamped affinity functionality.
> >      - Update embedded hwloc to v1.9.1.
> >      - Fixed max registerable memory computation in the openib BTL.
> >      - Updated error message when debuggers are unable to find various
> >        symbols/types to be more clear.  Thanks to Dave Love for raising the
> >        issue.
> >      - Added proper support for LSF and PBS/Torque libraries in static
> >        builds.
> >      - Rankfiles now support physical processor IDs.
> >      - Fixed potential hang in MPI_ABORT.
> >      - Fixed problems with the PSM MTL and "re-connect" scenarios, such as
> >        MPI_INTERCOMM_CREATE.
> >      - Fix MPI_IREDUCE_SCATTER with a single process.
> >      - Fix (rare) race condition in stdout/stderr funneling to mpirun where
> >        some trailing output could get lost when a process terminated.
> >      - Removed inadvertent change that set --enable-mpi-thread-multiple "on"
> >        by default, thus impacting performance for non-threaded apps.
> >      - Significantly reduced startup time by optimizing internal hash table
> >        implementation.
> >      - Fixed OS X linking with the Fortran mpi module when used with
> >        gfortran >= 4.9.  Thanks to Github user yafshar for raising the
> >        issue.
> >      - Fixed memory leak on Cygwin platforms.  Thanks to Marco Atzeri for
> >        reporting the issue.
> >      - Fixed seg fault in neighborhood collectives when the degree of the
> >        topology is higher than the communicator size.  Thanks to Lisandro
> >        Dalcin for reporting the issue.
> >      - Fixed segfault in neighborhood collectives under certain use-cases.
> >      - Fixed various issues regarding Solaris support.  Thanks to Siegmar
> >        Gross for patiently identifying all the issues.
> >      - Fixed PMI configure tests for certain Slurm installation patterns.
> >      - Fixed param registration issue in Java bindings.  Thanks to Takahiro
> >        Kawashima and Siegmar Gross for identifying the issue.
> >      - Several man page fixes.
> >      - Silence several warnings and close some memory leaks (more remain,
> >        but it's better than it was).
> >      - Re-enabled the use of CMA and knem in the shared memory BTL.
> >      - Updated mpirun manpage to correctly explain new map/rank/binding
> >        options.
> >      - Fixed MPI_IALLGATHER problem with intercommunicators.  Thanks to
> >        Takahiro Kawashima for the patch.
> >      - Numerous updates and performance improvements to OpenSHMEM.
> >      - Turned off message coalescing in the openib BTL until a proper fix
> >        for that capability can be provided (tentatively expected for 1.8.5).
> >      - Fix a bug in iof output that dates back to the dinosaurs which would
> >        output extra bytes if the system was very heavily loaded.
> >      - Fix a bug where specifying mca_component_show_load_errors=0 could
> >        cause ompi_info to segfault.
> >      - Updated valgrind suppression file.
> >
> >      _______________________________________________
> >      announce mailing list
> >      annou...@open-mpi.org
> >      Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/announce
> >      Searchable archives:
> >      http://www.open-mpi.org/community/lists/announce/2014/12/index.php
> >
> >    --
> >    Paul H. Hargrove                          phhargr...@lbl.gov
> >    Computer Languages & Systems Software (CLaSS) Group
> >    Computer Science Department               Tel: +1-510-495-2352
> >    Lawrence Berkeley National Laboratory     Fax: +1-510-486-6900
> 
> > _______________________________________________
> > devel mailing list
> > de...@open-mpi.org
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
> > Link to this post:
> > http://www.open-mpi.org/community/lists/devel/2014/12/16704.php
> 
> 
> _______________________________________________
> devel mailing list
> de...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
> Link to this post:
> http://www.open-mpi.org/community/lists/devel/2014/12/16710.php
> 
> _______________________________________________
> devel mailing list
> de...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
> Link to this post: 
> http://www.open-mpi.org/community/lists/devel/2014/12/16715.php
