Is this going into v2.x?
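(One way to check later whether this commit has actually landed on a release branch, assuming a local clone of the ompi repo with the release branches fetched, is something like

    git branch -r --contains e2124c61fee7bd5a156c90d559ba15f6ded34d53

which lists every remote branch that already contains the commit.)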

----------

Sent from my smart phone, so no good typing.

Howard
On Aug 25, 2015 7:54 AM, <git...@crest.iu.edu> wrote:

> This is an automated email from the git hooks/post-receive script. It was
> generated because a ref change was pushed to the repository containing
> the project "open-mpi/ompi".
>
> The branch, master has been updated
>        via  e2124c61fee7bd5a156c90d559ba15f6ded34d53 (commit)
>       from  6f2e8d20737907b474a401d041b5c0b1059e7d3f (commit)
>
> Those revisions listed above that are new to this repository have
> not appeared on any other notification email; so we list those
> revisions in full, below.
>
> - Log -----------------------------------------------------------------
>
> https://github.com/open-mpi/ompi/commit/e2124c61fee7bd5a156c90d559ba15f6ded34d53
>
> commit e2124c61fee7bd5a156c90d559ba15f6ded34d53
> Author: Jeff Squyres <jsquy...@cisco.com>
> Date:   Tue Aug 25 09:53:25 2015 -0400
>
>     README: minor re-flowing on extra-long lines
>
>     No other content changes; just re-flowing of long lines.
>
> diff --git a/README b/README
> index 70f251d..6883d1f 100644
> --- a/README
> +++ b/README
> @@ -436,8 +436,8 @@ General Run-Time Support Notes
>  MPI Functionality and Features
>  ------------------------------
>
> -- Rank reordering support is available using the TreeMatch library. It is activated
> -  for the graph and dist_graph topologies.
> +- Rank reordering support is available using the TreeMatch library. It
> +  is activated for the graph and dist_graph topologies.
>
>  - All MPI-3 functionality is supported.
>
> @@ -532,37 +532,39 @@ MPI Collectives
>    MPI process onto Mellanox QDR InfiniBand switch CPUs and HCAs.
>
>  - The "ML" coll component is an implementation of MPI collective
> -  operations that takes advantage of communication hierarchies
> -  in modern systems. A ML collective operation is implemented by
> +  operations that takes advantage of communication hierarchies in
> +  modern systems. A ML collective operation is implemented by
>    combining multiple independently progressing collective primitives
>    implemented over different communication hierarchies, hence a ML
> -  collective operation is also referred to as a hierarchical collective
> -  operation. The number of collective primitives that are included in a
> -  ML collective operation is a function of subgroups(hierarchies).
> -  Typically, MPI processes in a single communication hierarchy such as
> -  CPU socket, node, or subnet are grouped together into a single subgroup
> -  (hierarchy). The number of subgroups are configurable at runtime,
> -  and each different collective operation could be configured to have
> -  a different of number of subgroups.
> +  collective operation is also referred to as a hierarchical
> +  collective operation. The number of collective primitives that are
> +  included in a ML collective operation is a function of
> +  subgroups(hierarchies).  Typically, MPI processes in a single
> +  communication hierarchy such as CPU socket, node, or subnet are
> +  grouped together into a single subgroup (hierarchy). The number of
> +  subgroups are configurable at runtime, and each different collective
> +  operation could be configured to have a different of number of
> +  subgroups.
>
>    The component frameworks and components used by/required for a
>    "ML" collective operation.
>
>    Frameworks:
> -  * "sbgp" - Provides functionality for grouping processes into subgroups
> +  * "sbgp" - Provides functionality for grouping processes into
> +             subgroups
>    * "bcol" - Provides collective primitives optimized for a particular
>               communication hierarchy
>
>    Components:
> -  * sbgp components     - Provides grouping functionality over a CPU socket
> -                          ("basesocket"), shared memory ("basesmuma"),
> -                          Mellanox's ConnectX HCA ("ibnet"), and other
> -                          interconnects supported by PML ("p2p")
> -
> -  * BCOL components     - Provides optimized collective primitives for
> -                          shared memory ("basesmuma"), Mellanox's ConnectX
> -                          HCA ("iboffload"), and other interconnects
> supported
> -                          by PML ("ptpcoll")
> +  * sbgp components - Provides grouping functionality over a CPU
> +                      socket ("basesocket"), shared memory
> +                      ("basesmuma"), Mellanox's ConnectX HCA
> +                      ("ibnet"), and other interconnects supported by
> +                      PML ("p2p")
> +  * BCOL components - Provides optimized collective primitives for
> +                      shared memory ("basesmuma"), Mellanox's ConnectX
> +                      HCA ("iboffload"), and other interconnects
> +                      supported by PML ("ptpcoll")
>
>  - The "cuda" coll component provides CUDA-aware support for the
>    reduction type collectives with GPU buffers. This component is only
> @@ -1002,10 +1004,11 @@ RUN-TIME SYSTEM SUPPORT
>    most cases.  This option is only needed for special configurations.
>
>  --with-pmi
> -  Build PMI support (by default on non-Cray XE/XC systems, it is not built).
> -  On Cray XE/XC systems, the location of pmi is detected automatically as
> -  part of the configure process.  For non-Cray systems, if the pmi2.h header
> -  is found in addition to pmi.h, then support for PMI2 will be built.
> +  Build PMI support (by default on non-Cray XE/XC systems, it is not
> +  built).  On Cray XE/XC systems, the location of pmi is detected
> +  automatically as part of the configure process.  For non-Cray
> +  systems, if the pmi2.h header is found in addition to pmi.h, then
> +  support for PMI2 will be built.
>
>  --with-slurm
>    Force the building of SLURM scheduler support.
> @@ -1635,9 +1638,9 @@ Open MPI API Extensions
>  -----------------------
>
>  Open MPI contains a framework for extending the MPI API that is
> -available to applications.  Each extension is usually a standalone set of
> -functionality that is distinct from other extensions (similar to how
> -Open MPI's plugins are usually unrelated to each other).  These
> +available to applications.  Each extension is usually a standalone set
> +of functionality that is distinct from other extensions (similar to
> +how Open MPI's plugins are usually unrelated to each other).  These
>  extensions provide new functions and/or constants that are available
>  to MPI applications.
>
> @@ -1955,9 +1958,9 @@ Here's how the three sub-groups are defined:
>      get their MPI/OSHMEM application to run correctly.
>   2. Application tuner: Generally, these are parameters that can be
>      used to tweak MPI application performance.
> - 3. MPI/OSHMEM developer: Parameters that either don't fit in the other two,
> -    or are specifically intended for debugging / development of Open
> -    MPI itself.
> + 3. MPI/OSHMEM developer: Parameters that either don't fit in the
> +    other two, or are specifically intended for debugging /
> +    development of Open MPI itself.
>
>  Each sub-group is broken down into three classifications:
>
>
>
> -----------------------------------------------------------------------
>
> Summary of changes:
>  README | 67 ++++++++++++++++++++++++++++++++++--------------------------------
>  1 file changed, 35 insertions(+), 32 deletions(-)
>
>
> hooks/post-receive
> --
> open-mpi/ompi
> _______________________________________________
> ompi-commits mailing list
> ompi-comm...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/ompi-commits
>
