[OMPI devel] New "State" labels in github

2018-09-18 Thread Jeff Squyres (jsquyres) via devel
Brian and I just added some new "State" labels on GitHub to help with managing 
all the open issues.  Please add "State" labels to your open issues and keep 
them up to date.

See this wiki page for more information (might wanna bookmark it):


https://github.com/open-mpi/ompi/wiki/SubmittingBugs#assign-appropriate-labels

Thank you!

-- 
Jeff Squyres
jsquy...@cisco.com

___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel


Re: [OMPI devel] MTT Perl client

2018-09-18 Thread Ralph H Castain
Are we good to go with this changeover? If so, I’ll delete the Perl client from 
the main MTT repo.

> On Sep 14, 2018, at 10:06 AM, Jeff Squyres (jsquyres) via devel 
>  wrote:
> 
> On Sep 14, 2018, at 12:37 PM, Gilles Gouaillardet 
>  wrote:
>> 
>> IIRC mtt-relay is not only a proxy (squid can do that too).
> 
> Probably true.  IIRC, mtt-relay was meant to be a 
> dirt-stupid-but-focused-on-just-one-destination relay.
> 
>> MTT results can be manually copied from a cluster behind a firewall, and 
>> then mtt-relay can “upload” these results to mtt.open-mpi.org
> 
> Yes, but then a human has to be involved, which kinda defeats at least one of 
> the goals of MTT.  Using mtt-relay allowed MTT to still function in an 
> automated fashion.
> 
> FWIW, it may not be necessary to convert mtt-relay to Python (IIRC it's 
> protocol-agnostic, but like I said, it's been quite a while since I've looked 
> at that code).  It was pretty small and straightforward.  It could also just 
> stay in mtt-legacy.
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 
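
For reference, a dirt-simple, protocol-agnostic, single-destination relay of 
the kind described above can be sketched in a couple dozen lines of Python.  
This is only an illustration of the idea -- it is not the actual mtt-relay 
code, and the listen port and destination below are assumptions:

    #!/usr/bin/env python
    # Minimal sketch of a protocol-agnostic, single-destination TCP relay.
    # Hypothetical illustration only; port and destination are made up.
    import socket
    import threading

    LISTEN_PORT = 8080                  # port used by clients behind the firewall
    DEST = ("mtt.open-mpi.org", 80)     # the one fixed destination

    def pump(src, dst):
        """Copy bytes one way until the source side closes."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        try:
            dst.shutdown(socket.SHUT_WR)
        except socket.error:
            pass

    def handle(client):
        # Open one upstream connection per client and relay both directions;
        # no knowledge of the protocol being carried is needed.
        upstream = socket.create_connection(DEST)
        threading.Thread(target=pump, args=(client, upstream)).start()
        threading.Thread(target=pump, args=(upstream, client)).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", LISTEN_PORT))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        handle(conn)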

___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel

Re: [OMPI devel] Announcing Open MPI v4.0.0rc1

2018-09-18 Thread Jeff Squyres (jsquyres) via devel
On Sep 18, 2018, at 3:46 PM, Thananon Patinyasakdikul  
wrote:
> 
> I tested on our cluster (UTK). I will give a thumbs up, but I have some 
> comments.
> 
> What I understand about 4.0:
> - openib btl is disabled by default (can be turned on by mca)

It is disabled by default *for InfiniBand*.  It is still enabled by default for 
RoCE and iWARP.
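
For example, re-enabling openib for InfiniBand in the 4.0 release candidate 
should look something like the sketch below.  The parameter name 
btl_openib_allow_ib and the application name are assumptions here; please 
verify with ompi_info before relying on this:

    # Hypothetical example: force the openib BTL onto InfiniBand in v4.0.0rc1.
    # Verify the parameter with: ompi_info --param btl openib --level 9
    mpirun --mca btl openib,self,vader --mca btl_openib_allow_ib true ./my_mpi_app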

> - pml ucx will be the default for InfiniBand hardware.
> - btl uct is for one-sided, but can also be used for two-sided (needs an 
> explicit MCA setting).
> 
> My question is: what if the user does not have UCX installed (but has 
> InfiniBand hardware)? The user will not have a fast transport for their 
> hardware. In my testing, this release falls back to btl/tcp if I don't 
> specify MCA parameters to use uct or force openib. Will this be a problem?

This is a question for Mellanox.

-- 
Jeff Squyres
jsquy...@cisco.com

___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel


Re: [OMPI devel] Announcing Open MPI v4.0.0rc1

2018-09-18 Thread Thananon Patinyasakdikul
Hi,

I tested on our cluster (UTK). I will give a thumbs up, but I have some comments.

What I understand about 4.0:
- openib btl is disabled by default (can be turned on by mca)
- pml ucx will be the default for InfiniBand hardware.
- btl uct is for one-sided, but can also be used for two-sided (needs an 
explicit MCA setting).

My question is: what if the user does not have UCX installed (but has 
InfiniBand hardware)? The user will not have a fast transport for their 
hardware. In my testing, this release falls back to btl/tcp if I don't specify 
MCA parameters to use uct or force openib. Will this be a problem?
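
For concreteness, the explicit selections mentioned above would presumably look 
something like the following.  These are sketches based on the 4.0 release 
candidate; the parameter names (especially btl_uct_memory_domains) and the 
memory-domain value are assumptions, so please check ompi_info on your system:

    # Use UCX as the point-to-point layer (the expected default on InfiniBand):
    mpirun --mca pml ucx ./my_mpi_app

    # Or run two-sided traffic over btl/uct via ob1 (needs explicit MCA settings;
    # the memory-domain value below is a made-up example for a Mellanox HCA):
    mpirun --mca pml ob1 --mca btl self,vader,uct \
           --mca btl_uct_memory_domains ib/mlx5_0 ./my_mpi_app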

Arm

> On Sep 16, 2018, at 3:31 PM, Geoffrey Paulsen  wrote:
> 
> The first release candidate for the Open MPI v4.0.0 release is posted at 
> https://www.open-mpi.org/software/ompi/v4.0/ 
> 
> Major changes include:
> 
> 4.0.0 -- September, 2018
> 
> 
> - OSHMEM updated to the OpenSHMEM 1.4 API.
> - Do not build the Open SHMEM layer when there are no SPMLs available.
>   Currently, this means the Open SHMEM layer will only build if
>   an MXM or UCX library is found.
> - A UCX BTL was added for enhanced MPI RMA support using UCX.
> - With this release, the OpenIB BTL now only supports iWARP and RoCE by default.
> - Updated internal HWLOC to 2.0.1.
> - Updated internal PMIx to 3.0.1.
> - Change the priority for selecting external versus internal HWLOC
>   and PMIx packages to build.  Starting with this release, configure
>   by default selects available external HWLOC and PMIx packages over
>   the internal ones.
> - Updated internal ROMIO to 3.2.1.
> - Removed support for the MXM MTL.
> - Improved CUDA support when using UCX.
> - Improved support for two phase MPI I/O operations when using OMPIO.
> - Added support for Software-based Performance Counters, see
>   https://github.com/davideberius/ompi/wiki/How-to-Use-Software-Based-Performance-Counters-(SPCs)-in-Open-MPI
> - Various improvements to MPI RMA performance when using RDMA
>   capable interconnects.
> - Update memkind component to use the memkind 1.6 public API.
> - Fix problems with use of newer map-by mpirun options.  Thanks to
>   Tony Reina for reporting.
> - Fix rank-by algorithms to properly rank by object and span.
> - Allow for running as root if two environment variables are set.
>   Requested by Axel Huebl.
> - Fix a problem with building the Java bindings when using Java 10.
>   Thanks to Bryce Glover for reporting.
> 
> Our goal is to release 4.0.0 by mid-October, so any testing is appreciated.
> 
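
Two of the items above lend themselves to quick illustrations.  Both sketches 
assume the option and variable names match the release candidate; please verify 
against ./configure --help and the NEWS file:

    # Prefer the bundled (internal) hwloc and PMIx even when external installs
    # are present ("internal" as the option value is an assumption here):
    ./configure --with-hwloc=internal --with-pmix=internal

    # Running as root is believed to require both of these variables to be set:
    export OMPI_ALLOW_RUN_AS_ROOT=1
    export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1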

___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel