On Apr 29, 2016, at 2:00 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
> Didn’t OSHMEM up-level its API?

Yeah, actually, we don't have much in NEWS about OSHMEM -- Mellanox: what should be in there?

> I believe we also have some early support in there for DVM and Singularity,
> but not the full-blown capability that is in master. Unsure if we want to
> advertise that for 2.0, maybe wait for the updates in 2.1? Up to you.

Do you have enough support in 2.0.0 that you want to say something about it in NEWS / this migration guide?

>> On Apr 29, 2016, at 10:55 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>>
>> I'm thinking something like a simple "User's migration guide: 1.8.x/1.10.x --> 2.0.0"
>>
>> Here are the big topics I see so far:
>>
>> User-noticeable changes
>> (i.e., things that may prevent users from simply re-compiling / re-mpirun'ing their existing MPI app)
>> -----------------------
>> - mpirun -np behavior
>> - OMPIO is now the default (not ROMIO)
>> - ...more?
>>
>> New features
>> ------------
>> - Launch scalability improvements (i.e., support for PMIx)
>> - Lots of improvements to MPI RMA
>> - Improved support for MPI_THREAD_MULTIPLE
>> - ompi_info pretty-print improvements
>> - UCX support
>> - PLFS support (via OMPIO)
>> - Better Cray build / SLURM support
>> - ...more?
>>
>> Removed support
>> ---------------
>> - OS X Leopard
>> - Cray XT
>> - VampirTrace
>> - Myrinet MX / Open-MX
>> - coll:ml module
>> - Alpha processors
>> - --enable-mpi-profiling option
>> - Checkpoint / restart
>> - ...more?
>>
>>> On Apr 29, 2016, at 1:21 PM, Howard Pritchard <hpprit...@gmail.com> wrote:
>>>
>>> Hi Jeff,
>>>
>>> Checkpoint/restart is not supported in this release.
>>>
>>> Does this release work with TotalView? I recall we had some problems, and I do not remember whether they were resolved.
>>>
>>> We may also want to clarify whether any PMLs/MTLs are experimental in this release.
>>>
>>> MPI_THREAD_MULTIPLE support.
>>>
>>> Howard
>>>
>>> 2016-04-29 10:34 GMT-06:00 Cabral, Matias A <matias.a.cab...@intel.com>:
>>> How about for developers who have not been following the transition from 1.x to 2.0? Particularly myself. :) I started contributing to some specific parts (the psm2 MTL) and following changes; however, I don't have the details of what is changing in 2.0. I can see different levels of detail for the "developer's transition guide", ranging from architectural changes to what pieces were moved where.
>>>
>>> Thanks,
>>>
>>> _MAC
>>>
>>> From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Joshua Ladd
>>> Sent: Friday, April 29, 2016 7:11 AM
>>> To: Open MPI Developers <de...@open-mpi.org>
>>> Subject: Re: [OMPI devel] 2.0.0 is coming: what do we need to communicate to users?
>>>
>>> Certainly we need to communicate / advertise / evangelize the improvements in job launch -- the largest and most substantial change between the two branches -- and provide some best-practice guidelines for usage (use the direct modex for applications with sparse communication patterns and the full modex for dense ones). I would be happy to contribute some paragraphs.
>>>
>>> Obviously, we also need to communicate and reiterate the need to recompile codes built against the 1.10 series.
>>>
>>> Best,
>>>
>>> Josh
>>>
>>> On Thursday, April 28, 2016, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>>>
>>> We're getting darn close to v2.0.0.
>>>
>>> What "gotchas" do we need to communicate to users? I.e., what will people upgrading from v1.8.x/v1.10.x be surprised by?
>>>
>>> The most obvious one I can think of is mpirun requiring -np when slots are not specified somehow.
>>>
>>> What else do we need to communicate? It would be nice to avoid the confusion users experienced regarding affinity functionality/options when upgrading from v1.6 -> v1.8 (because we didn't communicate those changes well, IMHO).
>>>
>>> --
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> For corporate legal information go to:
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>
>>> _______________________________________________
>>> devel mailing list
>>> de...@open-mpi.org
>>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/devel
>>> Link to this post:
>>> http://www.open-mpi.org/community/lists/devel/2016/04/18832.php

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
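P.S. For the mpirun -np item, the migration guide could include a short before/after sketch along these lines (a hedged illustration only -- the exact v2.0.0 defaults should be double-checked against the mpirun man page before publishing, and `my_mpi_app` is a placeholder name):

```shell
# v1.8.x / v1.10.x: with no -np, mpirun filled the available slots
# (e.g., one process per core on the local host):
mpirun ./my_mpi_app

# v2.0.0: when slots are not specified somehow (hostfile, RM allocation,
# etc.), the number of processes must be given explicitly:
mpirun -np 4 ./my_mpi_app
```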