Re: [OMPI devel] Open MPI v2.1.4rc1

2018-08-10 Thread Jeff Squyres (jsquyres) via devel
Thanks, Geoffroy.

I'm not worried about this for v2.1.4, and the UCX community hasn't 
responded, so I'm going to release 2.1.4 as-is.


> On Aug 9, 2018, at 3:33 PM, Vallee, Geoffroy R.  wrote:
> 
> Hi,
> 
> I tested on Summitdev here at ORNL; here are my comments (I only have a 
> limited set of data for Summitdev, so my feedback is somewhat limited):
> - netpipe/mpi shows slightly lower bandwidth than the 3.x series (I do 
> not believe this is a problem).
> - I am facing a problem with UCX.  It is unclear to me whether it is 
> relevant, since I am using UCX master and I do not know whether it is 
> expected to work with OMPI v2.1.x.  Note that I use the same tool to test 
> all other releases of Open MPI and have never had this problem before, 
> bearing in mind that I have only tested the 3.x series so far.
> 
> make[2]: Entering directory 
> `/autofs/nccs-svm1_home1/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_build/ompi/mca/pml/ucx'
> /bin/sh ../../../../libtool  --tag=CC   --mode=link gcc -std=gnu99  -O3 
> -DNDEBUG -finline-functions -fno-strict-aliasing -pthread -module 
> -avoid-version  -o mca_pml_ucx.la -rpath 
> /ccs/home/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_install/lib/openmpi
>  pml_ucx.lo pml_ucx_request.lo pml_ucx_datatype.lo pml_ucx_component.lo -lucp 
>  -lrt -lm -lutil  
> libtool: link: gcc -std=gnu99 -shared  -fPIC -DPIC  .libs/pml_ucx.o 
> .libs/pml_ucx_request.o .libs/pml_ucx_datatype.o .libs/pml_ucx_component.o   
> -lucp -lrt -lm -lutil  -O3 -pthread   -pthread -Wl,-soname -Wl,mca_pml_ucx.so 
> -o .libs/mca_pml_ucx.so
> /usr/bin/ld: cannot find -lucp
> collect2: error: ld returned 1 exit status
> make[2]: *** [mca_pml_ucx.la] Error 1
> make[2]: Leaving directory 
> `/autofs/nccs-svm1_home1/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_build/ompi/mca/pml/ucx'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory 
> `/autofs/nccs-svm1_home1/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_build/ompi'
> make: *** [all-recursive] Error 1
> 
> My 2 cents,
> 
>> On Aug 6, 2018, at 5:04 PM, Jeff Squyres (jsquyres) via devel 
>>  wrote:
>> 
>> Open MPI v2.1.4rc1 has been pushed.  It is likely going to be the last in 
>> the v2.1.x series (since v4.0.0 is now visible on the horizon).  It is just 
>> a bunch of bug fixes that have accumulated since v2.1.3; nothing huge.  
>> We'll encourage users who are still using the v2.1.x series to upgrade to 
>> this release; it should be a non-event for anyone who has already upgraded 
>> to the v3.0.x or v3.1.x series.
>> 
>>   https://www.open-mpi.org/software/ompi/v2.1/
>> 
>> If no serious-enough issues are found, we plan to release 2.1.4 this Friday, 
>> August 10, 2018.
>> 
>> Please test!
>> 
>> Bug fixes/minor improvements:
>> - Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
>>   still not a supported platform, but it is no longer automatically
>>   disabled.  See
>>   https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
>>   for more information.
>> - Fix bug with request-based one-sided MPI operations when using the
>>   "rdma" component.
>> - Fix issue with large data structure in the TCP BTL causing problems
>>   in some environments.  Thanks to @lgarithm for reporting the issue.
>> - Minor Cygwin build fixes.
>> - Minor fixes for the openib BTL:
>>   - Support for the QLogic RoCE HCA
>>   - Support for the Broadcom Cumulus RoCE HCA
>>   - Enable support for HDR link speeds
>> - Fix MPI_FINALIZED hang if invoked from an attribute destructor
>>   during the MPI_COMM_SELF destruction in MPI_FINALIZE.  Thanks to
>>   @AndrewGaspar for reporting the issue.
>> - Java fixes:
>>   - Modernize Java framework detection, especially on OS X/MacOS.
>>     Thanks to Bryce Glover for reporting and submitting the fixes.
>>   - Prefer "javac -h" to "javah" to support newer Java frameworks.
>> - Fortran fixes:
>>   - Use conformant dummy parameter names for Fortran bindings.  Thanks
>>     to Themos Tsikas for reporting and submitting the fixes.
>>   - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
>>     whenever possible.  Thanks to Themos Tsikas for reporting the
>>     issue.
>>   - Fix array of argv handling for the Fortran bindings of
>>     MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
>>   - Make NAG Fortran compiler support more robust in configure.
>> - Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
>>   is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
>>   scenarios, and will not be fixed in the v2.1.x series.
>> - Make the "external" hwloc component fail gracefully if it tries
>>   to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
>>   supported in the Open MPI v2.1.x series.
>> - Fix "vader" shared memory support for messages larger than 2GB.
>>   Thanks to Heiko Bauke for the bug report.
>> - Configure fixes for external PMI directory detection.  Thanks to
>>   Davide Vanzo for the report.

Re: [OMPI devel] Open MPI v2.1.4rc1

2018-08-09 Thread Vallee, Geoffroy R.
Hi,

I tested on Summitdev here at ORNL; here are my comments (I only have a 
limited set of data for Summitdev, so my feedback is somewhat limited):
- netpipe/mpi shows slightly lower bandwidth than the 3.x series (I do 
not believe this is a problem).
- I am facing a problem with UCX.  It is unclear to me whether it is 
relevant, since I am using UCX master and I do not know whether it is 
expected to work with OMPI v2.1.x.  Note that I use the same tool to test 
all other releases of Open MPI and have never had this problem before, 
bearing in mind that I have only tested the 3.x series so far.

make[2]: Entering directory 
`/autofs/nccs-svm1_home1/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_build/ompi/mca/pml/ucx'
/bin/sh ../../../../libtool  --tag=CC   --mode=link gcc -std=gnu99  -O3 
-DNDEBUG -finline-functions -fno-strict-aliasing -pthread -module 
-avoid-version  -o mca_pml_ucx.la -rpath 
/ccs/home/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_install/lib/openmpi
 pml_ucx.lo pml_ucx_request.lo pml_ucx_datatype.lo pml_ucx_component.lo -lucp  
-lrt -lm -lutil  
libtool: link: gcc -std=gnu99 -shared  -fPIC -DPIC  .libs/pml_ucx.o 
.libs/pml_ucx_request.o .libs/pml_ucx_datatype.o .libs/pml_ucx_component.o   
-lucp -lrt -lm -lutil  -O3 -pthread   -pthread -Wl,-soname -Wl,mca_pml_ucx.so 
-o .libs/mca_pml_ucx.so
/usr/bin/ld: cannot find -lucp
collect2: error: ld returned 1 exit status
make[2]: *** [mca_pml_ucx.la] Error 1
make[2]: Leaving directory 
`/autofs/nccs-svm1_home1/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_build/ompi/mca/pml/ucx'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
`/autofs/nccs-svm1_home1/gvh/.ompi-release-tester/scratch/summitdev/2.1.4rc1/scratch/UCX/ompi_build/ompi'
make: *** [all-recursive] Error 1
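
For what it's worth, the failure itself is just the linker not finding libucp 
(the UCX library): the pml/ucx component is being built, but the UCX library 
directory does not appear on the link line.  My guess, and it is only a guess 
since I have not re-run the build, is that pointing configure explicitly at 
the UCX install prefix would avoid it, e.g. (the path below is only a 
placeholder for wherever UCX master is installed):

  ./configure --with-ucx=/path/to/ucx-install

plus the usual options; if the library lives in a non-standard subdirectory, 
--with-ucx-libdir may also be needed (assuming the v2.1.x configure accepts 
these options the same way the 3.x series does).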

My 2 cents,

> On Aug 6, 2018, at 5:04 PM, Jeff Squyres (jsquyres) via devel 
>  wrote:
> 
> Open MPI v2.1.4rc1 has been pushed.  It is likely going to be the last in the 
> v2.1.x series (since v4.0.0 is now visible on the horizon).  It is just a 
> bunch of bug fixes that have accumulated since v2.1.3; nothing huge.  We'll 
> encourage users who are still using the v2.1.x series to upgrade to this 
> release; it should be a non-event for anyone who has already upgraded to the 
> v3.0.x or v3.1.x series.
> 
>    https://www.open-mpi.org/software/ompi/v2.1/
> 
> If no serious-enough issues are found, we plan to release 2.1.4 this Friday, 
> August 10, 2018.
> 
> Please test!
> 
> Bug fixes/minor improvements:
> - Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
>  still not a supported platform, but it is no longer automatically
>  disabled.  See
>  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
>  for more information.
> - Fix bug with request-based one-sided MPI operations when using the
>  "rdma" component.
> - Fix issue with large data structure in the TCP BTL causing problems
>  in some environments.  Thanks to @lgarithm for reporting the issue.
> - Minor Cygwin build fixes.
> - Minor fixes for the openib BTL:
>  - Support for the QLogic RoCE HCA
>  - Support for the Broadcom Cumulus RoCE HCA
>  - Enable support for HDR link speeds
> - Fix MPI_FINALIZED hang if invoked from an attribute destructor
>  during the MPI_COMM_SELF destruction in MPI_FINALIZE.  Thanks to
>  @AndrewGaspar for reporting the issue.
> - Java fixes:
>  - Modernize Java framework detection, especially on OS X/MacOS.
>Thanks to Bryce Glover for reporting and submitting the fixes.
>  - Prefer "javac -h" to "javah" to support newer Java frameworks.
> - Fortran fixes:
>  - Use conformant dummy parameter names for Fortran bindings.  Thanks
>to Themos Tsikas for reporting and submitting the fixes.
>  - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
>whenever possible.  Thanks to Themos Tsikas for reporting the
>issue.
>  - Fix array of argv handling for the Fortran bindings of
>MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
>  - Make NAG Fortran compiler support more robust in configure.
> - Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
>  is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
>  scenarios, and will not be fixed in the v2.1.x series.
> - Make the "external" hwloc component fail gracefully if it is tries
>  to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
>  supported in the Open MPI v2.1.x series.
> - Fix "vader" shared memory support for messages larger than 2GB.
>  Thanks to Heiko Bauke for the bug report.
> - Configure fixes for external PMI directory detection.  Thanks to
>  Davide Vanzo for the report.
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 
> ___
> devel mailing list
> devel@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/devel

___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel

[OMPI devel] Open MPI v2.1.4rc1

2018-08-06 Thread Jeff Squyres (jsquyres) via devel
Open MPI v2.1.4rc1 has been pushed.  It is likely going to be the last in the 
v2.1.x series (since v4.0.0 is now visible on the horizon).  It is just a bunch 
of bug fixes that have accumulated since v2.1.3; nothing huge.  We'll encourage 
users who are still using the v2.1.x series to upgrade to this release; it 
should be a non-event for anyone who has already upgraded to the v3.0.x or 
v3.1.x series.

https://www.open-mpi.org/software/ompi/v2.1/

If no serious-enough issues are found, we plan to release 2.1.4 this Friday, 
August 10, 2018.

Please test!

Bug fixes/minor improvements:
- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.
- Fix bug with request-based one-sided MPI operations when using the
  "rdma" component.
- Fix issue with large data structure in the TCP BTL causing problems
  in some environments.  Thanks to @lgarithm for reporting the issue.
- Minor Cygwin build fixes.
- Minor fixes for the openib BTL:
  - Support for the QLogic RoCE HCA
  - Support for the Broadcom Cumulus RoCE HCA
  - Enable support for HDR link speeds
- Fix MPI_FINALIZED hang if invoked from an attribute destructor
  during the MPI_COMM_SELF destruction in MPI_FINALIZE.  Thanks to
  @AndrewGaspar for reporting the issue.  (A minimal sketch of this
  pattern appears after this list.)
- Java fixes:
  - Modernize Java framework detection, especially on OS X/MacOS.
Thanks to Bryce Glover for reporting and submitting the fixes.
  - Prefer "javac -h" to "javah" to support newer Java frameworks.
- Fortran fixes:
  - Use conformant dummy parameter names for Fortran bindings.  Thanks
to Themos Tsikas for reporting and submitting the fixes.
  - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
whenever possible.  Thanks to Themos Tsikas for reporting the
issue.
  - Fix array of argv handling for the Fortran bindings of
MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
  - Make NAG Fortran compiler support more robust in configure.
- Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
  is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
  scenarios, and will not be fixed in the v2.1.x series.
- Make the "external" hwloc component fail gracefully if it is tries
  to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
  supported in the Open MPI v2.1.x series.
- Fix "vader" shared memory support for messages larger than 2GB.
  Thanks to Heiko Bauke for the bug report.
- Configure fixes for external PMI directory detection.  Thanks to
  Davide Vanzo for the report.
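
For reference, here is the kind of pattern the MPI_FINALIZED fix addresses: an
attribute delete callback attached to MPI_COMM_SELF that calls MPI_FINALIZED
while MPI_FINALIZE is tearing the communicator down.  This is only a minimal
sketch (names are illustrative; it is not taken from the original report):

#include <mpi.h>
#include <stdio.h>

/* Delete callback; MPI_Finalize destroys the attributes on MPI_COMM_SELF
   early in finalization, which is what invokes this function. */
static int self_attr_delete(MPI_Comm comm, int keyval, void *attr_val,
                            void *extra_state)
{
    int finalized = 0;
    /* MPI_Finalized may be called at any time; this call used to hang here. */
    MPI_Finalized(&finalized);
    printf("delete callback: MPI_Finalized reports %d\n", finalized);
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    int keyval;
    MPI_Init(&argc, &argv);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, self_attr_delete,
                           &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
    MPI_Finalize();   /* triggers the delete callback on MPI_COMM_SELF */
    return 0;
}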
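
The "pt2pt" one-sided change affects programs like the following sketch, which
requests MPI_THREAD_MULTIPLE and then creates an RMA window; with this release
the "pt2pt" osc component is simply excluded from selection in that case.  The
sketch only shows the user-visible pattern, not the component internals:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided;
    int *buf;
    MPI_Win win;

    /* Ask for full thread support; "provided" reports what was granted. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    buf = malloc(1024 * sizeof(int));

    /* Window creation selects a one-sided (osc) component; under
       MPI_THREAD_MULTIPLE the "pt2pt" component will no longer be chosen. */
    MPI_Win_create(buf, 1024 * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_free(&win);
    free(buf);
    MPI_Finalize();
    return 0;
}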
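
For the "vader" fix, note that a single message larger than 2GB cannot be
expressed with a plain int count of MPI_BYTE, so it typically goes through a
derived datatype.  A sketch of such a transfer between two ranks on the same
node (sizes are illustrative only; run with at least two ranks):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Datatype chunk;            /* one element = 1 MiB of bytes */
    const int nchunks = 3072;      /* 3072 MiB = 3 GiB total, i.e. > 2GB */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Type_contiguous(1 << 20, MPI_BYTE, &chunk);
    MPI_Type_commit(&chunk);

    buf = malloc((size_t)nchunks << 20);   /* 3 GiB buffer, for illustration */

    if (rank == 0) {
        MPI_Send(buf, nchunks, chunk, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, nchunks, chunk, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&chunk);
    free(buf);
    MPI_Finalize();
    return 0;
}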


-- 
Jeff Squyres
jsquy...@cisco.com

___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel