Re: [OMPI users] OpenMPI (1.8.3) and environment variable export

2015-06-12 Thread borno_borno

Perfect, thanks a lot!

> Sent: Friday, 12 June 2015 at 17:13
> From: "Noam Bernstein" 
> To: "Open MPI Users" 
> Subject: Re: [OMPI users] OpenMPI (1.8.3) and environment variable export
>
> > On Jun 12, 2015, at 11:08 AM, borno_bo...@gmx.de wrote:
> > 
> > Hey there, 
> > 
> > I know that variable export in general can be done with the -x option of 
> > mpirun, but I guess that won't help me.
> > More precisely, I have a heterogeneous cluster (the number of cores per 
> > CPU varies) and one process on each node. The application I need to launch 
> > uses hybrid MPI+OpenMP parallelization, and I have to set the OMP_NUM_THREADS 
> > variable so that it matches the number of cores on each node. 
> > 
> > I cannot modify the application to get the number of cores from within 
> > the process; I can only launch it.
> > 
> > Is there any way to do this?
> 
> You could wrap the executable in a shell script that gets the number of cores 
> (from /proc/cpuinfo?), sets OMP_NUM_THREADS, and execs the executable, 
> passing along the command-line arguments ("$@").  Then you mpirun the script 
> you created.
> 
>   
> 
> Noam


Re: [OMPI users] Undefined ompi_mpi_info_null issue

2015-06-12 Thread Ray Sheppard
Just a follow-up.  RPATH was the trouble.  All is well now in the land 
of the climatologists again.  Thanks again for the help.

Ray
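
For anyone who hits the same symbol error, a sketch of this kind of fix
(the OpenMPI path is illustrative, and whether a given h5py build honors
LDFLAGS depends on the local setup):

    # embed an rpath so _errors.so can find libmpi.so at import time
    $ export LDFLAGS="-Wl,-rpath,/path/to/openmpi-1.8.4/lib"
    $ python setup.py build
    # verify: libmpi.so should now show up as resolved
    $ ldd build/lib.linux-x86_64-2.7/h5py/_errors.so | grep libmpi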


On 6/12/2015 8:00 AM, Ray Sheppard wrote:

Thanks again Gilles,
  You might be on to something.  Dynamic libraries sound like 
something a Python developer might love (no offense intended to the 
stereotype). It would also explain why the build went smoothly but the 
test run crashed.  I am going to try putting an RPATH variable in the 
environment and rebuilding.

  Ray

On 6/12/2015 7:15 AM, Gilles Gouaillardet wrote:

Ray,

one possibility is that one of the loaded libraries was built with -rpath
and this causes the mess


another option is that you need to link _errors.so against libmpi.so

Cheers,

Gilles

On Friday, June 12, 2015, Ray Sheppard wrote:


Hi Gilles,
  Thanks for the reply. I completely forgot that it lived in the
main library.  ldd doesn't show that it read my LD_LIBRARY_PATH
(I also push out an LPATH variable just for fun).  I force
modules to echo when users initialize them.  You can see
OpenMPI was visible to H5py.  Now I wonder why it didn't pick it
up...  Thanks again.
  Ray
GMP arithmetic library version 5.1.1 loaded.
MPFR version 3.1.1 loaded.
Mpc version 1.0.1 loaded.
gcc version 4.9.2 loaded.
Moab Workload Manager scheduling and management system version
7.1.1 loaded.
Python programming language version 2.7.3 loaded.
Perl programming language version 5.16.2 loaded.
Intel compiler suite version 15.0.1 loaded.
OpenMPI libraries (Intel) version 1.8.4 loaded.
TotalView version 8.15.0-15 loaded.
FFTW (Intel, Double precision) version 3.3.3 loaded.
hdf4 version 4.2.10 loaded.
Curl version 7.28.1 loaded.
HDF5 (MPI) version 1.8.14 loaded.
netcdf-c version 4.3.3 loaded.
netcdf-fortran version 4.4.1 loaded.
Gnuplot graphing utility version 4.6.1 loaded.
[rsheppar@h2 ~]$ ldd

/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so
linux-vdso.so.1 =>  (0x7fff39db7000)
libpthread.so.0 => /lib64/libpthread.so.0
(0x7facfe887000)
libc.so.6 => /lib64/libc.so.6 (0x7facfe4f3000)
/lib64/ld-linux-x86-64.so.2 (0x7facff049000)


On 6/11/2015 8:09 PM, Gilles Gouaillardet wrote:

Ray,

this symbol is defined in libmpi.so.

can you run
ldd

/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so
and make sure this is linked with openmpi 1.8.4 ?

Cheers,

Gilles

On 6/12/2015 1:29 AM, Ray Sheppard wrote:

Hi List,
  I know I saw this issue years ago but have forgotten the
details. I looked through old posts but only found about half a
dozen pertaining to WinDoze.  I am trying to build a Python
(2.7.3) extension (h5py) that calls HDF5 (1.8.14).  I built
both the OpenMPI (1.8.4) and the HDF5 modules so I know they
are consistent.  All goes well until I try to run the tests.
Then I get:

ImportError:

/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so:
undefined symbol: ompi_mpi_info_null

I am not sure I completely trust the h5py package, but I don't
have a good reason for that suspicion.  I would
appreciate it if someone could explain where ompi_mpi_info_null
is defined and possibly a way to tell Python about it.  Thanks!
Ray















--
 Respectfully,
   Ray Sheppard
   rshep...@iu.edu
   http://rt.uits.iu.edu/systems/SciAPT
   317-274-0016

   Principal Analyst
   Senior Technical Lead
   Scientific Applications and Performance Tuning
   Research Technologies
   University Information Technology Services
   IUPUI campus
   Indiana University

   My "pithy" saying:  Science is the art of 

Re: [OMPI users] OpenMPI (1.8.3) and environment variable export

2015-06-12 Thread Noam Bernstein
> On Jun 12, 2015, at 11:08 AM, borno_bo...@gmx.de wrote:
> 
> Hey there, 
> 
> I know that variable export in general can be done with the -x option of 
> mpirun, but I guess that won't help me.
> More precisely, I have a heterogeneous cluster (the number of cores per CPU 
> varies) and one process on each node. The application I need to launch uses 
> hybrid MPI+OpenMP parallelization, and I have to set the OMP_NUM_THREADS 
> variable so that it matches the number of cores on each node. 
> 
> I cannot modify the application to get the number of cores from within 
> the process; I can only launch it.
> 
> Is there any way to do this?

You could wrap the executable in a shell script that gets the number of cores 
(from /proc/cpuinfo?), sets OMP_NUM_THREADS, and execs the executable, passing 
along the command-line arguments ("$@").  Then you mpirun the script you created.
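
A minimal sketch of such a wrapper (untested; the script name, core-count
command, and application path are all placeholders):

    #!/bin/sh
    # wrapper.sh: set OMP_NUM_THREADS to this node's core count, then
    # replace the shell with the real application.
    OMP_NUM_THREADS=$(grep -c '^processor' /proc/cpuinfo)   # or: $(nproc)
    export OMP_NUM_THREADS
    exec /path/to/app "$@"   # "$@" forwards the arguments intact

You would then launch it with one process per node, e.g.
mpirun -np <nnodes> -npernode 1 ./wrapper.sh <args>.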


Noam



Re: [OMPI users] Help : Slowness with OpenMPI (1.8.1) and Numpy

2015-06-12 Thread Ralph Castain
Is this threaded code? If so, you should add --bind-to none to your 1.8 series 
command line.
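
Open MPI 1.8 binds processes to cores by default (a change from the
1.5/1.6 series), so a multi-threaded BLAS inside numpy can end up
time-slicing on a single core. Rerunning the quoted test without
binding would look like this (same command as in the report below):

    $ time /usr/lib64/openmpi/bin/mpirun --bind-to none -np 1 python -c \
        'import numpy; numpy.linalg.svd(numpy.eye(1000))'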


> On Jun 12, 2015, at 7:58 AM, kishor sharma  wrote:
> 
> Hi There,
> 
> 
> 
> I am facing slowness when running numpy code under mpirun with openmpi 
> version 1.8.1.
> 
> 
> 
> With Open MPI (1.8.1)
> 
> -
> 
> > /usr/lib64/openmpi/bin/mpirun -version
> 
> mpirun (Open MPI) 1.8.1
> 
>  
> Report bugs to http://www.open-mpi.org/community/help/ 
> 
> >  time /usr/lib64/openmpi/bin/mpirun -np 1 python -c 'import numpy; 
> > numpy.linalg.svd(numpy.eye(1000))'
> 
> real 23.75
> 
> user 6.95
> 
> sys 16.68
> 
> > 
> 
> 
> 
> 
> 
> With Open MPI (1.5.4):
> 
> -
> 
> > /usr/lib64/openmpi/bin/mpirun -version
> 
> mpirun (Open MPI) 1.5.4
> 
>  
> Report bugs to http://www.open-mpi.org/community/help/ 
> 
> > time /usr/lib64/openmpi/bin/mpirun -np 1 python -c 'import numpy; 
> > numpy.linalg.svd(numpy.eye(1000))'
> 
> real 1.35
> 
> user 2.11
> 
> sys 0.71
> 
> >
> 
> 
> 
> Do you guys have any idea why the above function is 10-15x slower with 
> openmpi version 1.8.1?
> 
> 
> 
> Thanks,
> 
> Kishor
> 



[OMPI users] OpenMPI (1.8.3) and environment variable export

2015-06-12 Thread borno_borno
Hey there, 

I know that variable export in general can be done with the -x option of 
mpirun, but I guess that won't help me.
More precisely, I have a heterogeneous cluster (the number of cores per CPU 
varies) and one process on each node. The application I need to launch uses hybrid 
MPI+OpenMP parallelization, and I have to set the OMP_NUM_THREADS variable so that 
it matches the number of cores on each node. 

I cannot modify the application to get the number of cores from within the 
process; I can only launch it.

Is there any way to do this?

Best regards,

Kurt


[OMPI users] Help : Slowness with OpenMPI (1.8.1) and Numpy

2015-06-12 Thread kishor sharma
Hi There,


I am facing slowness when running numpy code under mpirun with openmpi
version 1.8.1.


With Open MPI (1.8.1)

-

> /usr/lib64/openmpi/bin/mpirun -version

mpirun (Open MPI) 1.8.1



Report bugs to http://www.open-mpi.org/community/help/

>  time /usr/lib64/openmpi/bin/mpirun -np 1 python -c 'import numpy;
numpy.linalg.svd(numpy.eye(1000))'

real 23.75

user 6.95

sys 16.68

>



With Open MPI (1.5.4):

-

> /usr/lib64/openmpi/bin/mpirun -version

mpirun (Open MPI) 1.5.4



Report bugs to http://www.open-mpi.org/community/help/

> time /usr/lib64/openmpi/bin/mpirun -np 1 python -c 'import numpy;
numpy.linalg.svd(numpy.eye(1000))'

real 1.35

user 2.11

sys 0.71

>


Do you guys have any idea why the above function is 10-15x slower with openmpi
version 1.8.1?


Thanks,

Kishor



Re: [OMPI users] Undefined ompi_mpi_info_null issue

2015-06-12 Thread Ray Sheppard

Thanks again Gilles,
  You might be on to something.  Dynamic libraries sound like something 
a Python developer might love (no offense intended to the stereotype). 
It would also explain why the build went smoothly but the test run 
crashed.  I am going to try putting an RPATH variable in the environment 
and rebuilding.

  Ray

On 6/12/2015 7:15 AM, Gilles Gouaillardet wrote:

Ray,

one possibility is that one of the loaded libraries was built with -rpath, and 
this causes the mess


another option is that you need to link _errors.so against libmpi.so

Cheers,

Gilles

On Friday, June 12, 2015, Ray Sheppard wrote:


Hi Gilles,
  Thanks for the reply. I completely forgot that it lived in the main
library.  ldd doesn't show that it read my LD_LIBRARY_PATH (I also
push out an LPATH variable just for fun).  I force modules to
echo when users initialize them.  You can see OpenMPI was
visible to H5py.  Now I wonder why it didn't pick it up...  Thanks
again.
  Ray
GMP arithmetic library version 5.1.1 loaded.
MPFR version 3.1.1 loaded.
Mpc version 1.0.1 loaded.
gcc version 4.9.2 loaded.
Moab Workload Manager scheduling and management system version
7.1.1 loaded.
Python programming language version 2.7.3 loaded.
Perl programming language version 5.16.2 loaded.
Intel compiler suite version 15.0.1 loaded.
OpenMPI libraries (Intel) version 1.8.4 loaded.
TotalView version 8.15.0-15 loaded.
FFTW (Intel, Double precision) version 3.3.3 loaded.
hdf4 version 4.2.10 loaded.
Curl version 7.28.1 loaded.
HDF5 (MPI) version 1.8.14 loaded.
netcdf-c version 4.3.3 loaded.
netcdf-fortran version 4.4.1 loaded.
Gnuplot graphing utility version 4.6.1 loaded.
[rsheppar@h2 ~]$ ldd

/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so
linux-vdso.so.1 =>  (0x7fff39db7000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7facfe887000)
libc.so.6 => /lib64/libc.so.6 (0x7facfe4f3000)
/lib64/ld-linux-x86-64.so.2 (0x7facff049000)


On 6/11/2015 8:09 PM, Gilles Gouaillardet wrote:

Ray,

this symbol is defined in libmpi.so.

can you run
ldd

/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so
and make sure this is linked with openmpi 1.8.4 ?

Cheers,

Gilles

On 6/12/2015 1:29 AM, Ray Sheppard wrote:

Hi List,
  I know I saw this issue years ago but have forgotten the
details. I looked through old posts but only found about half a
dozen pertaining to WinDoze.  I am trying to build a Python
(2.7.3) extension (h5py) that calls HDF5 (1.8.14).  I built both
the OpenMPI (1.8.4) and the HDF5 modules so I know they are
consistent.  All goes well until I try to run the tests. Then I
get:

ImportError:

/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so:
undefined symbol: ompi_mpi_info_null

I am not sure I completely trust the h5py package, but I don't
have a good reason for that suspicion.  I would
appreciate it if someone could explain where ompi_mpi_info_null
is defined and possibly a way to tell Python about it.  Thanks!
Ray













Re: [OMPI users] Looking for LAM-MPI sources to create a mirror

2015-06-12 Thread Cian Davis
Hi Dave,
I use Debian/Ubuntu by default and my first approach (a number of years ago
at this stage) was to install from apt. However, if memory serves, I had
difficulty getting the packaged LAM-MPI to work with the FDS5 software at
the time.

Obviously, this is specific to the FDS5 software.

My intention with regard to requesting sources was to create a mirror so
that people who have to use LAM-MPI (e.g. because their applications were
statically compiled against them) would still have some way to get LAM-MPI
instead of scouring the recesses of the web. Having the sources available
gives the widest possible flexibility (instead of needing a
Debian/FC/CentOS/RedHat system).

I just assumed someone here would have a private copy of the LAM-MPI site
and I was going to host them publicly just in case the wider community
needed them.

Regards,
Cian


On 11 June 2015 at 17:08, Dave Love  wrote:

> "Jeff Squyres (jsquyres)"  writes:
>
> > Sadly, I have minimal experience with .debs... if someone would
> contribute the necessary packaging, we could talk about hosting a source
> deb on the main Open MPI site.
>
> What's wrong with the Debian packages (if you really want LAM)?
>
>   $ apt-cache show lam-runtime
>   Package: lam-runtime
>   Source: lam
>   Version: 7.1.4-3.1
>   Installed-Size: 1363
>   Maintainer: Camm Maguire 
>   Architecture: amd64
>   Replaces: lam, lam1-runtime, lam4-dev (<= 7.1.2-2)
>   Depends: libc6 (>= 2.14), libgcc1 (>= 1:4.1.1), liblam4, libstdc++6 (>=
> 4.4.0), openssh-client | ssh-client | rsh-client, openssh-server |
> ssh-server | rsh-server
>   Conflicts: lam, lam1-runtime, lam4-dev (<= 7.1.2-2)
>   Description-en: LAM runtime environment for executing parallel programs
>LAM (Local Area Multicomputer) is an open source implementation of the
>Message Passing Interface (MPI) standard.
>.
>Some enhancements in LAM 6.3 are:
> o Added the MPI-2 C++ bindings package (chapter 10 from the MPI-2
> standard) from the Laboratory for Scientific Computing at the
> University of Notre Dame.
> o Added ROMIO MPI I/O package (chapter 9 from the MPI-2 standard)
> from the Argonne National Laboratory.
> o Pseudo-tty support for remote IO (e.g., line buffered output).
> o Ability to pass environment variables through mpirun.
> o Ability to mpirun shell scripts/debuggers/etc. (that eventually
> run LAM/MPI programs).
> o Ability to execute non-MPI programs across the multicomputer.
> o Added configurable ability to zero-fill internal LAM buffers
> before they are used (for development tools such as Purify).
> o Greatly expanded error messages; provided for customizable
> local help files.
> o Expanded and updated documentation.
> o Various bug fixes and minor enhancements.
>   Description-md5: 070247a6e39a81b5bb5c1009c75deb58
>   Tag: devel::runtime, implemented-in::fortran, network::configuration,
>role::program, scope::utility
>   Section: utils
>   Priority: extra
>   Filename: pool/main/l/lam/lam-runtime_7.1.4-3.1_amd64.deb
>   Size: 961826
>   MD5sum: 7d21dc63336ea5ba7f0eff3354dcc7cb
>   SHA1: fd7f2941cd3798373fa488235e99a2d9a2d75519
>   SHA256: 5993995b93fe960d58f4fdd55e156a6732aaad3815fe8070dabf1f7c8de17ecd
>
> The LAM site housed one or two things other than LAM which might still
> be of interest, but I don't remember what off-hand.
>
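
For what it's worth, the Debian source package itself can be fetched
directly (a sketch; it assumes a deb-src entry for a release that still
carries lam):

    $ apt-get source lam

That pulls down the upstream tarball together with the Debian patches,
which would be one way to seed such a mirror.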


Re: [OMPI users] Undefined ompi_mpi_info_null issue

2015-06-12 Thread Gilles Gouaillardet
Ray,

one possibility is that one of the loaded libraries was built with -rpath and
this causes the mess

another option is that you need to link _errors.so against libmpi.so

Cheers,

Gilles

On Friday, June 12, 2015, Ray Sheppard  wrote:

>  Hi Gilles,
>   Thanks for the reply. I completely forgot that it lived in the main
> library.  ldd doesn't show that it read my LD_LIBRARY_PATH (I also push out
> an LPATH variable just for fun).  I force modules to echo when users
> initialize them.  You can see OpenMPI was visible to H5py.  Now I wonder
> why it didn't pick it up...  Thanks again.
>   Ray
> GMP arithmetic library version 5.1.1 loaded.
> MPFR version 3.1.1 loaded.
> Mpc version 1.0.1 loaded.
> gcc version 4.9.2 loaded.
> Moab Workload Manager scheduling and management system version 7.1.1
> loaded.
> Python programming language version 2.7.3 loaded.
> Perl programming language version 5.16.2 loaded.
> Intel compiler suite version 15.0.1 loaded.
> OpenMPI libraries (Intel) version 1.8.4 loaded.
> TotalView version 8.15.0-15 loaded.
> FFTW (Intel, Double precision) version 3.3.3 loaded.
> hdf4 version 4.2.10 loaded.
> Curl version 7.28.1 loaded.
> HDF5 (MPI) version 1.8.14 loaded.
> netcdf-c version 4.3.3 loaded.
> netcdf-fortran version 4.4.1 loaded.
> Gnuplot graphing utility version 4.6.1 loaded.
> [rsheppar@h2 ~]$ ldd
> /N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so
> linux-vdso.so.1 =>  (0x7fff39db7000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7facfe887000)
> libc.so.6 => /lib64/libc.so.6 (0x7facfe4f3000)
> /lib64/ld-linux-x86-64.so.2 (0x7facff049000)
>
>
> On 6/11/2015 8:09 PM, Gilles Gouaillardet wrote:
>
> Ray,
>
> this symbol is defined in libmpi.so.
>
> can you run
> ldd
> /N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so
> and make sure this is linked with openmpi 1.8.4 ?
>
> Cheers,
>
> Gilles
>
> On 6/12/2015 1:29 AM, Ray Sheppard wrote:
>
> Hi List,
>   I know I saw this issue years ago but have forgotten the details. I
> looked through old posts but only found about half a dozen pertaining to
> WinDoze.  I am trying to build a Python (2.7.3) extension (h5py) that calls
> HDF5 (1.8.14).  I built both the OpenMPI (1.8.4) and the HDF5 modules so I
> know they are consistent.  All goes well until I try to run the tests. Then
> I get:
>
> ImportError:
> /N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so:
> undefined symbol: ompi_mpi_info_null
>
> I am not sure I completely trust the h5py package, but I don't have a
> good reason for that suspicion.  I would appreciate it if someone could
> explain where ompi_mpi_info_null is defined and possibly a way to tell
> Python about it.  Thanks!
> Ray
>
>
>
>
>
>
>


Re: [OMPI users] Undefined ompi_mpi_info_null issue

2015-06-12 Thread Ray Sheppard

Hi Gilles,
  Thanks for the reply. I completely forgot that it lived in the main 
library.  ldd doesn't show that it read my LD_LIBRARY_PATH (I also push 
out an LPATH variable just for fun).  I force modules to echo when 
users initialize them.  You can see OpenMPI was visible to H5py.  Now I 
wonder why it didn't pick it up...  Thanks again.

  Ray
GMP arithmetic library version 5.1.1 loaded.
MPFR version 3.1.1 loaded.
Mpc version 1.0.1 loaded.
gcc version 4.9.2 loaded.
Moab Workload Manager scheduling and management system version 7.1.1 loaded.
Python programming language version 2.7.3 loaded.
Perl programming language version 5.16.2 loaded.
Intel compiler suite version 15.0.1 loaded.
OpenMPI libraries (Intel) version 1.8.4 loaded.
TotalView version 8.15.0-15 loaded.
FFTW (Intel, Double precision) version 3.3.3 loaded.
hdf4 version 4.2.10 loaded.
Curl version 7.28.1 loaded.
HDF5 (MPI) version 1.8.14 loaded.
netcdf-c version 4.3.3 loaded.
netcdf-fortran version 4.4.1 loaded.
Gnuplot graphing utility version 4.6.1 loaded.
[rsheppar@h2 ~]$ ldd 
/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so

linux-vdso.so.1 =>  (0x7fff39db7000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7facfe887000)
libc.so.6 => /lib64/libc.so.6 (0x7facfe4f3000)
/lib64/ld-linux-x86-64.so.2 (0x7facff049000)


On 6/11/2015 8:09 PM, Gilles Gouaillardet wrote:

Ray,

this symbol is defined in libmpi.so.

can you run
ldd 
/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so

and make sure this is linked with openmpi 1.8.4 ?

Cheers,

Gilles

On 6/12/2015 1:29 AM, Ray Sheppard wrote:

Hi List,
  I know I saw this issue years ago but have forgotten the details. I 
looked through old posts but only found about half a dozen pertaining 
to WinDoze.  I am trying to build a Python (2.7.3) extension (h5py) 
that calls HDF5 (1.8.14).  I built both the OpenMPI (1.8.4) and the 
HDF5 modules so I know they are consistent.  All goes well until I 
try to run the tests. Then I get:


ImportError: 
/N/dc2/projects/ray/quarry/h5py/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/_errors.so: 
undefined symbol: ompi_mpi_info_null


I am not sure I completely trust the h5py package, but I don't have a 
good reason for that suspicion.  I would appreciate it if 
someone could explain where ompi_mpi_info_null is defined and 
possibly a way to tell Python about it.  Thanks!

Ray




