Thank you Jeff Squyres,
I've installed Open MPI v1.3 and now it's OK.
Kind regards,


Jeff Squyres wrote:
The warning is a side-effect of how we're probing for OpenFabrics-capable hardware (e.g., IB HCAs). While annoying, the warning is harmless -- it's just noisily indicating that you seem to have no OpenFabrics-capable hardware.

We made the probe a bit smarter in Open MPI v1.3. If you upgrade to OMPI v1.3, those warnings should go away.
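In the meantime, a possible workaround on v1.2 is to exclude those transports explicitly at launch time. This is a sketch using Open MPI's "^" exclusion syntax for the btl MCA parameter, with the component names openib and udapl taken from your output below:

mpirun -mca btl ^openib,udapl -np 4 mpi-test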


On Feb 12, 2009, at 1:25 AM, Nicolas Moulin wrote:

Hello,
  I use Open MPI. I have installed it in the directory:
/usr/lib64/openmpi/1.2.5-gcc/

If I execute a test with Open MPI (for example, helloworld!!!), I get the
following problem:
[nmoulin@clusterdell ~/mpi-test]$ mpirun -np 4 mpi-test
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,1]: OpenIB on host node01 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,1]: uDAPL on host node01 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,2]: OpenIB on host node02 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,2]: uDAPL on host node02 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,0]: OpenIB on host node01 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,0]: uDAPL on host node01 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,3]: OpenIB on host node02 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,3]: uDAPL on host node02 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello world! from processor 0 out of 4
Hello world! from processor 1 out of 4
Hello world! from processor 2 out of 4
Hello world! from processor 3 out of 4
[nmoulin@clusterdell ~/mpi-test]$

So the job is executed, but there are some errors. Now, if I execute the
same job with the MCA parameters, everything is OK... I think.
[nmoulin@clusterdell ~/mpi-test]$ mpirun -mca btl tcp,self -np 4 mpi-test
Hello world! from processor 0 out of 4
Hello world! from processor 1 out of 4
Hello world! from processor 3 out of 4
Hello world! from processor 2 out of 4
[nmoulin@clusterdell ~/mpi-test]$

I've tried to set the MCA parameters in the file
/usr/lib64/openmpi/1.2.5-gcc/etc/openmpi-mca-params.conf
but they don't seem to be taken into account....
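For reference, what I put in that file is the single line below (assuming it takes one "name = value" pair per line, mirroring the command-line option; I may have the format wrong):

btl = tcp,self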
Can you help me with that?
Kind regards,






--
_______________________________________________________________
Nicolas MOULIN  (office J3-19)
Research Engineer
Centre SMS / Département MPI
UMR CNRS 5146 - PECM
Ecole Nationale Supérieure des Mines de Saint-Etienne
158, cours Fauriel - 42023 Saint-Etienne cedex 02 - France
Office: +33 4 77 42 02 41
Fax:    +33 4 77 42 02 49
e-mail: nicolas.mou...@emse.fr
http://www.emse.fr/~nmoulin
_______________________________________________________________
