The warning is a side-effect of how we're probing for OpenFabrics-capable hardware (e.g., IB HCAs). While annoying, the warning is harmless -- it's just noisily indicating that you seem to have no OpenFabrics-capable hardware.

We made the probe a bit smarter in Open MPI v1.3. If you upgrade to OMPI v1.3, those warnings should go away.
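In the meantime, a possible workaround (just a sketch of the idea, not something I've verified on your setup) is to keep Open MPI from probing those components at all by excluding the openib and udapl BTLs on the command line, for example:

  mpirun --mca btl ^openib,udapl -np 4 mpi-test

The "^" prefix means "all BTL components except these", so the TCP, shared-memory, and self transports stay available -- the same effect as the explicit "btl tcp,self" you used below, but without having to list every transport you do want.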


On Feb 12, 2009, at 1:25 AM, Nicolas Moulin wrote:

Hello,
  I use Open MPI. I have installed it in the directory:
/usr/lib64/openmpi/1.2.5-gcc/

If I run a test with Open MPI (for example, helloworld) I get the
following problem:
/*[nmoulin@clusterdell ~/mpi-test]$ mpirun -np 4 mpi-test
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,1]: OpenIB on host node01 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,1]: uDAPL on host node01 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,2]: OpenIB on host node02 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,2]: uDAPL on host node02 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,0]: OpenIB on host node01 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,0]: uDAPL on host node01 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,1,3]: OpenIB on host node02 was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,3]: uDAPL on host node02 was unable to find any NICs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello world! from processor 0 out of 4
Hello world! from processor 1 out of 4
Hello world! from processor 2 out of 4
Hello world! from processor 3 out of 4
[nmoulin@clusterdell ~/mpi-test]$ */

So the job is executed, but there are some errors. Now, if I run the
same job with the MCA parameters, everything is OK... I think.
/*[nmoulin@clusterdell ~/mpi-test]$ mpirun -mca btl tcp,self -np 4 mpi-test
Hello world! from processor 0 out of 4
Hello world! from processor 1 out of 4
Hello world! from processor 3 out of 4
Hello world! from processor 2 out of 4
[nmoulin@clusterdell ~/mpi-test]$*/

I've tried to set the MCA parameters in the file
/usr/lib64/openmpi/1.2.5-gcc/etc/openmpi-mca-params.conf
but they don't seem to be taken into account...
Can you help me with that?
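For reference, the kind of line I have been adding (assuming the file takes one "name = value" parameter per line) is something like:

  # same setting as on the command line -- not sure this syntax is right
  btl = tcp,self

but it does not seem to make any difference.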
Kind regards,




--
Jeff Squyres
Cisco Systems
