On 22.04.2014 14:02, Jed Brown wrote:
> Niklas Fischer <[email protected]> writes:
>> Thank you for your inputs. Unfortunately, MPI does not seem to be
>> the issue here. The attachment contains a simple MPI hello world
>> program which runs flawlessly (I will append the output to this
>> mail), and I have not encountered any problems with other MPI
>> programs. My question still stands.
> The output below is amazingly ugly and concerning. Matt answered your
> question, but in your build system, you should not be adding
> -DPETSC_CLANGUAGE_CXX; that is a configure-time option, not something
> that your code can change arbitrarily. Your makefile should include
> $PETSC_DIR/conf/variables (and conf/rules if you want, but not if you
> want to write your own rules).
Thank you for your advice. As I stated earlier, this is just a test case; I normally use (your) CMake module when building with PETSc.
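
For reference, a minimal makefile along the lines you suggest might look like the sketch below (assuming PETSC_DIR and PETSC_ARCH are set in the environment, and the source file is mpitest.c as in the run further down; this is only a sketch, not my actual build setup):

    # Use PETSc's own compiler flags and rules instead of hard-coding
    # configure-time options such as -DPETSC_CLANGUAGE_CXX.
    include ${PETSC_DIR}/conf/variables
    include ${PETSC_DIR}/conf/rules

    mpitest: mpitest.o
            ${CLINKER} -o $@ $< ${PETSC_LIB}

Here CLINKER and PETSC_LIB come from conf/variables, and conf/rules supplies the compile rule for mpitest.o.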

mpirun -np 2 ./mpitest

librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
--------------------------------------------------------------------------
[[44086,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
    Host: dornroeschen.igpm.rwth-aachen.de

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out
of 2 processors
Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out
of 2 processors
[dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help
message help-mpi-btl-base.txt / btl:no-nics
[dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter
"orte_base_help_aggregate" to 0 to see all help / error messages
About the MPI output: I have asked the system administrator about this, and he is of the opinion that everything is as it should be. Initially, I found the messages concerning as well, but all they really say is that Open MPI could not find an InfiniBand/RDMA interface and therefore falls back to a slower transport for exchanging messages.
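
If the warnings themselves are a nuisance, Open MPI can (as far as I understand) be told to skip the OpenFabrics transport explicitly, which should silence them (the ^ excludes the listed component):

    mpirun --mca btl ^openib -np 2 ./mpitest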

Regards,
Niklas Fischer
