Niklas Fischer <[email protected]> writes:

> Hello,
>
> I have attached a small test case for a problem I am experiencing. What
> this dummy program does is it reads a vector and a matrix from a text
> file and then solves Ax=b. The same data is available in two forms:
> - everything is in one file (matops.s.0 and vops.s.0)
> - the matrix and vector are split between processes (matops.0,
>   matops.1, vops.0, vops.1)
>
> The serial version of the program works perfectly fine but unfortunately
> errors occur when running the parallel version:
>
> make && mpirun -n 2 a.out matops vops
>
> mpic++ -DPETSC_CLANGUAGE_CXX -isystem
> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem
> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall
> -Wpedantic -std=c++11 -L
> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc
> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by
> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so,
> may conflict with libmpi_cxx.so.1
> /usr/bin/ld: warning: libmpi.so.0, needed by
> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so,
> may conflict with libmpi.so.1
> librdmacm: couldn't read ABI version.
> librdmacm: assuming: 4
> CMA: unable to get RDMA device list
> --------------------------------------------------------------------------
> [[43019,1],0]: A high-performance Open MPI point-to-point messaging module
> was unable to find any relevant network interfaces:
>
> Module: OpenFabrics (openib)
> Host: dornroeschen.igpm.rwth-aachen.de
> CMA: unable to get RDMA device list
It looks like your MPI is either broken or some of the code linked into your application was compiled against a different MPI or a different version of it; the linker warnings above (libmpi.so.0 vs. libmpi.so.1) point in that direction. Make sure you can compile and run simple MPI programs in parallel.
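A minimal sanity check along those lines might look like the sketch below, built with the same mpic++/mpirun wrappers that produced the output above (the file name hello.cpp is just an example). If even this prints the warnings about conflicting libmpi versions, the problem is in the MPI installation rather than in PETSc or your program.

```cpp
// hello.cpp - minimal MPI smoke test.
// Build and run (assuming the wrappers from the transcript above):
//   mpic++ hello.cpp -o hello
//   mpirun -n 2 ./hello
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's rank
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

    // With "mpirun -n 2" this should print two lines, one per rank.
    std::printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

If this compiles cleanly and both ranks print, the next step would be checking which MPI libraries the PETSc application actually links (for example with ldd) to find the mismatched version.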
