The PETSc make test runs ex19 with 1 and 2 MPI processes and ex5f with 1 MPI process successfully.
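For reference, these are the commands I ran (the same ones from your message below; arch-linux2-c-opt is the PETSC_ARCH that configure reported here):

  cd /home/eftang/fem_software/petsc-3.2-p5
  make PETSC_DIR=/home/eftang/fem_software/petsc-3.2-p5 PETSC_ARCH=arch-linux2-c-opt test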
mpicc -show returns:

  gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath,/usr/local/lib -lmpich -lopa -lmpl -lrt -lpthread

Thanks again!

Jens

On 04/17/2012 11:23 PM, Dmitry Karpeev wrote:
> The PETSc configuration seems to be fine.
> Are you able to run PETSc tests?
> cd /home/eftang/fem_software/petsc-3.2-p5
> make PETSC_DIR=/home/eftang/fem_software/petsc-3.2-p5 PETSC_ARCH=arch-linux2-c-opt test
>
> The compiler that gets configured by PETSc is a wrapper C compiler
> inherited from mpich.
> Check to see what shared linker paths it really includes:
> /home/eftang/fem_software/mpich2-install/bin/mpicc -show
>
> It's possible that libMesh overrides compilers, though.
> Since libMesh needs a C++ compiler and in your case PETSc doesn't
> configure one, I'm not sure what libMesh ends up using to compile
> its C++ code. If that's the problem, you might want to reconfigure
> PETSc with --with-clanguage=C++
>
> Dmitry.
>
>
> On Tue, Apr 17, 2012 at 9:47 AM, John Peterson <[email protected]> wrote:
>
> On Mon, Apr 16, 2012 at 5:45 PM, Jens Lohne Eftang <[email protected]> wrote:
> > On 04/16/2012 07:31 PM, John Peterson wrote:
> >>
> >> On Mon, Apr 16, 2012 at 5:23 PM, Jens Lohne Eftang <[email protected]> wrote:
> >>>
> >>> Thanks for your reply.
> >>>
> >>> The libmesh_LIBS output has references to mpi, -lmpich and -lmpichf90.
> >>> Would it help to post the whole output?
> >>
> >> Are they preceded by something like -Wl,-rpath, in the libmesh_LIBS
> >> output?
> >>
> >> Perhaps something like:
> >>
> >> -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
> >
> > Yes, for example ...
> > -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
> > -L/home/eftang/fem_software/mpich2-install/lib
> > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.6
> > -L/usr/lib/gcc/x86_64-linux-gnu/4.4.6
> > -Wl,-rpath,/usr/lib/x86_64-linux-gnu
> > -L/usr/lib/x86_64-linux-gnu
> > -Wl,-rpath,/lib/x86_64-linux-gnu
> > -L/lib/x86_64-linux-gnu
> > -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -lmpichf90 -lgfortran ...
> >
> > It's a rather long output, though...
> >
> >> ?
> >>
> >> What is the output if you run 'nm' on the MPI shared libraries of your
> >> system, and grep for mpi_bcast_ ?
> >
> > nm * | grep mpi_bcast_ in the mpich2-install/lib folder returns
> >
> > 0000000000000000 T mpi_bcast_
> > 0000000000000000 W mpi_bcast__
> > 00000000000164f0 T mpi_bcast_
> > 00000000000164f0 W mpi_bcast__
> > 00000000000164f0 T mpi_bcast_
>
> Hmm... unfortunately I don't see anything that's obviously wrong yet.
>
> Is there any chance you have changed/upgraded compilers between the
> time you built mpich/petsc and the time you tried to build libmesh?
>
> One other thing you might try: have PETSc download mpich along with
> everything else instead of using your existing mpich install...
>
> --
> John
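In case it helps narrow down the linking question, below is a minimal sketch of a standalone test program (mpi_check.c is just a hypothetical name, not anything from libMesh or PETSc) that I can compile with the same mpicc wrapper to confirm the C-side MPI linkage works. Note that the mpi_bcast_ symbols in the nm output above are the Fortran bindings; the C binding is MPI_Bcast.

  /* mpi_check.c -- minimal MPI sanity check (illustrative only).
     Compile with the mpich wrapper, e.g.:
       /home/eftang/fem_software/mpich2-install/bin/mpicc mpi_check.c -o mpi_check
     Run with, e.g.:
       mpiexec -n 2 ./mpi_check */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, value = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0)
          value = 42;
      /* Broadcast from rank 0; this is the C binding of the same routine
         whose Fortran symbols (mpi_bcast_) appear in the nm output above. */
      MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
      printf("rank %d sees value %d\n", rank, value);
      MPI_Finalize();
      return 0;
  }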
