On Wed, Apr 18, 2012 at 11:47 AM, Jens Lohne Eftang <[email protected]> wrote:

>  PETSc's make test runs ex19 with 1 and 2 MPI processes and ex5f with 1 MPI
> process successfully.
>
I'm guessing the problem is with the way libMesh uses PETSc's compilers.
I'm not sure exactly how libMesh deals with it when PETSc doesn't define a
C++ compiler.
Perhaps then an mpicxx from another MPI install ends up being used?
Maybe John can answer that.

Without digging deep into libMesh, I would recommend reconfiguring PETSc with
--with-clanguage=C++ to ensure that PETSc configures a C++ compiler.
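For instance, roughly (paths taken from your earlier messages; keep whatever
other configure options you already use):

  cd /home/eftang/fem_software/petsc-3.2-p5
  ./configure --with-mpi-dir=/home/eftang/fem_software/mpich2-install --with-clanguage=C++

followed by the make command that configure prints at the end. Note that this
creates a new PETSC_ARCH, so libMesh would then need to be pointed at that
arch rather than arch-linux2-c-opt.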


> mpicc -show returns
>
In light of what I said above this may be irrelevant (since we need to
figure out which C++ (not C) compiler libMesh uses),
but still:  which mpicc is this?  The fact that it links executables
against a different mpich than the one you built
makes me suspect that this isn't the right mpicc (i.e., not the one PETSc
was built with).
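One quick way to check is to compare the wrapper on your PATH against the one
PETSc was configured with, e.g.:

  which mpicc
  /home/eftang/fem_software/mpich2-install/bin/mpicc -show
  grep '^CC ' /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/conf/petscvariables

(the petscvariables location is my guess based on your PETSC_DIR/PETSC_ARCH;
the point is just to see which mpicc PETSc actually recorded as its C
compiler). If 'which mpicc' turns out to point somewhere under /usr/local
rather than your mpich2-install directory, that would explain the mismatch.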

Thanks.
Dmitry.

>
> gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath,/usr/local/lib
> -lmpich -lopa -lmpl -lrt -lpthread
>
> Thanks again!
>
> Jens
>
>
>
> On 04/17/2012 11:23 PM, Dmitry Karpeev wrote:
>
> The PETSc configuration seems to be fine.
> Are you able to run PETSc tests?
> cd /home/eftang/fem_software/petsc-3.2-p5
> make PETSC_DIR=/home/eftang/fem_software/petsc-3.2-p5 PETSC_ARCH=arch-linux2-c-opt test
>
>  The compiler that gets configured by PETSc is a wrapper C compiler
> inherited from mpich.
> Check to see what shared linker paths it really includes:
> /home/eftang/fem_software/mpich2-install/bin/mpicc -show
>
>  It's possible that libMesh overrides compilers, though.
> Since libMesh needs a C++ compiler and in your case PETSc doesn't
> configure one,
> I'm not sure what libMesh ends up using to compile its C++ code.
> If that's the problem, you might want to reconfigure PETSc with
> --with-clanguage=C++
>
>  Dmitry.
>
>
>
>
>
> On Tue, Apr 17, 2012 at 9:47 AM, John Peterson <[email protected]> wrote:
>
>>  On Mon, Apr 16, 2012 at 5:45 PM, Jens Lohne Eftang <[email protected]>
>> wrote:
>> > On 04/16/2012 07:31 PM, John Peterson wrote:
>> >>
>> >> On Mon, Apr 16, 2012 at 5:23 PM, Jens Lohne Eftang <[email protected]>
>> >>  wrote:
>> >>>
>> >>> Thanks for your reply.
>> >>>
>> >>> the libmesh_LIBS output has references to mpi, -lmpich and -lmpichf90.
>> >>> Would
>> >>> it help to post the whole output?
>> >>
>> >> Are they preceded by something like -Wl,-rpath, in the libmesh_LIBS
>> >> output?
>> >>
>> >> Perhaps something like:
>> >>
>> >> -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
>> >
>> > Yes, for example ...
>> >  -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
>> > -L/home/eftang/fem_software/mpich2-install/lib
>> > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.6
>> > -L/usr/lib/gcc/x86_64-linux-gnu/4.4.6
>> -Wl,-rpath,/usr/lib/x86_64-linux-gnu
>> > -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu
>> > -L/lib/x86_64-linux-gnu -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s
>> > -lmpichf90 -lgfortran ...
>> >
>> > it's a rather long output though...
>> >
>> >
>> >> ?
>> >>
>> >> What is the output if you run 'nm' on the MPI shared libraries of your
>> >> system, and grep for mpi_bcast_ ?
>> >
>> > nm * | grep mpi_bcast_ in the mpich2-install/lib folder returns
>> >
>> > 0000000000000000 T mpi_bcast_
>> > 0000000000000000 W mpi_bcast__
>> > 00000000000164f0 T mpi_bcast_
>> > 00000000000164f0 W mpi_bcast__
>> > 00000000000164f0 T mpi_bcast_
>>
>>  Hmm... unfortunately I don't see anything that's obviously wrong yet.
>>
>> Is there any chance you have changed/upgraded compilers between the
>> time you built mpich/petsc and the time you tried to build
>> libmesh?
>>
>> One other thing you might try: have petsc download mpich along with
>> everything else instead of using your existing mpich install...
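>>
>> Something along these lines, off the top of my head (untested; keep your
>> other configure options):
>>
>>   ./configure --download-mpich
>>
>> PETSc should then build its own mpich under the PETSC_ARCH directory and
>> use that build's compiler wrappers.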
>>
>> --
>> John
>>
>>
>
>
>