Hi there Wolfgang,

You are dead right about MPI initialization: calling PetscInitialize
and PetscFinalize as in step-17 removes the error, as does calling
MPI_Init (&argc, &argv) and MPI_Finalize() directly. The drawbacks to
this are that every non-MPI example needs to be modified by hand to
include this if Trilinos is used, and that even the simple, serial
programs are then running under MPI.
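
For reference, the direct workaround is just a couple of lines in
main(). Here is a minimal sketch of what I mean (the step-17 variant
uses PetscInitialize/PetscFinalize instead):

  #include <mpi.h>

  int main (int argc, char **argv)
  {
    MPI_Init (&argc, &argv);   // before any Trilinos objects are created

    // ... the unmodified body of the tutorial program ...

    MPI_Finalize ();           // after the last Trilinos object is destroyed
    return 0;
  }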

In terms of what ConstraintMatrix is doing with MPI, I have looked at
the code and fortunately only the constructor is called, and it is a
very simple constructor. I notice that, new in 6.3-pre,
lac/trilinos_vector.h is included and a private member
mutable TrilinosWrappers::MPI::Vector vec_distribute;
is declared. The constructor does not explicitly do anything with
vec_distribute, so its default constructor is called, and that is
probably where the MPI call comes from.
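
To make that concrete, here is roughly the chain I suspect (a sketch of
my reading, not the actual deal.II source): the default constructor of
the wrapper presumably builds an empty Epetra_Map over an
Epetra_MpiComm, and constructing that communicator is what touches MPI
before MPI_Init has run.

  #include <mpi.h>
  #include <Epetra_Map.h>
  #include <Epetra_MpiComm.h>
  #include <Epetra_FEVector.h>

  // Hypothetical stand-in for the wrapper's default constructor.
  struct VectorSketch
  {
    VectorSketch ()
      : map (0, 0, Epetra_MpiComm (MPI_COMM_SELF)),  // MPI is touched here
        vector (map)
    {}

    Epetra_Map      map;
    Epetra_FEVector vector;
  };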

Looking into where vec_distribute is used, it appears only in the
method "ConstraintMatrix::distribute (TrilinosWrappers::MPI::Vector
&vec) const". The method first seems to run a setup procedure on
vec_distribute, which is presumably expensive, otherwise vec_distribute
could simply be a local variable rather than a class member. Avoiding
the call to the default constructor of vec_distribute seems too
difficult in ConstraintMatrix, and on the other hand it doesn't seem
inappropriate for ConstraintMatrix to call the default constructor even
if MPI isn't used, since the object produced by the default constructor
is not intended to be used without a reinit call anyway.
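
For what it's worth, the pattern I read out of that method looks
roughly like this (a sketch of my understanding, not the literal
source); the point is only that the member is set up lazily and reused
across calls:

  void ConstraintMatrix::distribute (TrilinosWrappers::MPI::Vector &vec) const
  {
    // (Re)build the cached vector with a layout matching vec -- this is
    // the presumably expensive setup step that caching avoids repeating.
    if (vec_distribute.size() != vec.size())
      vec_distribute.reinit (vec, true);

    // ... import the needed entries, apply the constraints, copy back into vec ...
  }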

I am attaching some code with modifications that test MPI_Initialized
before calling Epetra_MpiComm and, if the test is false, divert the
Epetra_Maps to use Epetra_SerialComm instead. With these modifications
step-2 runs without any errors without MPI_Init, and I have done a
quick verification that step-31 produces the same results as 6.2.1
on the same machine. Right now the changes probably aren't elegant,
and I'm sure they aren't great for the efficiency of the actual MPI
methods (the changes basically make a map with Epetra_SerialComm and
then overwrite it with a map from Epetra_MpiComm if MPI_Init has been
called). I also expect that users who create TrilinosWrappers::MPI::Vector
objects before MPI_Init is run might see some strange errors; on the
other hand, the reinit method may avoid this.
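
The core of the idea is just the standard MPI_Initialized check. A
minimal sketch (the helper name and the build-and-return structure are
mine; the attached patch is organised differently, as described above):

  #include <mpi.h>
  #include <Epetra_Map.h>
  #include <Epetra_MpiComm.h>
  #include <Epetra_SerialComm.h>

  // Hypothetical helper: choose the communicator depending on whether
  // MPI_Init has already been called. MPI_Initialized is safe either way.
  Epetra_Map make_map (const int n_global_elements)
  {
    int mpi_is_up = 0;
    MPI_Initialized (&mpi_is_up);

    if (mpi_is_up)
      return Epetra_Map (n_global_elements, 0, Epetra_MpiComm (MPI_COMM_WORLD));
    else
      return Epetra_Map (n_global_elements, 0, Epetra_SerialComm ());
  }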

Either way, hopefully the files will be a useful suggestion of a
possible direction for avoiding the run-time errors.

Regards,
Michael

Attachment: trilinos_vector_rev.tar.gz
