Hi folks

I'm posting this here for want of any other obvious place.  As always,
if it should be redirected please let me know.


Questions:  How much does using the MPI wrappers influence the memory
management at runtime?  What has changed in this regard from 1.2.3 to
1.2.4?


The reason I ask is that I have an f90 code that does very strange
things.  The structure of the code is not all that straightforward, with
a "tree" of modules usually allocating their own storage (all with save
applied globally within the module).  When the code is compiled with
OpenMPI 1.2.4 coupled to a gcc 4.3.0 prerelease and run as a single
process (with no explicit mpirun), the elements of one particular array
seem to revert to their previous values somewhere between the point
where they are set and a later part of the code.  (I'll refer to this
as The Bug, and to having the matrix elements stay as set as "expected
behaviour".)

The most obvious explanation would be a coding error.  However,
compiling and running this with OpenMPI 1.2.3 gives me the expected
behaviour!  As does compiling and running with a different MPI
implementation and compiler set.  Replacing the prerelease gcc 4.3.0
with the released 4.2.2 gives the same results.

The Bug is unstable.  Removing calls to various routines in used modules
(calls that otherwise do not affect the results) restores the expected
behaviour at runtime.  So does removing a call to MPI_Recv that is never
actually executed.
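
To be clear about what I mean by a call that is never executed,
something like the following (again a made-up sketch with invented
names and sizes, not the real code):

  program dead_recv
    use mpi
    implicit none
    integer :: ierr, nprocs, status(MPI_STATUS_SIZE)
    real(kind=8) :: buf(10)

    call MPI_Init(ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    if (nprocs > 1) then
       ! Never reached in a single-process run, yet deleting this call
       ! changes the results.
       call MPI_Recv(buf, 10, MPI_DOUBLE_PRECISION, 0, 99, &
                     MPI_COMM_WORLD, status, ierr)
    end if

    call MPI_Finalize(ierr)
  end program dead_recv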

Because of this I can't reduce the problem to a small test case, and so
have not included any of the actual code at this stage.


OK, after writing all that I just tried running the code with an
explicit mpirun -np 1... and The Bug disappeared!

Does anyone care to comment?

Ciao
Terry


-- 
Dr Terry Frankcombe
Physical Chemistry, Department of Chemistry
Göteborgs Universitet
SE-412 96 Göteborg Sweden
Ph: +46 76 224 0887   Skype: terry.frankcombe
<te...@chem.gu.se>
