Dear All,
My FiPy/Trilinos code has a memory leak. How to recreate the problem:
0. prerequisite: trilinos built against MPI
1. set FIPY_SOLVER=Pysparse
2. run the anisotropy.py demo directly from the command line. Specifically,
make sure that nx=500 (or some other "big" number) and steps=1000 (or so).
3. monitor memory utilization for that python task. For example,
top -b -d 2 | grep python
4. watch while... nothing happens
5. now set FIPY_SOLVER=Trilinos
6. repeat step 2. You do *not* need to use mpiexec: one processor is
plenty. (I have tried both the trilinos default and LinearPCGSolver solvers,
using Grid1D and Grid2D examples.)
7. repeat step 3
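The recipe above can be scripted so the memory samples are logged rather than eyeballed in top. This is only a sketch: the demo path examples/phase/anisotropy.py and the sampling interval are assumptions, and reading /proc makes it Linux-only.

```python
import os
import subprocess
import time

def run_and_monitor(cmd, env_extra=None, interval=2.0):
    """Launch *cmd* and print its resident set size until it exits.

    Reads VmRSS from /proc/<pid>/status (Linux-only) and returns the
    list of sampled sizes in KiB, so growth is easy to spot or plot.
    """
    env = dict(os.environ, **(env_extra or {}))
    proc = subprocess.Popen(cmd, env=env)
    samples = []
    while proc.poll() is None:
        try:
            with open("/proc/%d/status" % proc.pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        samples.append(int(line.split()[1]))
                        print("RSS = %d KiB" % samples[-1])
        except IOError:  # process exited between poll() and open()
            break
        time.sleep(interval)
    return samples

# Steps 1-4: baseline run; RSS should stay flat with pysparse.
#   run_and_monitor(["python", "examples/phase/anisotropy.py"],
#                   env_extra={"FIPY_SOLVER": "Pysparse"})
# Steps 5-7: the same run against Trilinos; here RSS climbs immediately.
#   run_and_monitor(["python", "examples/phase/anisotropy.py"],
#                   env_extra={"FIPY_SOLVER": "Trilinos"})
```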
A few details:
o On our system (FiPy-2.1, Trilinos 9.0.3, OpenMPI 1.3.3, mpi4py 1.1.0) step
7 results in an immediate and obvious memory leak. (To call this a "leak"
is a little like calling what is happening in the Gulf a "spill".)
o pysparse shows no sign of leaking; I do not presently have a serial
trilinos build with which to test.
o neither running (modified) demos from the PyTrilinos example directory nor
test programs I wrote passing 100 MB arrays around with mpi4py recreated the
problem. Both tests are far from conclusive, as I have only the vaguest
understanding of their relationship to the anisotropy code.
While I begin the prayer and fasting rituals required to undertake more
trilinos builds, I am asking if some kind soul from the list has any
insight into this problem or would try this test and report what they see. For
example, did I forget to disable the ENABLE_CATASTROPHIC_MEMORY_GUSHER
option in trilinos (Default = True)?
Thanks and regards,
+jtg+