Update to the latest trunk version of pysparse; I fixed a memory
leak there just yesterday (I'll do a minor release of the tarball
either today or Monday). A number of memory leaks have been fixed in
pysparse over the last few months, as the logs will show. The memory
leak is extremely intermittent: it happens for me when I call
LinearGMRESSolver in pysparse, or with some combinations of solvers
and preconditioners in trilinos. It is in the find() method of
ll_mat, which we started using when we released the parallel version
of fipy.
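
If you want to check whether your pysparse build still leaks, a crude
test is to hammer find() in a loop while watching the process size in
top. This is just an illustrative sketch against the pysparse
spmatrix API (the matrix size and iteration count are arbitrary):

    # Exercise ll_mat.find() repeatedly; with the leaky version the
    # resident memory of the process grows on every iteration.
    from pysparse import spmatrix

    n = 1000
    A = spmatrix.ll_mat(n, n)
    for i in range(n):
        A[i, i] = 1.0   # put something on the diagonal

    for step in range(10000):
        # find() returns the nonzero values together with their row
        # and column indices; the leaked allocations were in this call.
        val, irow, jcol = A.find()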

There is also a separate memory leak in trilinos, in IFPACK. I think
the only preconditioner that uses it is the incomplete Cholesky
preconditioner, so you might not want to use that one. It's quite an
important preconditioner, though, so I'd like to get back to the
trilinos people once I can make a reasonable diagnosis. In the
meantime you can sidestep it by passing an explicit preconditioner,
as in the sketch below.
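
Something like the following should keep you off the IFPACK IC code
path. It's only a sketch: the import paths and the precon keyword are
from memory and may differ in your version of fipy.

    # Pick an explicit (non-IC) preconditioner instead of relying on
    # the default. The import paths here are assumptions; check your
    # FiPy installation.
    from fipy import LinearGMRESSolver
    from fipy.solvers.trilinos.preconditioners import JacobiPreconditioner

    solver = LinearGMRESSolver(precon=JacobiPreconditioner())
    # then hand it to the equation, e.g.
    #     eq.solve(var=phi, dt=timeStep, solver=solver)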

Try updating to the latest pysparse and make sure you are not using
the icPreconditioner. If you still have problems, either send me the
script or try running it through valgrind. If you want to try
valgrind, I can give you some pointers for using it with python; the
short version is below.
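
The main trick with valgrind is to use the suppressions file that
ships in the CPython source tree (Misc/valgrind-python.supp) so that
the false positives from python's own allocator don't swamp the
output. The paths and script name below are illustrative:

    valgrind --tool=memcheck --leak-check=full \
        --suppressions=/path/to/valgrind-python.supp \
        python yourscript.py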

Cheers

On Thu, Jun 3, 2010 at 7:34 PM, jtg <[email protected]> wrote:
>
> Dear All,
>
> My FiPy/Trilinos code has a memory leak.  How to recreate the problem:
>
> 0. prerequisite:  trilinos built against MPI
>
> 1. set FIPY_SOLVER=Pysparse
>
> 2. run the anisotropy.py demo directly from the command line. Specifically,
> make sure that nx=500 (or some other "big" number) and steps=1000 (or so).
>
> 3. monitor memory utilization for that python task.  For example,
>       top -b -d 2 | grep python
>
> 4. watch while... nothing happens
>
> 5. now set FIPY_SOLVER=Trilinos
>
> 6. repeat step 2.  You do *not* need to use mpiexec:  one processor is
> plenty. (I have tried both the trilinos default and linearPCGSolver solvers,
> using Grid1D and Grid2D examples.)
>
> 7. repeat step 3
>
>
> A few details:
>
> o On our system (FiPy-2.1, Trilinos 9.0.3, OpenMPI 1.3.3, mpi4py 1.1.0) step
> 7 results in an immediate and obvious memory leak.  (To call this a "leak"
> is a little like calling what is happening in the Gulf a "spill".)
>
> o pysparse shows no sign of leaking; I do not presently have a serial
> trilinos build with which to test.
>
> o running (modified) demos from the PyTrilinos example directory failed to
> recreate the problem; test programs I wrote passing 100MB arrays around with
> mpi4py failed to recreate the problem.  Both of these tests are far from
> conclusive, as I have only the vaguest understanding of their relationship
> to the anisotropy code.
>
> While I begin the prayer and fasting rituals required in order to undertake
> more trilinos builds, I am asking if some kind soul from the list has any
> insight into this problem or would try this test and report what they see.
> For example, did I forget to disable the ENABLE_CATASTROPHIC_MEMORY_GUSHER
> option in trilinos (Default = True)?
>
> Thanks and regards,
> +jtg+
>
>



-- 
Daniel Wheeler

