Hello all,

This is not technically a Cactus question (I am actually using a
different code), but maybe someone on the list has experienced this
before.

I am finding that, with the (mostly) default stack of modules on
Stampede, i.e. with the following loaded:

module list
Currently Loaded Modules:
  1) TACC            6) intel/13.0.2.146  11) petsc/3.3
  2) TACC-paths      7) mvapich2/1.9a2    12) python/2.7.3-epd-7.3.2
  3) Linux           8) fftw3/3.3.2       13) git/1.8.1.1
  4) cluster         9) gsl/1.15
  5) cluster-paths  10) phdf5/1.8.9

I get a linear increase in memory usage (as measured via
/proc/self/stat) that eventually kills the run. After switching the MPI
stack to Intel's MPI (module swap mvapich2 impi/4.1.1.03), on the other
hand, I do not see this issue at all, but unfortunately I can then no
longer load the petsc module.
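
For reference, this is roughly what I mean by measuring via
/proc/self/stat; a minimal sketch only, not the actual code from my run
(it assumes a Linux /proc layout and a process name without spaces):

  /* Minimal sketch: read vsize (field 23, in bytes) and rss (field 24,
     in pages) from /proc/self/stat on Linux. */
  #include <stdio.h>

  int main(void)
  {
    unsigned long vsize = 0;
    long rss = 0;
    FILE *f = fopen("/proc/self/stat", "r");
    if (f) {
      /* skip the first 22 fields, then read vsize and rss */
      if (fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u "
                    "%*u %*u %*u %*d %*d %*d %*d %*d %*d %*u %lu %ld",
                 &vsize, &rss) != 2)
        vsize = rss = 0;
      fclose(f);
    }
    printf("vsize = %lu bytes, rss = %ld pages\n", vsize, rss);
    return 0;
  }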

Since I'd like to use petsc, does anyone know a workaround to reduce
mvapich2's memory consumption on Stampede? Memory fragmentation does
not seem to be the reason: at least mallinfo does not show an increase
in the memory that has been allocated from the OS but is not used by
the application (fordblks).
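
This is the kind of check I mean; again only a sketch (and note that
glibc's mallinfo reports plain int fields, which can wrap around once
the heap grows beyond a couple of GB):

  /* Minimal sketch: glibc mallinfo(); fordblks is the free space held
     inside the allocator, i.e. obtained from the OS but not currently
     in use by the application. */
  #include <malloc.h>
  #include <stdio.h>

  int main(void)
  {
    struct mallinfo mi = mallinfo();
    printf("arena    (heap from OS via sbrk): %d bytes\n", mi.arena);
    printf("hblkhd   (from OS via mmap):      %d bytes\n", mi.hblkhd);
    printf("uordblks (in use by application): %d bytes\n", mi.uordblks);
    printf("fordblks (free inside allocator): %d bytes\n", mi.fordblks);
    return 0;
  }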

For illustration, attached is a plot of memory consumption (in GB) vs.
simulation time for all processes in the run. Clearly IMPI performs
better for the two processes using the most memory (the black and
orange lines).

Yours,
Roland

Attachment: IntelMPI_Memory_long.pdf
