Dear libMesh team,

Is there any chance that the memory scaling of 
System::current_local_solution will be improved in the near future?

In my application, I have a large number of systems and a large number 
of cells, and the fact that System::current_local_solution is always a 
serial vector seems to destroy the memory scalability completely.  The 
cluster I'm computing on has 8 CPUs per node, but I cannot use more 
than two (or perhaps three) of them, since otherwise the nodes run out 
of memory.

I don't think that the (serial) grid itself is the decisive factor in 
my application.  (I watched the memory consumption: it remains small 
during grid creation and increases drastically when the systems are 
created and initialized.)

I think that adding the required functionality to NumericVector and 
PetscVector would not be too complicated (PETSc's VecCreateGhost() 
seems to do the trick).  I can try to do this part myself, that is, 
add a constructor that additionally takes a list of ghost indices to 
store (and implement everything on the PETSc side).  The other part is 
to make the System class use that new constructor (in the PETSc case 
at least) and, in particular, determine which indices are actually 
required -- and to do the correct thing when the grid is 
refined/coarsened.  I don't feel able to implement that part because 
I'm not familiar enough with the internals of libMesh.

Let me know what you guys think.

Best Regards,

Tim

-- 
Dr. Tim Kroeger
[email protected]            Phone +49-421-218-7710
[email protected]            Fax   +49-421-218-4236

Fraunhofer MEVIS, Institute for Medical Image Computing
Universitaetsallee 29, 28359 Bremen, Germany

