I have just committed a change to the DofObject that completely restructures
its internal storage.  It now uses a contiguous buffer for all index
information.  I think this is something John wanted me to do about 5 years
ago now...

Two primary goals here:

(1) make it more obvious to myself how this class can easily be packed and
communicated via MPI, and
(2) reduce the malloc/delete bookkeeping inside the class.
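For anyone curious, here is a minimal sketch of the contiguous-buffer idea.
The class name, layout, and methods below are mine for illustration only,
not the actual DofObject code: an offset table plus one flat index array,
so there is a single allocation per object and the whole thing can be
shipped as plain words over MPI.

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <vector>

// Hypothetical sketch of contiguous index storage.  Each system's
// indices sit back-to-back in one buffer; a small offset table marks
// where each system's range begins.
class PackedIndices
{
public:
  // Append a new system's indices at the end of the buffer.
  void add_system (const std::vector<uint32_t> & indices)
  {
    _offsets.push_back (static_cast<uint32_t>(_buf.size()));
    _buf.insert (_buf.end(), indices.begin(), indices.end());
  }

  // Iterator range for system s, positioned via the offset table.
  const uint32_t * begin (unsigned s) const
  { return _buf.data() + _offsets[s]; }

  const uint32_t * end (unsigned s) const
  {
    return (s + 1 < _offsets.size())
      ? _buf.data() + _offsets[s + 1]
      : _buf.data() + _buf.size();
  }

  std::size_t n_systems () const { return _offsets.size(); }

private:
  std::vector<uint32_t> _offsets; // start of each system's range
  std::vector<uint32_t> _buf;     // all indices, back-to-back
};
```

Since everything lives in two flat arrays, packing for communication is
just a copy of the offset table followed by the buffer -- no per-system
pointer chasing or separate mallocs to bookkeep.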

I think I accomplished both, but at a slight expense in code clarity.  It
works on everything I can test it on, but please let me know if you run
into issues.  It took a couple pieces of paper and some note scribbling to
figure out where to position the iterators and properly handle system
addition, I can tell you.

A nice side effect is that it uses less memory when you start adding
systems.

As a test case I created a 3D mesh and added 3 explicit systems: #1 has 9
DOFs, #2 has 10, and #3 has 1.  I ran the old and new implementations through
massif, valgrind's heap profiler.  The old implementation peaked at 775MB;
the new one peaks at 679MB.

I think I can get this down even further - in my case every element has
0 components for each var (pure Lagrange basis), yet I am still allocating
storage for those indices.  I don't see an easy way to avoid this with the
new data structure, but I am going to think about it some more...

I also found that we were setting old_dof_objects even when they were not
needed in the multiple-systems case.  I fixed that, which reduced the
memory requirement further, from 679MB to 539MB.

-Ben



_______________________________________________
Libmesh-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-devel
