This was originally an off-list discussion, but after writing this
reply I realized that my last two paragraphs are of interest to other
developers, and the rest may inspire ideas from other users:


---------- Forwarded message ----------
Date: Fri, 23 Sep 2011 11:53:10 -0500 (CDT)
From: Roy Stogner <[email protected]>
To: Truman Ellis <[email protected]>
Subject: Re: Allocating RHS


On Fri, 23 Sep 2011, Truman Ellis wrote:

> I've run into an issue with my local solve for the optimal test functions. I 
> am going to need to assemble multiple right hand
> sides corresponding to each trial variable and degree of freedom. My idea was 
> to create a
> std::vector<std::vector<DenseVector<Number> > > test_F
> where the first index corresponds to the index of the trial variable, and the 
> second to the index of the DOFs for that variable.

We're in a coupled system, so at both global and local levels it's not
hard to get unique DoF indices even when considering all variables at
once.  But I guess the extra level of vector<> indirection is likely
to lead to simpler code; I've done it for that reason elsewhere.
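For concreteness, here's roughly the kind of layout I picture -- an
untested sketch with placeholder names, assuming dof_indices_var has
already been filled for the current element (header paths may differ
between libMesh versions):

  #include "libmesh/dense_vector.h"
  #include <cstddef>
  #include <vector>

  using namespace libMesh;

  // test_F[v][i] is the local RHS for trial variable v, local dof i of v
  std::vector<std::vector<DenseVector<Number> > > test_F;

  void resize_test_F (const std::vector<std::vector<unsigned int> > & dof_indices_var,
                      const unsigned int n_test_dofs)
  {
    test_F.resize(dof_indices_var.size());
    for (std::size_t v = 0; v != dof_indices_var.size(); ++v)
      {
        test_F[v].resize(dof_indices_var[v].size());
        for (std::size_t i = 0; i != dof_indices_var[v].size(); ++i)
          test_F[v][i].resize(n_test_dofs);  // resize() also zeroes the entries
      }
  }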

> I was going to create a corresponding
> std::vector<std::vector<std::vector<DenseSubVector<Number> *> > > test_subF;
> 
> The problem is that in the constructor of DPGContext, no particular element
> has been selected, and thus dof_indices_var[v].size() is zero for each
> variable v. This also raises the question of how to handle p refinement.

Don't forget hybrid meshes.  Even with constant p the number of DoFs
can change every time the geometric element type does.

> I know elem_residuals just does a resize, but since we will have a
> variable number of test_subF[v][d].push_back(new
> DenseSubVector<Number>(test_F)) we would probably have to
> dynamically allocate and delete memory. This doesn't sound like the
> optimal solution.

phi, dphi, etc. all do resizes too.  It doesn't sound optimal, but
nothing bad has ever shown up in benchmarking.  On the other hand,
there are only two levels of resize there, and having to do even more
might make a difference.  With one level of resize there wouldn't have
been a problem at all, since std::vector typically never *really*
reduces its own size; it just leaves reserved memory for future
increases.
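If it helps to see why the repeated resizes are usually harmless,
here's a tiny stand-alone illustration (strictly speaking the behavior
is implementation-defined, but in practice capacity is kept when a
vector shrinks):

  #include <iostream>
  #include <vector>

  int main ()
  {
    std::vector<double> v;
    v.resize(100);                // element with many dofs
    std::cout << "capacity after growing:   " << v.capacity() << std::endl;
    v.resize(10);                 // element with fewer dofs
    std::cout << "capacity after shrinking: " << v.capacity() << std::endl;
    v.resize(80);                 // usually reuses the old allocation
    std::cout << "capacity after regrowing: " << v.capacity() << std::endl;
    return 0;
  }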

> Can you think of anything better?

boost::multi_array would at least have the memory allocations happen
in big contiguous blocks rather than a bunch of little allocations.
But I don't know if boost is "smart" enough to leave the stride length
unchanged when doing reductive resizes (since for most people the
efficiency tradeoffs of that kind of memory reservation would no
longer be smart as you go past 1D).
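Something like the following is what I mean by big contiguous blocks --
a rough, untested sketch; whether multi_array::resize() hangs on to the
old buffer in the shrinking case is exactly the part I'm unsure about:

  #include <boost/multi_array.hpp>
  #include <algorithm>
  #include <cstddef>

  typedef boost::multi_array<double, 2> LocalRHS;  // [trial dof][test dof]

  void resize_local_rhs (LocalRHS & F,
                         const std::size_t n_trial_dofs,
                         const std::size_t n_test_dofs)
  {
    // one allocation (at most) for the whole 2D block
    F.resize(boost::extents[n_trial_dofs][n_test_dofs]);

    // zero the single underlying buffer
    std::fill(F.data(), F.data() + F.num_elements(), 0.);
  }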

What I've always wanted to do eventually is encapsulate the needs of
libMesh multiarrays into a short little inline/templated class, then
try using that class as a shim to vector<vector<> >, multi_array,
Eigen, etc. to see what works faster.
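To make the "shim" idea concrete: the interface could be as small as
something like this (hypothetical sketch, only the operations the
assembly loop actually needs), with nested vector<>, multi_array,
Eigen, etc. plugged in behind it for benchmarking:

  #include <cstddef>
  #include <vector>

  template <typename T>
  class ShimArray2D
  {
  public:
    ShimArray2D () : _n(0) {}

    // reallocate to m x n and zero-initialize, in one contiguous block
    void resize (const std::size_t m, const std::size_t n)
    {
      _n = n;
      _data.assign (m*n, T());
    }

    T & operator() (const std::size_t i, const std::size_t j)
    { return _data[i*_n + j]; }

    const T & operator() (const std::size_t i, const std::size_t j) const
    { return _data[i*_n + j]; }

  private:
    std::size_t _n;
    std::vector<T> _data;
  };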

My suggestion to you would be to stick with nested vector<> for now;
if performance benchmarking shows a problem with that then we'll
change "eventually" to "right now".
---
Roy
