On Wed, 13 Feb 2013, Cody Permann wrote:
We don't want every processor to know about every
boundary id, only for every id on its own part of the boundary.
I don't think this'll be what we want to do for those maps you added,
though.
Hmm, so which is it? If you don't want ids everywhere, why would you want
strings everywhere?
It's not "value==boundary_id_type" vs "value==string" that I was
thinking about, it's "key==Node*" (or Elem*,side) vs
"key==boundary_id_type".
The boundary ids are indexed by node and elem; i.e. on a typical mesh
with a million elements and a dozen side sets, you've got tens of
thousands of ids to worry about, and the number keeps growing with the
size of the coarse mesh. The boundary names are indexed by id; i.e.
on that same mesh you've got a dozen strings to pass around, and the
number is fixed for a given BVP.
Those ought to be serialized, right? It would save a little
memory but add a lot of communication if we tried to have each rank
only store labels corresponding to boundary ids that had some support
on that rank's partition.
Ah, so we have to decide on which trade-off we want...
Right. But the more I think about it: communication
efficiency means we definitely wouldn't want to stick boundary id
names into the Node/Elem packing, but that means we have to send the
boundary id names separately, and so we could always get the set of
boundary ids that are supported on a given set of Nodes/Elems and then
send just those names. Combine that with a deletion step when
removing nonsemilocal elements and there's no reason it would have to
be inefficient to keep the boundary names fully parallelized.
I'd love to add Parallel::pack/unpack/etc. implementations for
std::pair and std::string.
Then you'd be able to broadcast each whole
map with a single function call.
std::string for sure, but std::pair? Why not add an implementation
for std::map that just sends pairs of vectors?
If we do std::pair, then we get std::map automatically, and we also
get multimap, unordered_map/multimap, GCC's hash versions, Google's
btree versions, whatever, for free.
We can also often do fancier things when passing iterators than we can
when passing containers. This is why much of the mesh redistribution
code is so simple now - for example, if we need to send elements in
ascending order by level so as to build up the parent pointers
correctly, we don't have to copy each set into containers of Elem
pointers first, we just pass the same predicate-iterators that libMesh
already provides into packed_range Parallel:: routines.
I guess we'll see - I don't know if we need this right now or not.
We have a huge backlog all the time so we just have to decide what
gets done and what doesn't, I'm sure you know the feeling.
Indeed, this sentence makes me want to laugh and cry all at once.
---
Roy
_______________________________________________
Libmesh-devel mailing list
Libmesh-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-devel