On Thu, 15 Jan 2009, Kirk, Benjamin (JSC-EG) wrote:

> I believe this should be handled by the send_list.

You're right that it should be possible to make the send_list
redundant with (and then use it for) PETSc's local->global lookup, but
I'm less sure whether PETSc's APIs make that easy, and whether we can
do it without introducing too much coupling between DofMap and
NumericVector ourselves.
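
For what it's worth, here's a minimal sketch of the direction I'm
imagining, assuming the send_list already holds the global indices of
the off-processor dofs we need.  The PETSc calls are real; the names
and sizes around them are made up for illustration, and error checking
is omitted:

  // Sketch: reuse a send_list-style array of off-processor global dof
  // indices as the ghost list for a PETSc ghosted vector.
  // "n_local_dofs", "n_global_dofs", and "send_list" are stand-ins for
  // whatever the DofMap actually provides.
  #include <petscvec.h>
  #include <vector>

  Vec build_ghosted_vec(MPI_Comm comm,
                        PetscInt n_local_dofs,
                        PetscInt n_global_dofs,
                        const std::vector<PetscInt> &send_list)
  {
    Vec v;
    // PETSc takes the ghost (off-processor) global indices directly
    // and builds the local->global scatter for us.
    VecCreateGhost(comm, n_local_dofs, n_global_dofs,
                   static_cast<PetscInt>(send_list.size()),
                   send_list.data(), &v);
    return v;
  }

  // After the owned entries change, refresh the ghost copies with a
  // forward scatter:
  void update_ghosts(Vec v)
  {
    VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_FORWARD);
    VecGhostUpdateEnd(v, INSERT_VALUES, SCATTER_FORWARD);
  }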

> Main advantage here is (i) the data already exist, and (ii) the integer
> index array will be small.

It's possible that a proper hash table would do better than the binary
search required by an integer array... but that's fine; no need to
make the system unnecessarily complex for the theoretical possibility
of a small performance boost.
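
To be concrete about the lookup I mean (translating a ghost dof's
global index into its slot in local storage), here are the two options
side by side.  Function and container names are illustrative only, not
existing libMesh API:

  #include <algorithm>
  #include <unordered_map>
  #include <vector>

  // Option 1: binary search over the sorted send_list -- O(log n) per
  // lookup, no storage beyond the array we already keep.
  inline std::size_t
  ghost_offset(const std::vector<unsigned int> &sorted_send_list,
               unsigned int global_index)
  {
    auto it = std::lower_bound(sorted_send_list.begin(),
                               sorted_send_list.end(), global_index);
    return static_cast<std::size_t>(it - sorted_send_list.begin());
  }

  // Option 2: a hash table built once from the send_list -- O(1)
  // expected per lookup, at the cost of duplicating the index data.
  inline std::unordered_map<unsigned int, std::size_t>
  build_ghost_map(const std::vector<unsigned int> &send_list)
  {
    std::unordered_map<unsigned int, std::size_t> map;
    map.reserve(send_list.size());
    for (std::size_t i = 0; i != send_list.size(); ++i)
      map[send_list[i]] = i;
    return map;
  }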

> As Roy asks/suggests, though, this information could *still* be
> shared among several vectors with the same partitioning if desired.

The only trouble would be when the send_list is updated before the
vectors (which are storing a pointer/reference to it?) are changed to
match.  System::project_vector doesn't need to use
current_local_solution now, because it builds inefficient temporary
serialized vectors to handle potential major partitioning changes...
but I don't want to make it harder to fix that in the future.
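
To make the hazard concrete, this is roughly the pattern I'm worried
about; every class name below is hypothetical, not an existing libMesh
type:

  #include <memory>
  #include <vector>

  struct GhostIndexList            // stand-in for a shared send_list
  {
    std::vector<unsigned int> indices;
  };

  struct GhostedVector             // stand-in for a ghosted NumericVector
  {
    std::shared_ptr<const GhostIndexList> ghosts;
    std::vector<double> values;    // owned entries, then ghost copies

    // Must be called whenever *ghosts changes, or the ghost slots in
    // "values" no longer line up with the indices the list describes.
    void reinit(std::size_t n_owned)
    { values.assign(n_owned + ghosts->indices.size(), 0.0); }
  };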

> BTW, I think I promised this a while ago, so I am delinquent in my delivery.
> Roy, I'd be happy to help.  I think we should copy off NumericVector and
> work on it in parallel (pun intended?) to the existing implementation.
> It should then be a drop-in replacement when we are done.

That might be the safe way to do things.  Are we going to hold Tim up,
though, if we give too much "help"?  I've already got a forked-off
FEMSystem that I'm finishing up, and you're getting us per-subdomain
variables; it might be fastest if we let Tim concentrate on
NumericVector changes first, then we hook in the ghost index list
second (at which point the code should be working), then we figure out
how/whether to keep from duplicating the send_list data third.
---
Roy
