I seem to recall that when localizing a parallel PETSc vector, it wants all 
the indices mapping from source to destination, including the local ones.

Going forward, that should be a much less common activity, so I have no problem 
changing it.
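
For reference, a minimal sketch of the pattern in question, assuming the 
usual libMesh interfaces (DofMap::get_send_list(), NumericVector::localize()); 
it's roughly what System::update() does, written with current include paths 
rather than as a verbatim copy of the library code:

  // Sketch: localize the parallel solution into current_local_solution
  // using the send_list, which (as discussed) currently holds locally
  // owned dof indices as well as ghost indices.
  #include "libmesh/system.h"
  #include "libmesh/dof_map.h"
  #include "libmesh/numeric_vector.h"

  void update_local_solution (libMesh::System & system)
  {
    // The send_list as the library builds it today: local + ghost indices.
    const auto & send_list = system.get_dof_map().get_send_list();

    // If PETSc really needs the full src->dest mapping, the local
    // indices have to stay in send_list; otherwise the ghost indices
    // alone would suffice.
    system.solution->localize (*system.current_local_solution, send_list);
  }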

-Ben



----- Original Message -----
From: Roy Stogner <[email protected]>
To: Tim Kroeger <[email protected]>
Cc: Kirk, Benjamin (JSC-EG); [email protected] 
<[email protected]>
Sent: Mon Jan 19 12:56:11 2009
Subject: Re: [Libmesh-devel] Memory scaling of current_local_solution


On Mon, 19 Jan 2009, Tim Kroeger wrote:

> Please find attached the first version of "my part".

Looks good so far.

> I haven't tested it because I had no idea how to do that.  Well, at
> least it compiles. Perhaps it is best that you guys implement your
> part now and then run one of the examples on it, and in the case
> that it doesn't work as expected, we will have to somehow cooperate
> in finding the bugs.  If you have a better idea, let me know.

That sounds reasonable.  And our part should be simple enough: just
plugging in the send_list at the right times.

But seeing your patch makes me a little perturbed at the way the
send_list is being handled in the library right now, so, Ben:

Are we really populating the send_list with local dof indices as well
as ghost indices?

If so: do we really need to do that?  (i.e. is there some PETSc or
Trilinos requirement?)  I'd think the distributed linear algebra would
only need to know ghost indices plus first and last local indices to
handle the synchronization properly.
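
As a point of comparison, here is a sketch (mine, not anything the library 
does verbatim) of setting up a ghosted vector directly through PETSc's 
VecCreateGhost(), which takes only the ghost indices; the owned range is 
implied by the local size.  The function and parameter names are made up 
for illustration, and error checking is omitted:

  // Sketch: create a ghosted PETSc vector from ghost indices only.
  // The locally owned range is implied by n_local; local indices are
  // never listed explicitly.
  #include <petscvec.h>
  #include <vector>

  void make_ghosted_vec (MPI_Comm comm,
                         PetscInt n_local,   // locally owned dofs
                         PetscInt n_global,  // global dof count
                         const std::vector<PetscInt> & ghost_dofs,
                         Vec * v)
  {
    VecCreateGhost (comm, n_local, n_global,
                    static_cast<PetscInt>(ghost_dofs.size()),
                    ghost_dofs.data(), v);

    // Later, after the owning processors update their values:
    //   VecGhostUpdateBegin (*v, INSERT_VALUES, SCATTER_FORWARD);
    //   VecGhostUpdateEnd   (*v, INSERT_VALUES, SCATTER_FORWARD);
  }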
---
Roy
