> I just went through the code looking for places where we could
> profitably use a two-part close, and the best thing I could come up
> with is that it might potentially be a more efficient way to close
> multiple vectors/matrices at once.  But that's not worth futzing with
> the API for; if someone ever does find some computation to usefully
> slip in between assembly and solve, we'll add close_start() and
> close_finish() member functions then.

I think you could provide a hook, using a function pointer, to be called
between the assembly_begin and assembly_end calls. Making it an optional
argument would also preserve backward compatibility. Just a thought!
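
For what it's worth, a rough sketch of what that could look like (purely
hypothetical: close_start()/close_finish() and the hook argument do not
exist in the current API):

  // Hypothetical sketch only, not the actual libMesh NumericVector interface.
  // close() takes an optional function pointer that runs after the
  // asynchronous communication has been started but before we block on it.
  typedef void (*close_hook)(void * context);

  template <typename T>
  class NumericVector
  {
  public:
    virtual void close_start ()  = 0;  // would wrap e.g. VecAssemblyBegin()
    virtual void close_finish () = 0;  // would wrap e.g. VecAssemblyEnd()

    // Defaulted arguments keep existing close() call sites compiling unchanged.
    void close (close_hook hook = 0, void * context = 0)
    {
      close_start();
      if (hook)
        hook(context);   // user computation overlapped with the communication
      close_finish();
    }
  };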

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:libmesh-devel-[EMAIL PROTECTED] On Behalf Of Roy Stogner
> Sent: Wednesday, June 25, 2008 3:36 PM
> To: Benjamin Kirk
> Cc: [email protected]
> Subject: Re: [Libmesh-devel] Matrix Free Memory Scaling with ParallelMesh
> 
> 
> On Wed, 25 Jun 2008, Benjamin Kirk wrote:
> 
> >>> DistributedVector<Real> vec(n_global, n_local,
> >>>                             array_of_remote_indices);
> >>>
> >>> Where "vec" is a vector which stores n_local entries, and also has
> >>> "ghost storage" for array_of_remote_indices.size() entries.
> >>
> >> We'd also need an "unsigned int first_local_index" argument for the
> >> offset of n_local, right?  In any case that sounds like an excellent
> >> idea.
> >
> > Is there ever a case where you need it?  Was thinking it can always be
> > computed from the partial sums of n_local over all the processors
> > lower than you.
> 
> Hmm... good point.  But since we already do that operation in the
> DofMap, we could probably avoid doing it a second time (except in
> debug mode, where we'd double-check) by making vector constructors
> more explicit.  It's not a big communication, but it is all-to-all.
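
(For illustration, that "partial sums of n_local" offset could be computed
with an exclusive prefix sum; whether the DofMap actually does it this way
or with an allgather of the local sizes is not shown here, this is just a
sketch.)

  #include <mpi.h>

  // Each processor's first_local_index is the sum of n_local over all
  // lower-ranked processors, i.e. an exclusive prefix sum (MPI_Exscan).
  unsigned int compute_first_local_index (unsigned int n_local, MPI_Comm comm)
  {
    unsigned int offset = 0;
    MPI_Exscan (&n_local, &offset, 1, MPI_UNSIGNED, MPI_SUM, comm);

    // MPI_Exscan leaves the receive buffer undefined on rank 0,
    // so pin it to zero there explicitly.
    int rank;
    MPI_Comm_rank (comm, &rank);
    if (rank == 0)
      offset = 0;

    return offset;
  }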
> 
> > Similarly, n_global really would not be used except to assert that it
> > is the same as the sum of n_local.
> 
> In that case let's leave it out too; forcing n_global to be supplied but
> then computing local offsets inside the constructor pretty much forces
> redundant communication, I think.
> 
> >> While you're explaining the fundamentals of our parallel vector
> >> structures to me, could you explain the VecAssemblyBegin /
> >> VecAssemblyEnd pair in PetscVector::close?  That's what does all the
> >> communication required by VecSetValues, right?  What's the reason for
> >> the dual API call; does the communication get started asynchronously
> >> by Begin() and then End() blocks waiting for completion?
> >
> > That is absolutely the case.  I fretted for a while about implementing
> > them separately, but I feared there would be a million places you could
> > get out of sync.  For example, you call vec.close_start() at the end of
> > your matrix assembly routine, then vec.close_finish() before the linear
> > solver?  What computation would you want to slip between those two?  I
> > figured we'd be setting ourselves up for a most frequently-asked
> > question, and it doesn't seem to have hurt anything to date...
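
(The idiom being described is the standard PETSc one; roughly, what
PetscVector::close() presumably boils down to, with error checking
omitted:)

  #include <petscvec.h>

  void set_and_close (Vec v, PetscInt i, PetscScalar value)
  {
    // Stash a (possibly off-processor) value; no communication happens yet.
    VecSetValues (v, 1, &i, &value, INSERT_VALUES);

    // Begin() starts the off-processor exchange asynchronously...
    VecAssemblyBegin (v);

    // ...so independent local work could in principle go here...

    // ...and End() blocks until the communicated values have arrived.
    VecAssemblyEnd (v);
  }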
> 
> I just went through the code looking for places where we could
> profitably use a two-part close, and the best thing I could come up
> with is that it might potentially be a more efficient way to close
> multiple vectors/matrices at once.  But that's not worth futzing with
> the API for; if someone ever does find some computation to usefully
> slip in between assembly and solve, we'll add close_start() and
> close_finish() member functions then.
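
(If such members were ever added, "closing multiple vectors at once" might
look something like the following; close_start()/close_finish() are still
hypothetical here.)

  #include <cstddef>
  #include <vector>

  // Assumes some vector type V with the hypothetical split-close members
  // discussed above.
  template <typename V>
  void close_all (std::vector<V *> & vecs)
  {
    // Kick off the asynchronous communication for every vector first...
    for (std::size_t i = 0; i != vecs.size(); ++i)
      vecs[i]->close_start();

    // ...then wait on each one, so the exchanges can proceed concurrently.
    for (std::size_t i = 0; i != vecs.size(); ++i)
      vecs[i]->close_finish();
  }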
> 
> If the PETSc people went to all the trouble of adding asynchronous
> communication to their assembly code, were they smart enough to start
> that communication as soon as possible?  Because there is one bit of
> computation that could definitely be started before the start of the
> assembly has been communicated: the rest of the assembly.
> ---
> Roy
> 

