Garth Wells <gn...@cam.ac.uk> writes:

> On Thu, 2018-03-08 at 11:25 -0700, Jed Brown wrote:
>> Garth Wells <gn...@cam.ac.uk> writes:
>> 
>> > On Wed, 2018-03-07 at 16:13 -0700, Jed Brown wrote:
>> > > No reason, just didn't need it.  I don't think any PETSc
>> > > developers
>> > > use
>> > > VecSetValues* because VecScatter is a more natural interface with
>> > > lower
>> > > overhead.
>> > > 
>> > 
>> > Any demos of its recommended use?
>> 
>> snes/examples/tutorials/ex28.c is my recommended demo for both
>> MatGetLocalSubMatrix() and composite vectors.  You can follow the
>> example (using DMComposite) or call VecScatter directly.
>
> Thanks, Jed. Could you expand on VecSetValues* vs VecScatter? I'm a bit
> perplexed because I can't find anything in the docs discussing use of 
> VecSetValues* vs VecScatter.

Most PETSc examples create a local vector and use VecScatterBegin/End
(or DMLocalToGlobalBegin/End, which wrap it) to communicate from the
local vector to the global vector.  This is typical when using finite element
methods with a non-overlapping element partition.  (Some low-order FEM
practitioners do a bit of redundant computation to assemble using a
vertex partition.)  The advantage of VecScatter is that it knows in
advance exactly what values are moving where.  In contrast, VecSetValues
+ VecAssemblyBegin/End needs to determine where all the entries are
going (sort of like a reduction operation, which can be more efficient
when MPI-3 is available) and how many to expect.  If you need to use
VecAssemblyBegin/End and it is a bottleneck, there is an optimization to
cache the communication pattern, provided you promise that later
assemblies only communicate a subset of the values communicated in the
first assembly.  But when VecScatter does
the job, it tends to be more convenient and a bit faster.
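
Roughly, the two styles look like this (a minimal sketch, not taken
from any particular example; the helper names are made up, and the
commented VecSetOption line assumes a PETSc version new enough to have
VEC_SUBSET_OFF_PROC_ENTRIES):

  #include <petscdm.h>

  /* Style 1: assemble into a local (ghosted) vector, then scatter.
     The communication pattern is known up front from the DM. */
  static PetscErrorCode AssembleViaLocalToGlobal(DM dm, Vec F)
  {
    Vec            Floc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = DMGetLocalVector(dm, &Floc);CHKERRQ(ierr);
    ierr = VecZeroEntries(Floc);CHKERRQ(ierr);
    /* ... sum element contributions into the local vector Floc ... */
    ierr = VecZeroEntries(F);CHKERRQ(ierr);
    ierr = DMLocalToGlobalBegin(dm, Floc, ADD_VALUES, F);CHKERRQ(ierr);
    ierr = DMLocalToGlobalEnd(dm, Floc, ADD_VALUES, F);CHKERRQ(ierr);
    ierr = DMRestoreLocalVector(dm, &Floc);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  /* Style 2: set (possibly off-process) entries directly, then let
     VecAssemblyBegin/End work out where everything goes. */
  static PetscErrorCode AssembleViaSetValues(Vec F, PetscInt nrows,
                                             const PetscInt rows[],
                                             const PetscScalar vals[])
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    /* If later assemblies only ever move a subset of the entries moved
       in the first assembly, the (version-dependent) option
       VecSetOption(F, VEC_SUBSET_OFF_PROC_ENTRIES, PETSC_TRUE)
       lets PETSc cache the communication pattern. */
    ierr = VecSetValues(F, nrows, rows, vals, ADD_VALUES);CHKERRQ(ierr);
    ierr = VecAssemblyBegin(F);CHKERRQ(ierr);
    ierr = VecAssemblyEnd(F);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

In the first variant the scatter pattern comes from the DM, so nothing
has to be negotiated at assembly time; in the second, PETSc has to
discover the off-process destinations during VecAssemblyBegin/End.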
