On Sep 17, 2014, at 7:29 PM, Matthew Knepley <[email protected]> wrote:

> On Wed, Sep 17, 2014 at 5:08 PM, Barry Smith <[email protected]> wrote:
> 
> On Sep 17, 2014, at 3:38 PM, Matthew Knepley <[email protected]> wrote:
> 
> > On Wed, Sep 17, 2014 at 3:12 PM, Barry Smith <[email protected]> wrote:
> >
> > On Sep 17, 2014, at 3:03 PM, Patrick Sanan <[email protected]> wrote:
> >
> > > On 9/16/14 9:43 PM, Barry Smith wrote:
> > >> On Sep 16, 2014, at 2:29 PM, Matthew Knepley <[email protected]> wrote:
> > >>
> > >>> On Tue, Sep 16, 2014 at 2:23 PM, Barry Smith <[email protected]> wrote:
> > >>>
> > >>>    Patrick,
> > >>>
> > >>>      This "local part of the subdomains for this processor" term in
> > >>> PCASMSetLocalSubdomains is, IMHO, extremely confusing. WTHWTS? Anyways,
> > >>> I think that if you set the is_local[] to be different from the is[]
> > >>> you will always end up with a nonsymmetric preconditioner. I think for
> > >>> one dimension you need to use
> > >>>
> > >>> No I don't think that is right. The problem below is that you have 
> > >>> overlap in only one direction. Process 0 overlaps
> > >>> Process 1, but Process 1 has no overlap of Process 0. This is not how 
> > >>> Schwarz is generally envisioned.
> > >>   Sure it is.
> > >>> Imagine the linear algebra viewpoint, which I think is cleaner here.
> > >>> You partition the matrix rows into non-overlapping
> > >>> sets. These sets are is_local[]. Then any information you get from
> > >>> another domain is another row, which is put into
> > >>> is[]. You can certainly have a non-symmetric overlap, which you have
> > >>> below, but it means one-way information
> > >>> transmission, which is strange for convergence.
> > >>   No, not at all.
> > >>
> > >>
> > >> |    0      1      2      3     4      5     6    |
> > >>
> > >>   Domain 0 is the region from  |   to  4  with Dirichlet boundary 
> > >> conditions at each end (| and 4). Domain 1 is from 2 to | with Dirichlet 
> > >> boundary conditions at each end (2 and |).
> > >>
> > >>   If you look at PCSetUp_ASM() and PCApply_ASM() you'll see all
> > >> kinds of VecScatter creations from the various is and is_local
> > >> ("restriction", "prolongation", and "localization"); then in the apply the
> > >> different scatters are applied in the two directions, which results in a
> > >> non-symmetric operator.
> > >
> > > I was able to get my uniprocessor example to give the (symmetric) 
> > > preconditioner I expected by commenting out the check in PCSetUp_ASM 
> > > (line 311 in asm.c)
> >
> >     if (firstRow != lastRow) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_PLIB, 
> > "Specified ASM subdomain sizes were invalid: %d != %d", firstRow, lastRow);
> >
> >    This check is absurd and needs to be removed.
> >
> > > and using PCASMSetLocalSubdomains with the same (overlapping) IS's for 
> > > both is and is_local ([0 1 2 3] and [3 4 5 6] in the example above). It 
> > > also works passing NULL for is_local.
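> > > In code, that is roughly the following (just a sketch, assuming pc has
> > > already been set to PCASM; error checking omitted):
> > >
> > >   IS       is[2];
> > >   PetscInt idx0[] = {0,1,2,3}, idx1[] = {3,4,5,6};
> > >   ISCreateGeneral(PETSC_COMM_SELF,4,idx0,PETSC_COPY_VALUES,&is[0]);
> > >   ISCreateGeneral(PETSC_COMM_SELF,4,idx1,PETSC_COPY_VALUES,&is[1]);
> > >   /* same overlapping index sets for both is and is_local ... */
> > >   PCASMSetLocalSubdomains(pc,2,is,is);
> > >   /* ... or pass NULL for is_local, which also works:         */
> > >   /* PCASMSetLocalSubdomains(pc,2,is,NULL);                   */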
> >
> >   Great.
> > >
> > > I assume that the purpose of the check mentioned above is to ensure that 
> > > every grid point is assigned to exactly one processor, which is needed by 
> > > whatever interprocess scattering goes on in the implementation. Also, I 
> > > assume that augmenting the domain definition with an explicit 
> > > specification of the way domains are distributed over processes allows 
> > > for more controllable use of PC_ASM_RESTRICT, with all its attractive 
> > > properties.
> > >
> > > Anyhow, Barry's advice previously in this thread works locally (for one 
> > > test case) if you remove the check above, but the current implementation 
> > > enforces something related to what Matt describes, which might be overly 
> > > restrictive if multiple domains share a process.  The impression I got 
> > > initially from the documentation was that if one uses PC_ASM_BASIC, the 
> > > choice of is_local should only influence the details of the communication 
> > > pattern, not (in exact arithmetic, with process-count-independent 
> > > subsolves) the preconditioner being defined.
> >
> >    The “communication pattern” does determine the preconditioner being 
> > defined.
> >
> >     The introduction of is_local[] broke the clean usage of PC_ASM_* that
> > used to exist, so your confusion is our fault, not yours.
> > >
> > >
> > > For regular grids this all seems pretty pathological (in practice I 
> > > imagine people want to use symmetric overlaps,
> >
> >    As I said above, you are using "symmetric overlaps". It just looks
> > "unsymmetric" if you introduce this concept of "non-overlapping initial
> > subdomains", which is an unneeded, harmful concept.
> >
> > I really do not understand what you are saying here. If you want to do
> > RASM, then you must be able to
> > tell the difference between your domain and the overlap. That is all that
> > the distinction between is and
> > is_local does. Your original implementation of RASM was wanting because it
> > merely dropped communication.
> > If you have several domains on a process, then this is not RASM.
> 
>    Correct.
> 
>    The problem is that your extra test prevented perfectly valid ASM
> configurations. You are right that these "perfectly valid ASM configurations"
> do not have an RASM form (if we define RASM as strictly requiring
> non-overlapping domains that then get extended, where either the restriction
> or the prolongation skips the overlap region), but they are still valid ASM
> preconditioners, so they shouldn't error out just because they cannot be used
> for RASM.
> 
>    Barry
> 
> Note that I am calling any collection of domains, which may or may not
> overlap and which may have a "single grid point" overlap or more, valid ASM
> configurations, because they are (i.e. the set of valid ASM configurations is
> larger than the set of RASM configurations). So my valid ASM configurations
> are more than just domains obtained by taking a non-overlapping set of domains
> and then "growing the domains"; I wanted the code to support this, hence I
> removed the extra test.
> 
> That is fine. We must make sure PETSc properly throws an error if someone 
> selects PC_ASM_RESTRICT.

  I’m not sure. It depends on your definition of PC_ASM_RESTRICT.
> 
> 
>    Matt
>  
> 
> >
> >   Matt
> >
> >
> >    Barry
> >
> >
> > > and I assume that one domain per node is the most common use case), but I 
> > > could imagine it being more of a real concern when working with 
> > > unstructured grids.
> > >
> > >>
> > >>    Barry
> > >>
> > >>
> > >>
> > >>>   Matt
> > >>>
> > >>>> is[0] <-- 0 1 2 3
> > >>>> is[1] <-- 3 4 5 6
> > >>>> is_local[0] <-- 0 1 2 3
> > >>>> is_local[1] <-- 3 4 5 6
> > >>> Or you can pass NULL for is_local and use PCASMSetOverlap(pc,0);
> > >>>
> > >>>   Barry
> > >>>
> > >>>
> > >>> Note that is_local[] doesn’t have to be non-overlapping or anything.
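> > >>> In code that is roughly (an untested sketch, with is[] built as above
> > >>> and pc already of type PCASM):
> > >>>
> > >>>   PCASMSetLocalSubdomains(pc,2,is,NULL); /* only the overlapping is[] */
> > >>>   PCASMSetOverlap(pc,0);                 /* don't grow the domains further */
> > >>>   PCASMSetType(pc,PC_ASM_BASIC);         /* the standard additive Schwarz you are using */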
> > >>>
> > >>>
> > >>> On Sep 16, 2014, at 10:48 AM, Patrick Sanan <[email protected]> 
> > >>> wrote:
> > >>>
> > >>>> For the purposes of reproducing an example from a paper, I'd like to 
> > >>>> use PCASM with subdomains which 'overlap minimally' (though this is 
> > >>>> probably never a good idea in practice).
> > >>>>
> > >>>> In one dimension with 7 unknowns and 2 domains, this might look like
> > >>>>
> > >>>> 0  1  2  3  4  5  6  (unknowns)
> > >>>> ------------          (first subdomain  : 0 .. 3)
> > >>>>         -----------  (second subdomain : 3 .. 6)
> > >>>>
> > >>>> The subdomains share only a single grid point, which differs from the 
> > >>>> way PCASM is used in most of the examples.
> > >>>>
> > >>>> In two dimensions, minimally overlapping rectangular subdomains would
> > >>>> overlap in exactly one row or column of the grid. Thus, for example,
> > >>>> if the grid unknowns were
> > >>>>
> > >>>> 0  1  2  3  4  5  |
> > >>>> 6  7  8  9  10 11 | |
> > >>>> 12 13 14 15 16 17   |
> > >>>>         --------
> > >>>> -----------
> > >>>>
> > >>>> then one minimally-overlapping set of 4 subdomains would be
> > >>>> 0 1 2 3 6 7 8 9
> > >>>> 3 4 5 9 10 11
> > >>>> 6 7 8 9 12 13 14 15
> > >>>> 9 10 11 15 16 17
> > >>>> as suggested by the dashes and pipes above. The subdomains only 
> > >>>> overlap by a single row or column of the grid.
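> > >>>> In code, I imagine setting this up would look something like the
> > >>>> following (just a sketch; I am unsure what to supply for is_local,
> > >>>> and error checking is omitted):
> > >>>>
> > >>>>   IS       is[4];
> > >>>>   PetscInt d0[] = {0,1,2,3,6,7,8,9};
> > >>>>   PetscInt d1[] = {3,4,5,9,10,11};
> > >>>>   PetscInt d2[] = {6,7,8,9,12,13,14,15};
> > >>>>   PetscInt d3[] = {9,10,11,15,16,17};
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,8,d0,PETSC_COPY_VALUES,&is[0]);
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,6,d1,PETSC_COPY_VALUES,&is[1]);
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,8,d2,PETSC_COPY_VALUES,&is[2]);
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,6,d3,PETSC_COPY_VALUES,&is[3]);
> > >>>>   PCASMSetLocalSubdomains(pc,4,is,NULL); /* or some choice of is_local? */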
> > >>>>
> > >>>> My question is whether and how one can use the PCASM interface to work
> > >>>> with these sorts of decompositions (it's fine for my purposes to use a
> > >>>> single MPI process). In particular, I don't quite understand if it should
> > >>>> be possible to define these decompositions by correctly providing is
> > >>>> and is_local arguments to PCASMSetLocalSubdomains.
> > >>>>
> > >>>> I have gotten code to run by defining the is_local entries to be subsets
> > >>>> of the is entries which define a partition of the global degrees of
> > >>>> freedom*, but I'm not certain that this was the correct choice, as it
> > >>>> appears to produce an unsymmetric preconditioner for a symmetric
> > >>>> system when I use direct subdomain solves and the 'basic' type for
> > >>>> PCASM.
> > >>>>
> > >>>> * For example, in the 1D example above this would correspond to
> > >>>> is[0] <-- 0 1 2 3
> > >>>> is[1] <-- 3 4 5 6
> > >>>> is_local[0] <-- 0 1 2
> > >>>> is_local[1] <-- 3 4 5 6
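> > >>>> In terms of the actual calls, that was roughly (again just a sketch;
> > >>>> error checking omitted):
> > >>>>
> > >>>>   IS       is[2], is_local[2];
> > >>>>   PetscInt gidx0[] = {0,1,2,3}, gidx1[] = {3,4,5,6};
> > >>>>   PetscInt lidx0[] = {0,1,2},   lidx1[] = {3,4,5,6};
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,4,gidx0,PETSC_COPY_VALUES,&is[0]);
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,4,gidx1,PETSC_COPY_VALUES,&is[1]);
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,3,lidx0,PETSC_COPY_VALUES,&is_local[0]);
> > >>>>   ISCreateGeneral(PETSC_COMM_SELF,4,lidx1,PETSC_COPY_VALUES,&is_local[1]);
> > >>>>   PCASMSetLocalSubdomains(pc,2,is,is_local);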
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>
> > >>>
> > >>>
> >
> >
> >
> >
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
