Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
On Tue, Mar 1, 2016 at 3:03 PM, Boyce Griffith 
wrote:

>
> On Mar 1, 2016, at 2:41 PM, Mohammad Mirzadeh  wrote:
>
>
>
> On Tue, Mar 1, 2016 at 2:07 PM, Boyce Griffith 
> wrote:
>
>>
>> On Mar 1, 2016, at 12:06 PM, Mohammad Mirzadeh 
>> wrote:
>>
>> Nice discussion.
>>
>>
>> On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith 
>> wrote:
>>
>>>
>>> On Mar 1, 2016, at 9:59 AM, Mark Adams  wrote:
>>>
>>>
>>>
>>> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith 
>>> wrote:
>>>

 On Feb 29, 2016, at 5:36 PM, Mark Adams  wrote:


>> GAMG is used for AMR problems like this a lot in BISICLES.
>>
>
> Thanks for the reference. However, a quick look at their paper
> suggests they are using a finite volume discretization which should be
> symmetric and avoid all the shenanigans I'm going through!
>

 No, they are not symmetric.  FV is even worse than vertex centered
 methods.  The BCs and the C-F interfaces add non-symmetry.


 If you use a different discretization, it is possible to make the c-f
 interface discretization symmetric --- but symmetry appears to come at a
 cost of the reduction in the formal order of accuracy in the flux along the
 c-f interface. I can probably dig up some code that would make it easy to
 compare.

>>>
>>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
>>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
>>> perhaps this IS symmetric? Toby might know.
>>>
>>>
>>> If you are talking about solving Poisson on a composite grid, then
>>> refluxing and summing up fluxes are probably the same procedure.
>>>
>>
>> I am not familiar with the terminology used here. What does the refluxing
>> mean?
>>
>>
>>>
>>> Users of these kinds of discretizations usually want to use the
>>> conservative divergence at coarse-fine interfaces, and so the main question
>>> is how to set up the viscous/diffusive flux stencil at coarse-fine
>>> interfaces (or, equivalently, the stencil for evaluating ghost cell values
>>> at coarse-fine interfaces). It is possible to make the overall
>>> discretization symmetric if you use a particular stencil for the flux
>>> computation. I think this paper (
>>> http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
>>> is one place to look. (This stuff is related to "mimetic finite difference"
>>> discretizations of Poisson.) This coarse-fine interface discretization
>>> winds up being symmetric (although possibly only w.r.t. a weighted inner
>>> product --- I can't remember the details), but the fluxes are only
>>> first-order accurate at coarse-fine interfaces.
>>>
>>>
>> Right. I think if the discretization is conservative, i.e. discretizing
>> div of grad, and is compact, i.e. only involves neighboring cells sharing a
>> common face, then it is possible to construct symmetric discretization. An
>> example, that I have used before in other contexts, is described here:
>> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>>
>> An interesting observation is that although the fluxes are only first-order
>> accurate, the final solution of the linear system exhibits super-convergence,
>> i.e. it is second-order accurate, even in L_inf. Similar behavior is
>> observed with non-conservative, node-based finite difference
>> discretizations.
>>
>>
>> I don't know about that --- check out Table 1 in the paper you cite,
>> which seems to indicate first-order convergence in all norms.
>>
>
> Sorry, my bad. That was the original work, which was later extended to
> second order (cf. section 3.3) in doi:10.1016/j.compfluid.2005.01.006
> by using flux weighting in the transverse direction.
>
>
> I don't follow the argument about why it is a bad thing for the fine
> fluxes to have different values than the overlying coarse flux, but this
> probably works out to be the same as the Ewing discretization in certain
> cases (although possibly only in 2D with a refinement ratio of 2).
>
>
In general, you can show that by weighting fluxes in the transverse direction
(i.e. using the coarse-grid flux), you eliminate the leading-order
truncation error (which turns out to be O(1)) in that direction. It seems
to me this is the reason you go from first- to second-order accuracy.
Unfortunately this is not pointed out in the paper, and while I cannot speak
for the authors, I think what motivated them to use the coarse flux was to
keep the normal velocity continuous across the C-F interface after the
projection step.
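
(As a purely illustrative sketch of what "using the coarse-grid flux" can mean, my reading rather than necessarily the exact scheme in the cited paper: with a refinement ratio of 2 in 2D, if F_c is the flux through the coarse face at the C-F interface, each of the two overlying fine faces is assigned

    F_f1 = F_f2 = F_c / 2,

so the fine-face fluxes sum exactly to the coarse flux and the normal velocity stays continuous across the interface after the projection.)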

It looks like throughout the AMR literature people have encountered
this super-convergence over and over again, and while most of the
"justifications" make sense in an intuitive way, I don't think I 

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Boyce Griffith

> On Mar 1, 2016, at 2:41 PM, Mohammad Mirzadeh  wrote:
> 
> 
> 
> On Tue, Mar 1, 2016 at 2:07 PM, Boyce Griffith  > wrote:
> 
>> On Mar 1, 2016, at 12:06 PM, Mohammad Mirzadeh > > wrote:
>> 
>> Nice discussion.
>> 
>> 
>> On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith > > wrote:
>> 
>>> On Mar 1, 2016, at 9:59 AM, Mark Adams >> > wrote:
>>> 
>>> 
>>> 
>>> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith >> > wrote:
>>> 
 On Feb 29, 2016, at 5:36 PM, Mark Adams > wrote:
 
 
 GAMG is used for AMR problems like this a lot in BISICLES.
 
 Thanks for the reference. However, a quick look at their paper suggests 
 they are using a finite volume discretization which should be symmetric 
 and avoid all the shenanigans I'm going through! 
 
 No, they are not symmetric.  FV is even worse than vertex centered 
 methods.  The BCs and the C-F interfaces add non-symmetry.
>>> 
>>> 
>>> If you use a different discretization, it is possible to make the c-f 
>>> interface discretization symmetric --- but symmetry appears to come at a 
>>> cost of the reduction in the formal order of accuracy in the flux along the 
>>> c-f interface. I can probably dig up some code that would make it easy to 
>>> compare.
>>> 
>>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with 
>>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly, 
>>> perhaps this IS symmetric? Toby might know.
>> 
>> If you are talking about solving Poisson on a composite grid, then refluxing 
>> and summing up fluxes are probably the same procedure.
>> 
>> I am not familiar with the terminology used here. What does the refluxing 
>> mean?
>>  
>> 
>> Users of these kinds of discretizations usually want to use the conservative 
>> divergence at coarse-fine interfaces, and so the main question is how to set 
>> up the viscous/diffusive flux stencil at coarse-fine interfaces (or, 
>> equivalently, the stencil for evaluating ghost cell values at coarse-fine 
>> interfaces). It is possible to make the overall discretization symmetric if 
>> you use a particular stencil for the flux computation. I think this paper 
>> (http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
>>  is one place to look. (This stuff is related to "mimetic finite difference" 
>> discretizations of Poisson.) This coarse-fine interface discretization winds 
>> up being symmetric (although possibly only w.r.t. a weighted inner product 
>> --- I can't remember the details), but the fluxes are only first-order 
>> accurate at coarse-fine interfaces.
>> 
>> 
>> Right. I think if the discretization is conservative, i.e. discretizing div 
>> of grad, and is compact, i.e. only involves neighboring cells sharing a 
>> common face, then it is possible to construct symmetric discretization. An 
>> example, that I have used before in other contexts, is described here: 
>> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf 
>> 
>> 
>> An interesting observation is that although the fluxes are only first-order 
>> accurate, the final solution of the linear system exhibits super-convergence, 
>> i.e. it is second-order accurate, even in L_inf. Similar behavior is 
>> observed with non-conservative, node-based finite difference 
>> discretizations. 
> 
> I don't know about that --- check out Table 1 in the paper you cite, which 
> seems to indicate first-order convergence in all norms.
> 
> Sorry, my bad. That was the original work, which was later extended to 
> second order (cf. section 3.3) in doi:10.1016/j.compfluid.2005.01.006 
> by using flux weighting in the transverse direction.

I don't follow the argument about why it is a bad thing for the fine fluxes to 
have different values than the overlying coarse flux, but this probably works 
out to be the same as the Ewing discretization in certain cases (although 
possibly only in 2D with a refinement ratio of 2).

> A (discrete) Green's functions argument explains why one gets higher-order 
> convergence despite localized reductions in accuracy along the coarse-fine 
> interface --- it has to do with the fact that errors from individual grid 
> locations do not have that large of an effect on the solution, and these c-f 
> interface errors are concentrated along on a lower dimensional surface in the 
> domain.
> 
> This intuitively makes sense and in fact when you plot the error, you do 

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Matthew Knepley
On Tue, Mar 1, 2016 at 1:48 PM, Mohammad Mirzadeh 
wrote:

> On Tue, Mar 1, 2016 at 1:15 PM, Jed Brown  wrote:
>
>> Mohammad Mirzadeh  writes:
>>
>> > I am not familiar with the terminology used here. What does the
>> refluxing
>> > mean?
>>
>> The Chombo/BoxLib family of methods evaluate fluxes between coarse grid
>> cells overlaying refined grids, then later visit the fine grids and
>> reevaluate those fluxes.  The correction needs to be propagated back to
>> the adjoining coarse grid cell to maintain conservation.  It's an
>> implementation detail that they call refluxing.
>>
>
> Thanks for clarification.
>
>
>>
>> > Right. I think if the discretization is conservative, i.e. discretizing
>> div
>> > of grad, and is compact, i.e. only involves neighboring cells sharing a
>> > common face, then it is possible to construct symmetric discretization.
>> An
>> > example, that I have used before in other contexts, is described here:
>> > http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>>
>> It's unfortunate that this paper repeats some unfounded multigrid
>> slander and then basically claims to have uniform convergence using
>> incomplete Cholesky with CG.  In reality, incomplete Cholesky is
>> asymptotically no better than Jacobi.
>>
>> > An interesting observation is although the fluxes are only first order
>> > accurate, the final solution to the linear system exhibits super
>> > convergence, i.e. second-order accurate, even in L_inf.
>>
>> Perhaps for aligned coefficients; definitely not for unaligned
>> coefficients.
>>
>
> Could you elaborate what you mean by aligned/unaligned coefficients? Do
> you mean anisotropic diffusion coefficient?
>

Jed (I think) means coefficients where the variation is aligned to the
grid. For example, where coefficient jumps or large variation
happens on cell boundaries.

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
On Tue, Mar 1, 2016 at 1:15 PM, Jed Brown  wrote:

> Mohammad Mirzadeh  writes:
>
> > I am not familiar with the terminology used here. What does the refluxing
> > mean?
>
> The Chombo/BoxLib family of methods evaluate fluxes between coarse grid
> cells overlaying refined grids, then later visit the fine grids and
> reevaluate those fluxes.  The correction needs to be propagated back to
> the adjoining coarse grid cell to maintain conservation.  It's an
> implementation detail that they call refluxing.
>

Thanks for clarification.


>
> > Right. I think if the discretization is conservative, i.e. discretizing
> div
> > of grad, and is compact, i.e. only involves neighboring cells sharing a
> > common face, then it is possible to construct symmetric discretization.
> An
> > example, that I have used before in other contexts, is described here:
> > http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>
> It's unfortunate that this paper repeats some unfounded multigrid
> slander and then basically claims to have uniform convergence using
> incomplete Cholesky with CG.  In reality, incomplete Cholesky is
> asymptotically no better than Jacobi.
>
> > An interesting observation is although the fluxes are only first order
> > accurate, the final solution to the linear system exhibits super
> > convergence, i.e. second-order accurate, even in L_inf.
>
> Perhaps for aligned coefficients; definitely not for unaligned
> coefficients.
>

Could you elaborate what you mean by aligned/unaligned coefficients? Do you
mean anisotropic diffusion coefficient?


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
On Tue, Mar 1, 2016 at 2:07 PM, Boyce Griffith 
wrote:

>
> On Mar 1, 2016, at 12:06 PM, Mohammad Mirzadeh  wrote:
>
> Nice discussion.
>
>
> On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith 
> wrote:
>
>>
>> On Mar 1, 2016, at 9:59 AM, Mark Adams  wrote:
>>
>>
>>
>> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith 
>> wrote:
>>
>>>
>>> On Feb 29, 2016, at 5:36 PM, Mark Adams  wrote:
>>>
>>>
> GAMG is used for AMR problems like this a lot in BISICLES.
>

 Thanks for the reference. However, a quick look at their paper suggests
 they are using a finite volume discretization which should be symmetric and
 avoid all the shenanigans I'm going through!

>>>
>>> No, they are not symmetric.  FV is even worse than vertex centered
>>> methods.  The BCs and the C-F interfaces add non-symmetry.
>>>
>>>
>>> If you use a different discretization, it is possible to make the c-f
>>> interface discretization symmetric --- but symmetry appears to come at a
>>> cost of the reduction in the formal order of accuracy in the flux along the
>>> c-f interface. I can probably dig up some code that would make it easy to
>>> compare.
>>>
>>
>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
>> perhaps this IS symmetric? Toby might know.
>>
>>
>> If you are talking about solving Poisson on a composite grid, then
>> refluxing and summing up fluxes are probably the same procedure.
>>
>
> I am not familiar with the terminology used here. What does the refluxing
> mean?
>
>
>>
>> Users of these kinds of discretizations usually want to use the
>> conservative divergence at coarse-fine interfaces, and so the main question
>> is how to set up the viscous/diffusive flux stencil at coarse-fine
>> interfaces (or, equivalently, the stencil for evaluating ghost cell values
>> at coarse-fine interfaces). It is possible to make the overall
>> discretization symmetric if you use a particular stencil for the flux
>> computation. I think this paper (
>> http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
>> is one place to look. (This stuff is related to "mimetic finite difference"
>> discretizations of Poisson.) This coarse-fine interface discretization
>> winds up being symmetric (although possibly only w.r.t. a weighted inner
>> product --- I can't remember the details), but the fluxes are only
>> first-order accurate at coarse-fine interfaces.
>>
>>
> Right. I think if the discretization is conservative, i.e. discretizing
> div of grad, and is compact, i.e. only involves neighboring cells sharing a
> common face, then it is possible to construct symmetric discretization. An
> example, that I have used before in other contexts, is described here:
> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>
> An interesting observation is that although the fluxes are only first-order
> accurate, the final solution of the linear system exhibits super-convergence,
> i.e. it is second-order accurate, even in L_inf. Similar behavior is
> observed with non-conservative, node-based finite difference
> discretizations.
>
>
> I don't know about that --- check out Table 1 in the paper you cite, which
> seems to indicate first-order convergence in all norms.
>

Sorry, my bad. That was the original work, which was later extended to
second order (cf. section 3.3) in doi:10.1016/j.compfluid.2005.01.006
by using flux weighting in the transverse direction.


>
> The symmetric discretization in the Ewing paper is only slightly more
> complicated, but will give full 2nd-order accuracy in L-1 (and maybe also
> L-2 and L-infinity). One way to think about it is that you are using simple
> linear interpolation at coarse-fine interfaces (3-point interpolation in
> 2D, 4-point interpolation in 3D) using a stencil that is symmetric with
> respect to the center of the coarse grid cell.
>
>
I'll look into that paper. One can never get enough of ideas for C-F
treatments in AMR applications :).


> A (discrete) Green's functions argument explains why one gets higher-order
> convergence despite localized reductions in accuracy along the coarse-fine
> interface --- it has to do with the fact that errors from individual grid
> locations do not have that large of an effect on the solution, and these
> c-f interface errors are concentrated along on a lower dimensional surface
> in the domain.
>

This intuitively makes sense and in fact when you plot the error, you do
see spikes at the C-F interfaces. Do you know of a resource that does a
rigorous analysis of the C-F treatment on the solution error?


>
> -- Boyce
>


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Griffith, Boyce Eugene

On Mar 1, 2016, at 10:56 AM, Jed Brown 
> wrote:

Boyce Griffith > writes:
Jed, can you also do this for Stokes?  It seems like something like
RT0 is the right place to start.

See, for example, Arnold, Falk, and Winther's 2007 paper on mixed FEM
for elasticity with weakly imposed symmetry.  It's the usual H(div)
methodology and should apply equally well to Stokes.  I'm not aware of
any analysis or results of choosing quadrature to eliminate flux terms
in these discretizations.

Two papers that are along the direction that I have in mind are:

http://onlinelibrary.wiley.com/doi/10.1002/fld.1566/abstract
http://onlinelibrary.wiley.com/doi/10.1002/fld.1723/abstract

I would love to know how to do this kind of thing on a SAMR or octree grid.

-- Boyce


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Boyce Griffith

> On Mar 1, 2016, at 12:06 PM, Mohammad Mirzadeh  wrote:
> 
> Nice discussion.
> 
> 
> On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith  > wrote:
> 
>> On Mar 1, 2016, at 9:59 AM, Mark Adams > > wrote:
>> 
>> 
>> 
>> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith > > wrote:
>> 
>>> On Feb 29, 2016, at 5:36 PM, Mark Adams >> > wrote:
>>> 
>>> 
>>> GAMG is used for AMR problems like this a lot in BISICLES.
>>> 
>>> Thanks for the reference. However, a quick look at their paper suggests 
>>> they are using a finite volume discretization which should be symmetric and 
>>> avoid all the shenanigans I'm going through! 
>>> 
>>> No, they are not symmetric.  FV is even worse than vertex centered methods. 
>>>  The BCs and the C-F interfaces add non-symmetry.
>> 
>> 
>> If you use a different discretization, it is possible to make the c-f 
>> interface discretization symmetric --- but symmetry appears to come at a 
>> cost of the reduction in the formal order of accuracy in the flux along the 
>> c-f interface. I can probably dig up some code that would make it easy to 
>> compare.
>> 
>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with 
>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly, 
>> perhaps this IS symmetric? Toby might know.
> 
> If you are talking about solving Poisson on a composite grid, then refluxing 
> and summing up fluxes are probably the same procedure.
> 
> I am not familiar with the terminology used here. What does the refluxing 
> mean?
>  
> 
> Users of these kinds of discretizations usually want to use the conservative 
> divergence at coarse-fine interfaces, and so the main question is how to set 
> up the viscous/diffusive flux stencil at coarse-fine interfaces (or, 
> equivalently, the stencil for evaluating ghost cell values at coarse-fine 
> interfaces). It is possible to make the overall discretization symmetric if 
> you use a particular stencil for the flux computation. I think this paper 
> (http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
>  is one place to look. (This stuff is related to "mimetic finite difference" 
> discretizations of Poisson.) This coarse-fine interface discretization winds 
> up being symmetric (although possibly only w.r.t. a weighted inner product 
> --- I can't remember the details), but the fluxes are only first-order 
> accurate at coarse-fine interfaces.
> 
> 
> Right. I think if the discretization is conservative, i.e. discretizing div 
> of grad, and is compact, i.e. only involves neighboring cells sharing a 
> common face, then it is possible to construct symmetric discretization. An 
> example, that I have used before in other contexts, is described here: 
> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf 
> 
> 
> An interesting observation is that although the fluxes are only first-order 
> accurate, the final solution of the linear system exhibits super-convergence, 
> i.e. it is second-order accurate, even in L_inf. Similar behavior is observed with 
> non-conservative, node-based finite difference discretizations. 

I don't know about that --- check out Table 1 in the paper you cite, which 
seems to indicate first-order convergence in all norms.

The symmetric discretization in the Ewing paper is only slightly more 
complicated, but will give full 2nd-order accuracy in L-1 (and maybe also L-2 
and L-infinity). One way to think about it is that you are using simple linear 
interpolation at coarse-fine interfaces (3-point interpolation in 2D, 4-point 
interpolation in 3D) using a stencil that is symmetric with respect to the 
center of the coarse grid cell.

A (discrete) Green's functions argument explains why one gets higher-order 
convergence despite localized reductions in accuracy along the coarse-fine 
interface --- it has to do with the fact that errors from individual grid 
locations do not have that large of an effect on the solution, and these c-f 
interface errors are concentrated on a lower-dimensional surface in the 
domain.

-- Boyce

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Jed Brown
Mohammad Mirzadeh  writes:

> I am not familiar with the terminology used here. What does the refluxing
> mean?

The Chombo/BoxLib family of methods evaluate fluxes between coarse grid
cells overlaying refined grids, then later visit the fine grids and
reevaluate those fluxes.  The correction needs to be propagated back to
the adjoining coarse grid cell to maintain conservation.  It's an
implementation detail that they call refluxing.
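
(A minimal sketch of the correction being described, assuming face-integrated fluxes so that no extra area weighting is needed; this is illustrative code, not actual Chombo/BoxLib code:)

    /* Reflux correction for one coarse face covered by nf fine faces. */
    double reflux_correction(double F_coarse, const double F_fine[], int nf)
    {
      double F_fine_sum = 0.0;
      for (int i = 0; i < nf; ++i)
        F_fine_sum += F_fine[i];
      /* The coarse cell's divergence originally used F_coarse; replacing it
         by the sum of the fine fluxes restores conservation across the
         coarse-fine interface.                                            */
      return F_fine_sum - F_coarse;
    }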

> Right. I think if the discretization is conservative, i.e. discretizing div
> of grad, and is compact, i.e. only involves neighboring cells sharing a
> common face, then it is possible to construct symmetric discretization. An
> example, that I have used before in other contexts, is described here:
> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf

It's unfortunate that this paper repeats some unfounded multigrid
slander and then basically claims to have uniform convergence using
incomplete Cholesky with CG.  In reality, incomplete Cholesky is
asymptotically no better than Jacobi.

> An interesting observation is that although the fluxes are only first-order
> accurate, the final solution of the linear system exhibits super-convergence,
> i.e. it is second-order accurate, even in L_inf. 

Perhaps for aligned coefficients; definitely not for unaligned
coefficients.




Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
Nice discussion.


On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith 
wrote:

>
> On Mar 1, 2016, at 9:59 AM, Mark Adams  wrote:
>
>
>
> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith 
> wrote:
>
>>
>> On Feb 29, 2016, at 5:36 PM, Mark Adams  wrote:
>>
>>
 GAMG is used for AMR problems like this a lot in BISICLES.

>>>
>>> Thanks for the reference. However, a quick look at their paper suggests
>>> they are using a finite volume discretization which should be symmetric and
>>> avoid all the shenanigans I'm going through!
>>>
>>
>> No, they are not symmetric.  FV is even worse than vertex centered
>> methods.  The BCs and the C-F interfaces add non-symmetry.
>>
>>
>> If you use a different discretization, it is possible to make the c-f
>> interface discretization symmetric --- but symmetry appears to come at a
>> cost of the reduction in the formal order of accuracy in the flux along the
>> c-f interface. I can probably dig up some code that would make it easy to
>> compare.
>>
>
> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
> perhaps this IS symmetric? Toby might know.
>
>
> If you are talking about solving Poisson on a composite grid, then
> refluxing and summing up fluxes are probably the same procedure.
>

I am not familiar with the terminology used here. What does the refluxing
mean?


>
> Users of these kinds of discretizations usually want to use the
> conservative divergence at coarse-fine interfaces, and so the main question
> is how to set up the viscous/diffusive flux stencil at coarse-fine
> interfaces (or, equivalently, the stencil for evaluating ghost cell values
> at coarse-fine interfaces). It is possible to make the overall
> discretization symmetric if you use a particular stencil for the flux
> computation. I think this paper (
> http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
> is one place to look. (This stuff is related to "mimetic finite difference"
> discretizations of Poisson.) This coarse-fine interface discretization
> winds up being symmetric (although possibly only w.r.t. a weighted inner
> product --- I can't remember the details), but the fluxes are only
> first-order accurate at coarse-fine interfaces.
>
>
Right. I think if the discretization is conservative, i.e. it discretizes div
of grad, and is compact, i.e. it only involves neighboring cells sharing a
common face, then it is possible to construct a symmetric discretization. An
example, which I have used before in other contexts, is described here:
http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf

An interesting observation is that although the fluxes are only first-order
accurate, the final solution of the linear system exhibits super-convergence,
i.e. it is second-order accurate, even in L_inf. Similar behavior is
observed with non-conservative, node-based finite difference
discretizations.

-- Boyce
>


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Jed Brown
Boyce Griffith  writes:
> Jed, can you also do this for Stokes?  It seems like something like
> RT0 is the right place to start.

See, for example, Arnold, Falk, and Winther's 2007 paper on mixed FEM
for elasticity with weakly imposed symmetry.  It's the usual H(div)
methodology and should apply equally well to Stokes.  I'm not aware of
any analysis or results of choosing quadrature to eliminate flux terms
in these discretizations.




Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Boyce Griffith

> On Mar 1, 2016, at 10:08 AM, Jed Brown  wrote:
> 
> Mark Adams  writes:
> 
>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
>> perhaps this IS symmetric? 
> 
> Depends on scaling.  Refluxing versus evaluating fluxes once on the fine
> faces should give the same result.  Note that you can make a symmetric
> FV discretization (for elliptic problems) by writing it as a mixed FE
> method and choosing a quadrature so that fluxes can be eliminated.

Jed, can you also do this for Stokes?  It seems like something like RT0 is the 
right place to start.

Thanks,

-- Boyce

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Boyce Griffith

> On Mar 1, 2016, at 9:59 AM, Mark Adams  wrote:
> 
> 
> 
> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith  > wrote:
> 
>> On Feb 29, 2016, at 5:36 PM, Mark Adams > > wrote:
>> 
>> 
>> GAMG is used for AMR problems like this a lot in BISICLES.
>> 
>> Thanks for the reference. However, a quick look at their paper suggests they 
>> are using a finite volume discretization which should be symmetric and avoid 
>> all the shenanigans I'm going through!
>> 
>> No, they are not symmetric.  FV is even worse than vertex centered methods.  
>> The BCs and the C-F interfaces add non-symmetry.
> 
> 
> If you use a different discretization, it is possible to make the c-f 
> interface discretization symmetric --- but symmetry appears to come at a cost 
> of the reduction in the formal order of accuracy in the flux along the c-f 
> interface. I can probably dig up some code that would make it easy to compare.
> 
> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with 
> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly, 
> perhaps this IS symmetric? Toby might know.

If you are talking about solving Poisson on a composite grid, then refluxing 
and summing up fluxes are probably the same procedure.

Users of these kinds of discretizations usually want to use the conservative 
divergence at coarse-fine interfaces, and so the main question is how to set up 
the viscous/diffusive flux stencil at coarse-fine interfaces (or, equivalently, 
the stencil for evaluating ghost cell values at coarse-fine interfaces). It is 
possible to make the overall discretization symmetric if you use a particular 
stencil for the flux computation. I think this paper 
(http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
 is one place to look. (This stuff is related to "mimetic finite difference" 
discretizations of Poisson.) This coarse-fine interface discretization winds up 
being symmetric (although possibly only w.r.t. a weighted inner product --- I 
can't remember the details), but the fluxes are only first-order accurate at 
coarse-fine interfaces.

-- Boyce

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Jed Brown
Mark Adams  writes:

> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
> perhaps this IS symmetric? 

Depends on scaling.  Refluxing versus evaluating fluxes once on the fine
faces should give the same result.  Note that you can make a symmetric
FV discretization (for elliptic problems) by writing it as a mixed FE
method and choosing a quadrature so that fluxes can be eliminated.
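
(A minimal sketch of that equivalence, in standard mixed-FEM notation and under the usual assumptions: write -div(K grad p) = f as the first-order system u = -K grad p, div u = f, and discretize with lowest-order Raviart-Thomas elements, i.e. find (u_h, p_h) such that

    (K^{-1} u_h, v_h) - (p_h, div v_h) = 0   for all v_h,
    (div u_h, q_h) = (f, q_h)                for all q_h.

If the velocity mass matrix (K^{-1} u_h, v_h) is assembled with a quadrature rule that makes it block-diagonal, the flux unknowns can be eliminated face by face, leaving a symmetric, cell-centered system for p_h that looks like a finite-volume stencil.)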




Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mark Adams
On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith 
wrote:

>
> On Feb 29, 2016, at 5:36 PM, Mark Adams  wrote:
>
>
>>> GAMG is used for AMR problems like this a lot in BISICLES.
>>>
>>
>> Thanks for the reference. However, a quick look at their paper suggests
>> they are using a finite volume discretization which should be symmetric and
>> avoid all the shenanigans I'm going through!
>>
>
> No, they are not symmetric.  FV is even worse than vertex centered
> methods.  The BCs and the C-F interfaces add non-symmetry.
>
>
> If you use a different discretization, it is possible to make the c-f
> interface discretization symmetric --- but symmetry appears to come at a
> cost of the reduction in the formal order of accuracy in the flux along the
> c-f interface. I can probably dig up some code that would make it easy to
> compare.
>

I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
perhaps this IS symmetric? Toby might know.


>
> -- Boyce
>


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Jed Brown
Barry Smith  writes:
>>4-2) Another observation is that the true residual stagnates way
>>earlier which I assume is an indication that the RHS is in fact
>>not in the range of A.
>
>You can hope this is the case.
>
>Of course one cannot really know if the residual is stagnating due
>to an inconsistent right hand side or for some other reason. But if
>it stagnates at 10^-4 it is probably due to inconsistent right hand
>side if it stagnates at 10^-12 the right hand side is probably
>consistent. If it stagnates at 10^-8 then it could be due to
>inconsistent rhs and or some other reason.

If the true residual stagnates while the preconditioned residual
continues to converge, it may be that the preconditioner is singular or
nearly so.  This often happens if you use a black-box method for a
saddle point problem, for example.
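
(One way to watch both quantities during a run is the standard monitoring option -ksp_monitor_true_residual, which prints the preconditioned and true residual norms side by side at every iteration, together with -ksp_converged_reason.)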




Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-29 Thread Boyce Griffith

> On Feb 29, 2016, at 5:36 PM, Mark Adams  wrote:
> 
> 
> GAMG is used for AMR problems like this a lot in BISICLES.
> 
> Thanks for the reference. However, a quick look at their paper suggests they 
> are using a finite volume discretization which should be symmetric and avoid 
> all the shenanigans I'm going through!
> 
> No, they are not symmetric.  FV is even worse than vertex centered methods.  
> The BCs and the C-F interfaces add non-symmetry.


If you use a different discretization, it is possible to make the c-f interface 
discretization symmetric --- but symmetry appears to come at the cost of a 
reduction in the formal order of accuracy of the flux along the c-f interface. 
I can probably dig up some code that would make it easy to compare.

-- Boyce

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-29 Thread Mark Adams
>
>
>> GAMG is used for AMR problems like this a lot in BISICLES.
>>
>
> Thanks for the reference. However, a quick look at their paper suggests
> they are using a finite volume discretization which should be symmetric and
> avoid all the shenanigans I'm going through!
>

No, they are not symmetric.  FV is even worse than vertex centered
methods.  The BCs and the C-F interfaces add non-symmetry.


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-26 Thread Barry Smith

> On Feb 26, 2016, at 12:46 PM, Mohammad Mirzadeh  wrote:
> 
> Mark,
> 
> On Fri, Feb 26, 2016 at 8:12 AM, Mark Adams  wrote:
> 
>4-4) Along the same lines, I tried a couple of other PCs such as {jacobi, 
> sor, gamg, ilu} and none of them were able to converge with bcgs as the KSP. 
> However, with gmres, almost all of them converge with the exception of gamg. 
> 
> Note, I'm not sure why you need the null space of A^T, you want the null 
> space of A.
> 
> 
> So the idea was to provide nullspace of A^T to make sure the true residual 
> also converges to zero by projecting the RHS onto the range of A. It however 
> looks like that GMRES (and sometimes BiCGSTAB) converge in the least-square 
> sense for which you only need the nullspace of A and not A^T.
> 
> And for singular systems like yours you need to use a pseudo inverse of the 
> coarse grid because it is singular -- if you represent the null space exactly.
> 
> GAMG is used for AMR problems like this a lot in BISICLES.
> 
> Thanks for the reference. However, a quick look at their paper suggests they 
> are using a finite volume discretization which should be symmetric and avoid 
> all the shenanigans I'm going through! I think it would actually be a good 
> idea for me to swap my solver with a conservative one and see if it makes 
> things better.
> 
> 
> You need to use an 'svd' coarse grid solver, or an appropriate iterative 
> solver. LU is the default.
> 
> 
> I see. How can I change the GAMG coarse grid solver? Is there an analogue of 
> "-pc_hypre_boomeramg_relax_type_coarse"?

  -mg_coarse_pc_type svd, or maybe -mg_coarse_redundant_pc_type svd, or maybe 
-mg_coarse_pc_type redundant -mg_coarse_redundant_pc_type svd; run with -help 
and grep for "coarse" to get the exact syntax.
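
(For example, a full set of options along these lines might look like the following; this is only a usage hint, and the exact prefix depends on whether the coarse solve is redundant in your build, as noted above:

    -ksp_type gmres -pc_type gamg -mg_coarse_pc_type svd -ksp_monitor_true_residual
)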


> 
> Mark
> 
> Thanks,
> Mohammad



Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-26 Thread Mohammad Mirzadeh
Mark,

On Fri, Feb 26, 2016 at 8:12 AM, Mark Adams  wrote:

>
>>4-4) Along the same lines, I tried a couple of other PCs such as
>> {jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
>> as the KSP. However, with gmres, almost all of them converge with the
>> exception of gamg.
>>
>
> Note, I'm not sure why you need the null space of A^T, you want the null
> space of A.
>
>
So the idea was to provide the nullspace of A^T to make sure the true residual
also converges to zero by projecting the RHS onto the range of A. It looks,
however, like GMRES (and sometimes BiCGSTAB) converges in the
least-squares sense, for which you only need the nullspace of A and not A^T.
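
(For concreteness, a minimal sketch of that projection in PETSc C, assuming an approximate null-space vector z of A^T is available and using a hypothetical helper name; KSP performs this removal automatically once MatSetTransposeNullSpace() is set, so this only spells out the idea:

    #include <petscmat.h>

    /* Hedged sketch, not a drop-in routine: make b consistent by removing
       its component outside the range of A, i.e. b <- b - (z,b) z.        */
    PetscErrorCode MakeRhsConsistent(Vec z, Vec b)
    {
      MatNullSpace   nsp;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = VecNormalize(z, NULL);CHKERRQ(ierr); /* null-space vectors must have unit norm */
      ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)b), PETSC_FALSE, 1, &z, &nsp);CHKERRQ(ierr);
      ierr = MatNullSpaceRemove(nsp, b);CHKERRQ(ierr);
      ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }
)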

> And for singular systems like yours you need to use a pseudo inverse of the
> coarse grid because it is singular -- if you represent the null space
> exactly.
>
> GAMG is used for AMR problems like this a lot in BISICLES.
>

Thanks for the reference. However, a quick look at their paper suggests
they are using a finite volume discretization which should be symmetric and
avoid all the shenanigans I'm going through! I think it would actually be a
good idea for me to swap my solver with a conservative one and see if it
makes things better.


> You need to use an 'svd' coarse grid solver, or an appropriate iterative
> solver. LU is the default.
>
>
I see. How can I change the GAMG coarse grid solver? Is there an analogue
of "-pc_hypre_boomeramg_relax_type_coarse"?

Mark
>

Thanks,
Mohammad


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-26 Thread Mark Adams
>
>
>4-4) Along the same lines, I tried a couple of other PCs such as
> {jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
> as the KSP. However, with gmres, almost all of them converge with the
> exception of gamg.
>

Note, I'm not sure why you need the null space of A^T, you want the null
space of A.

And for singular systems like yours you need to use a pseudo inverse of the
coarse grid because it is singular -- if you represent the null space
exactly.

GAMG is used for AMR problems like this a lot in BISICLES.

You need to use an 'svd' coarse grid solver, or an appropriate iterative
solver. LU is the default.

Mark


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-25 Thread Mohammad Mirzadeh
Thanks a lot Barry. This was very helpful :)

On Thu, Feb 25, 2016 at 6:05 PM, Barry Smith  wrote:

>
> > On Feb 25, 2016, at 4:18 PM, Mohammad Mirzadeh 
> wrote:
> >
> > Barry,
> >
> > Thanks a lot for the detailed discussion. Things make much more sense
> now, especially I was confused why the manual says to provide call both
> 'MatSetNullSpace' and 'MatSetTransposeNullSpace'. I have couple of more
> questions and some observations I have made since yesterday.
> >
> > 1) Is there a systematic way to tell KSP to stop when it stagnates? I am
> _not_ using SNES.
>
>   No, I added the issue
> https://bitbucket.org/petsc/petsc/issues/122/ksp-convergence-test-for-inconsistent
> writing the code for a new test is pretty simple, but you need to decide
> mathematically how you are going to detect "stagnation".
>
> >
> > 2) Once KSP converges to the least-square solution, the residual must be
> in the nullspace of A^T because otherwise it would have been reduced by the
> solver. So is it (at lest theoretically) possible to use the residual
> vector as an approximate basis for the n(A^T)? In general this wouldn't be
> enough but I'm thinking since the nullspace is one-dimensional, maybe I
> could use the residual itself to manually project solution onto range of A
> after calling KSPSolve?
>
>   I don't see why this wouldn't work. Just run one initial solve till
> stagnation and then use the resulting residual as the null space for A^T
> for future solves (with the same matrix, of course). It could be
> interesting to plot the residual to see what it looks like and it that
> makes sense physically
>
> >
> > 3) Are preconditoners applied to the left by default? If not which one
> are and which one are not?
>
>   It actually depends on the KSP, some algorithms only support right
> preconditioning, some implementations only support one or the other
> (depending on who wrote it) and some support both.  In PETSc CG, GMRES, and
> BiCGstab use left by default, both GMRES and BiCGstab  also support right.
>
> >
> > 4) So then if I provide the nullspace of A, KSP residual should
> converge, correct? I have made a few observations:
> >
> >4-1) When I provide the nullspace of A through MatSetNullSpace, I
> (generally) do see the residual (preconditioned) become very small (~1e-12
> or so) but then it sometimes stagnates (say around 1e-10). Is this normal?
>
>   There is only some much convergence one can expect for linear solvers;
> how far one can drive down the residual depends at least in part on the
> conditioning of the matrix. The higher the conditioning the less you can
> get in tolerance.
>
>
> >
> >4-2) Another observation is that the true residual stagnates way
> earlier which I assume is an indication that the RHS is in fact not in the
> range of A.
>
>You can hope this is the case.
>
>Of course one cannot really know if the residual is stagnating due to
> an inconsistent right hand side or for some other reason. But if it
> stagnates at 10^-4 it is probably due to inconsistent right hand side if it
> stagnates at 10^-12 the right hand side is probably consistent. If it
> stagnates at 10^-8 then it could be due to inconsistent rhs and or some
> other reason.
> >
> >4-3) Also, I've seen that the choice of KSP and PC have considerable
> effects on the solver. For instance, by default I use hypre+bcgs and I have
> noticed I need to change coarse-grid relaxation from Gaussian Elimination
> to symmetric-sor/Jacobi otherwise hypre has issues. I assume this is
> because the AMG preconditioner is also singular on the coarse level?
>
>Yes this is likely the reason.
> >
> >4-4) Along the same lines, I tried a couple of other PCs such as
> {jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
> as the KSP. However, with gmres, almost all of them converge
>
>Sure this is expected
>
> > with the exception of gamg.
> >
> >4-5) If I use gmres instead of bcgs, and for any of {jacobi, sor,
> ilu}, the true residual seems to be generally 2-3 orders of magnitude
> smaller (1e-6 vs 1e-3). I suppose this is just because gmres is more robust?
>
>   Yes, with a single system I would recommend sticking with GMRES and
> avoiding Bcgs.
> >
> >4-6) With hypre, the true residual is always larger (~1e-3) than
> other PCs no matter if I use bcgs or gmres.
>
>ok.
>
> >
> > Sorry for the long discussion but this has turned out to be quite
> educational for me!
> >
> > Thanks,
> > Mohammad
> >
> > On Wed, Feb 24, 2016 at 2:21 PM, Barry Smith  wrote:
> >
> > > On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh 
> wrote:
> > >
> > > Barry,
> > >
> > > On Wednesday, February 24, 2016, Barry Smith 
> wrote:
> > >
> > > > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh 
> wrote:
> > >
> > >
> > > At the PDE level the compatibility condition is satisfied. However I
> suspect 

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-25 Thread Barry Smith

> On Feb 25, 2016, at 4:18 PM, Mohammad Mirzadeh  wrote:
> 
> Barry,
> 
> Thanks a lot for the detailed discussion. Things make much more sense now, 
> especially I was confused why the manual says to provide call both 
> 'MatSetNullSpace' and 'MatSetTransposeNullSpace'. I have couple of more 
> questions and some observations I have made since yesterday.
> 
> 1) Is there a systematic way to tell KSP to stop when it stagnates? I am 
> _not_ using SNES.

  No, I added the issue 
https://bitbucket.org/petsc/petsc/issues/122/ksp-convergence-test-for-inconsistent
 writing the code for a new test is pretty simple, but you need to decide 
mathematically how you are going to detect "stagnation".

> 
> 2) Once KSP converges to the least-square solution, the residual must be in 
> the nullspace of A^T because otherwise it would have been reduced by the 
> solver. So is it (at lest theoretically) possible to use the residual vector 
> as an approximate basis for the n(A^T)? In general this wouldn't be enough 
> but I'm thinking since the nullspace is one-dimensional, maybe I could use 
> the residual itself to manually project solution onto range of A after 
> calling KSPSolve?

  I don't see why this wouldn't work. Just run one initial solve till 
stagnation and then use the resulting residual as the null space for A^T for 
future solves (with the same matrix, of course). It could be interesting to 
plot the residual to see what it looks like and whether it makes sense physically.
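
(A minimal sketch of that recipe in PETSc C, using a hypothetical helper name; it is only meant to illustrate the steps, not to be a tested routine:

    #include <petscksp.h>

    /* Hedged sketch: solve once to stagnation, take r = b - A x as an
       approximate null-space vector of A^T, and attach it for later solves
       with the same matrix.                                               */
    PetscErrorCode AttachTransposeNullSpaceFromResidual(KSP ksp, Mat A, Vec b, Vec x)
    {
      Vec            r;
      MatNullSpace   nsp;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr); /* initial solve, run until it stagnates */
      ierr = VecDuplicate(b, &r);CHKERRQ(ierr);
      ierr = MatMult(A, x, r);CHKERRQ(ierr);
      ierr = VecAYPX(r, -1.0, b);CHKERRQ(ierr); /* r = b - A x */
      ierr = VecNormalize(r, NULL);CHKERRQ(ierr);
      ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 1, &r, &nsp);CHKERRQ(ierr);
      ierr = MatSetTransposeNullSpace(A, nsp);CHKERRQ(ierr);
      ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
      ierr = VecDestroy(&r);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }
)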

> 
> 3) Are preconditoners applied to the left by default? If not which one are 
> and which one are not?

  It actually depends on the KSP: some algorithms only support right 
preconditioning, some implementations only support one or the other (depending 
on who wrote them), and some support both.  In PETSc, CG, GMRES, and BiCGstab use 
left preconditioning by default; GMRES and BiCGstab also support right.
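
(As a pointer, the side can be selected at run time with the standard option -ksp_pc_side right, or in code with KSPSetPCSide(ksp, PC_RIGHT). With right preconditioning the monitored residual norm is the true residual norm, which can make stagnation easier to interpret.)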

> 
> 4) So then if I provide the nullspace of A, KSP residual should converge, 
> correct? I have made a few observations:
> 
>4-1) When I provide the nullspace of A through MatSetNullSpace, I 
> (generally) do see the residual (preconditioned) become very small (~1e-12 or 
> so) but then it sometimes stagnates (say around 1e-10). Is this normal?

  There is only so much convergence one can expect from linear solvers; how 
far one can drive down the residual depends at least in part on the 
conditioning of the matrix. The higher the condition number, the less tightly 
you can drive down the residual.


> 
>4-2) Another observation is that the true residual stagnates way earlier 
> which I assume is an indication that the RHS is in fact not in the range of A.

   You can hope this is the case. 

   Of course one cannot really know if the residual is stagnating due to an 
inconsistent right hand side or for some other reason. But if it stagnates at 
10^-4 it is probably due to an inconsistent right hand side; if it stagnates at 
10^-12 the right hand side is probably consistent. If it stagnates at 10^-8 
then it could be due to an inconsistent rhs and/or some other reason.
> 
>4-3) Also, I've seen that the choice of KSP and PC have considerable 
> effects on the solver. For instance, by default I use hypre+bcgs and I have 
> noticed I need to change coarse-grid relaxation from Gaussian Elimination to 
> symmetric-sor/Jacobi otherwise hypre has issues. I assume this is because the 
> AMG preconditioner is also singular on the coarse level?

   Yes this is likely the reason.
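
(For reference, the change described above is typically made with an option along these lines -- the exact value strings depend on the hypre version:

    -pc_hypre_boomeramg_relax_type_coarse symmetric-SOR/Jacobi

instead of the default Gaussian-elimination on the coarse level.)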
> 
>4-4) Along the same lines, I tried a couple of other PCs such as {jacobi, 
> sor, gamg, ilu} and none of them were able to converge with bcgs as the KSP. 
> However, with gmres, almost all of them converge

   Sure this is expected

> with the exception of gamg. 
> 
>4-5) If I use gmres instead of bcgs, and for any of {jacobi, sor, ilu}, 
> the true residual seems to be generally 2-3 orders of magnitude smaller (1e-6 
> vs 1e-3). I suppose this is just because gmres is more robust?

  Yes, with a single system I would recommend sticking with GMRES and avoiding 
Bcgs.
> 
>4-6) With hypre, the true residual is always larger (~1e-3) than other PCs 
> no matter if I use bcgs or gmres. 

   ok.

> 
> Sorry for the long discussion but this has turned out to be quite educational 
> for me!
> 
> Thanks,
> Mohammad
> 
> On Wed, Feb 24, 2016 at 2:21 PM, Barry Smith  wrote:
> 
> > On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh  wrote:
> >
> > Barry,
> >
> > On Wednesday, February 24, 2016, Barry Smith  wrote:
> >
> > > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh  
> > > wrote:
> >
> >
> > At the PDE level the compatibility condition is satisfied. However I 
> > suspect that at the discrete level the rhs is not exactly in the range. 
> > The reason for my suspicion is that I know for a fact that my 
> > discretization is not conservative at the hanging nodes.
> 
>Could be.
> 
> >
> 

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-25 Thread Mohammad Mirzadeh
Barry,

Thanks a lot for the detailed discussion. Things make much more sense now;
in particular, I was confused about why the manual says to call both
'MatSetNullSpace' and 'MatSetTransposeNullSpace'. I have a couple more
questions and some observations I have made since yesterday.

1) Is there a systematic way to tell KSP to stop when it stagnates? I am
_not_ using SNES.

2) Once KSP converges to the least-squares solution, the residual must be in
the nullspace of A^T because otherwise it would have been reduced by the
solver. So is it (at least theoretically) possible to use the residual
vector as an approximate basis for n(A^T)? In general this wouldn't be
enough, but I'm thinking that since the nullspace is one-dimensional, maybe I
could use the residual itself to manually project the solution onto the range of A
after calling KSPSolve?

3) Are preconditioners applied to the left by default? If not, which ones are
and which ones are not?

4) So then if I provide the nullspace of A, KSP residual should converge,
correct? I have made a few observations:

   4-1) When I provide the nullspace of A through MatSetNullSpace, I
(generally) do see the residual (preconditioned) become very small (~1e-12
or so) but then it sometimes stagnates (say around 1e-10). Is this normal?

   4-2) Another observation is that the true residual stagnates way earlier
which I assume is an indication that the RHS is in fact not in the range of
A.

   4-3) Also, I've seen that the choice of KSP and PC have considerable
effects on the solver. For instance, by default I use hypre+bcgs and I have
noticed I need to change coarse-grid relaxation from Gaussian Elimination
to symmetric-sor/Jacobi otherwise hypre has issues. I assume this is
because the AMG preconditioner is also singular on the coarse level?

   4-4) Along the same lines, I tried a couple of other PCs such as
{jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
as the KSP. However, with gmres, almost all of them converge with the
exception of gamg.

   4-5) If I use gmres instead of bcgs, and for any of {jacobi, sor, ilu},
the true residual seems to be generally 2-3 orders of magnitude smaller
(1e-6 vs 1e-3). I suppose this is just because gmres is more robust?

   4-6) With hypre, the true residual is always larger (~1e-3) than other
PCs no matter if I use bcgs or gmres.

Sorry for the long discussion but this has turned out to be quite
educational for me!

Thanks,
Mohammad

On Wed, Feb 24, 2016 at 2:21 PM, Barry Smith  wrote:

>
> > On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh 
> wrote:
> >
> > Barry,
> >
> > On Wednesday, February 24, 2016, Barry Smith  wrote:
> >
> > > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh 
> wrote:
> >
> >
> > At the PDE level the compatibility condition is satisfied. However I
> suspect that at the discrete level the rhs is not exactly in the range.
> The reason for my suspicion is that I know for a fact that my
> discretization is not conservative at the hanging nodes.
>
>Could be.
>
> >
> > Thanks for the suggestion. I'll give it a try. However, does GMRES
> fundamentally behave differently than BiCGSTAB for these systems? I have
> seen people saying that it can keep the solution in the range if rhs is
> already in the range but I thought any KSP would do the same?
>
>
> Here is the deal.  There are two issues here
>
> 1)  Making sure that b is the in the range of A.  If b is not in the range
> of A then it is obviously impossible to find an x such that A x = b. If we
> divide b into a part in the range of A called br and the rest call it bp
> then b = br + bp   and one can solve Ax = br   and the left over residual
> is bp. Normally if you run GMRES with an inconsistent right hand side (that
> is bp != 0) it will solve Ax = br automatically and thus give you a "least
> squares" answer which is likely what you want. These means in some sense
> you don't really need to worry about making sure that b is in the range of
> A. But the residuals of GMRES will stagnate, which makes sense because it
> cannot get rid of the bp part. In the least squares sense however GMRES has
> converged. If you provide MatSetTransposeNullSpace() then KSP automatically
> removes this space from b when it starts the Krylov method, this is nice
> because the GMRES residuals will go to zero.
>
> 2) Making sure that huge multiples of the null space of A do not get into
> the computed solution.
>
> With left preconditioning Krylov methods construct the solution in the
> space {Bb, BABb, BABABb, ..} if the range of B contains entries in the null
> space of A then the Krylov space will contain vectors in the null space of
> A. What can happen is that in each iteration of the Krylov space those
> entries grow and you end up with a solution x + xn where xn is in the null
> space of A and very large, thus it looks like GMRES is not converging since
> the solution "jump" 

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-24 Thread Barry Smith

> On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh  wrote:
> 
> Barry,
> 
> On Wednesday, February 24, 2016, Barry Smith  wrote:
> 
> > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh  wrote:
> 
> 
> At the PDE level the compatibility condition is satisfied. However I suspect 
> that at the discrete level the rhs is not exactly in the range. The 
> reason for my suspicion is that I know for a fact that my discretization is 
> not conservative at the hanging nodes. 

   Could be. 

> 
> Thanks for the suggestion. I'll give it a try. However, does GMRES 
> fundamentally behave differently than BiCGSTAB for these systems? I have seen 
> people saying that it can keep the solution in the range if rhs is already in 
> the range but I thought any KSP would do the same?


Here is the deal.  There are two issues here 

1)  Making sure that b is in the range of A.  If b is not in the range of A 
then it is obviously impossible to find an x such that A x = b. If we divide b 
into a part in the range of A, call it br, and the rest, call it bp, then b = br + 
bp, and one can solve Ax = br with the leftover residual being bp. Normally if 
you run GMRES with an inconsistent right hand side (that is, bp != 0) it will 
solve Ax = br automatically and thus give you a "least squares" answer, which is 
likely what you want. This means in some sense you don't really need to worry 
about making sure that b is in the range of A. But the residuals of GMRES will 
stagnate, which makes sense because it cannot get rid of the bp part. In the 
least squares sense, however, GMRES has converged. If you provide 
MatSetTransposeNullSpace() then KSP automatically removes this space from b 
when it starts the Krylov method; this is nice because the GMRES residuals will 
go to zero.

2) Making sure that huge multiples of the null space of A do not get into the 
computed solution. 

With left preconditioning, Krylov methods construct the solution in the space 
{Bb, BABb, BABABb, ...}. If the range of B contains components in the null 
space of A, then the Krylov space will contain vectors in the null space of A. 
What can happen is that in each iteration those components grow, and you end 
up with a solution x + xn where xn is in the null space of A and very large; 
thus it looks like GMRES is not converging, since the solution "jumps" each 
iteration. If you force the range of B to exclude the null space of A, that 
is, project out the null space of A after applying B, then nothing in the null 
space ever gets into the Krylov space and you get the "minimum norm" solution 
to the least squares problem, which is what you almost always want. When you 
provide MatSetNullSpace() the KSP solvers automatically remove the null space 
after applying B.
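
A minimal sketch of that (mine, assuming the constant vector spans the null
space of A, as it does for a pure-Neumann Poisson operator, and that Mat A and
PetscErrorCode ierr already exist):

  /* Attach null(A) so KSP projects it out after every application of B; */
  /* the computed solution is then the minimum-norm least squares one.   */
  MatNullSpace nsp;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A, nsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);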

 All this stuff applies to any Krylov method.

  So based on my understanding, you should just provide the null space that you 
can, forget about the transpose null space, and use left preconditioning with 
GMRES (forget what I said about trying right preconditioning).  Let GMRES 
iterate until the residual norm has stabilized and use the resulting solution, 
which is the least squares solution. If you are using KSP inside SNES you may 
need to set SNESSetMaxLinearSolveFailures() to a large value so it doesn't 
stop when it thinks the GMRES has failed.
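
A small sketch of that recipe (my own, with hypothetical tolerance and limit
values; it assumes a KSP ksp, and optionally a SNES snes, have already been
created and given the matrix with its constant null space attached):

  /* GMRES with left preconditioning; generous iteration limit so the    */
  /* residual can stabilize at ||bp|| instead of triggering a "failure". */
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPSetPCSide(ksp, PC_LEFT);CHKERRQ(ierr);
  ierr = KSPSetTolerances(ksp, 1e-10, PETSC_DEFAULT, PETSC_DEFAULT, 500);CHKERRQ(ierr);
  /* Inside SNES, keep Newton going even if KSP reports non-convergence: */
  ierr = SNESSetMaxLinearSolveFailures(snes, 100);CHKERRQ(ierr);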

  Barry

Matt and Jed, please check my logic; I often flip my ranges/null spaces and may 
have presented this incorrectly above.
> 
> 
> > >
> > > 2) I have tried fixing the solution at an arbitrary point, and while it 
> > > generally works, for some problems I get numerical artifacts, e.g. slight 
> > > asymmetry in the solution and/or increased error close to the point where 
> > > I fix the solution. Is this, more or less, expected as a known artifact?
> 
>   Yeah this approach is not good. We never recommend it.
> > >
> > > 3) An alternative to 2 is to enforce some global constraint on the 
> > > solution, e.g. to require that the average be zero. My question here is 
> > > two-fold:
> >
> >   Requiring the average be zero is exactly the same as providing a null 
> > space of the constant function. Saying the average is zero is the same as 
> > saying the solution is orthogonal to the constant function. I don't see any 
> > reason to introduce the Lagrange multiplier and all its complications 
> > instead of just providing the constant null space.
> >
> > Is this also true at the discrete level when the matrix is non-symmetric? I 
> > have always viewed this as just a constraint that could really be anything.
> >
> > >
> > > 3-1) Is this generally any better than solution 2, in terms of not 
> > > messing too much with the condition number of the matrix?
> > >
> > > 3-2) I don't quite know how to implement this using PETSc. Generally 
> > > speaking I'd like to solve
> > >
> > > | A    U |   | X |   | B |
> > > |        | * |   | = |   |
> > > | U^T  0 |   | s |   | 0 |

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-23 Thread Mohammad Mirzadeh
Barry,
On Wednesday, February 24, 2016, Barry Smith  wrote:

>
> > On Feb 23, 2016, at 11:35 PM, Mohammad Mirzadeh  wrote:
> >
> > Dear all,
> >
> > I am dealing with a situation I was hoping to get some suggestions here.
> Suppose after discretizing a poisson equation with purely neumann (or
> periodic) bc I end up with a matrix that is *almost* symmetric, i.e. it is
> symmetric for almost all grid points with the exception of a few points.
>
>   How come it is not purely symmetric? The usual finite elements with pure
> Neumann or periodic bc will give a completely symmetric matrix.
>
>   Barry
>
>
So this is a finite difference discretization on adaptive Cartesian grids.
It turns out that the discretization is non-symmetric at the coarse-fine
interface. It's actually not because of the BC itself.

> >
> > The correct way of handling this problem is by specifying the nullspace
> to MatSetNullSpace. However, since the matrix is non-symmetric in general I
> would need to pass the nullspace of A^T. Now it turns out that if A is
> *sufficiently close to being symmetric*, I can get away with the constant
> vector, which is the nullspace of A and not A^T, but obviously this does
> not always work. Sometimes the KSP converges and in other situations the
> residual stagnates which is to be expected.
> >
> > Now, here are my questions (sorry if they are too many!):
> >
> > 1) Is there any efficient way of calculating nullspace of A^T in this
> case? Is SVD the only way?
> >
> > 2) I have tried fixing the solution at an arbitrary point, and while it
> generally works, for some problems I get numerical artifacts, e.g. slight
> asymmetry in the solution and/or increased error close to the point where I
> fix the solution. Is this, more or less, expected as a known artifact?
> >
> > 3) An alternative to 2 is to enforce some global constraint on the
> solution, e.g. to require that the average be zero. My question here is
> two-fold:
>
>   Requiring the average be zero is exactly the same as providing a null
> space of the constant function. Saying the average is zero is the same as
> saying the solution is orthogonal to the constant function. I don't see any
> reason to introduce the Lagrange multiplier and all its complications
> instead of just providing the constant null space.


Is this also true at the discrete level when the matrix is non-symmetric? I
have always viewed this as just a constraint that could really be anything.


>
> >
> > 3-1) Is this generally any better than solution 2, in terms of not
> messing too much with the condition number of the matrix?
> >
> > 3-2) I don't quite know how to implement this using PETSc. Generally
> speaking I'd like to solve
> >
> > | A    U |   | X |   | B |
> > |        | * |   | = |   |
> > | U^T  0 |   | s |   | 0 |
> >
> >
> > where U is a constant vector (of ones) and s is effectively a Lagrange
> multiplier. I suspect I need to use MatCreateSchurComplement and pass that
> to the KSP? Or do I need to create my own matrix type from scratch through
> MatCreateShell?
> >
> > Any help is appreciated!
> >
> > Thanks,
> > Mohammad
> >
> >
>
>

-- 
Sent from Gmail Mobile


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-23 Thread Barry Smith

> On Feb 23, 2016, at 11:35 PM, Mohammad Mirzadeh  wrote:
> 
> Dear all,
> 
> I am dealing with a situation I was hoping to get some suggestions here. 
> Suppose after discretizing a poisson equation with purely neumann (or 
> periodic) bc I end up with a matrix that is *almost* symmetric, i.e. it is 
> symmetric for almost all grid points with the exception of a few points.

  How come it is not purely symmetric? The usual finite elements with pure 
Neumann or periodic bc will give a completely symmetric matrix.

  Barry

> 
> The correct way of handling this problem is by specifying the nullspace to 
> MatSetNullSpace. However, since the matrix is non-symmetric in general I 
> would need to pass the nullspace of A^T. Now it turns out that if A is 
> *sufficiently close to being symmetric*, I can get away with the constant 
> vector, which is the nullspace of A and not A^T, but obviously this does not 
> always work. Sometimes the KSP converges and in other situations the residual 
> stagnates which is to be expected.
> 
> Now, here are my questions (sorry if they are too many!):
>  
> 1) Is there any efficient way of calculating nullspace of A^T in this case? 
> Is SVD the only way?
> 
> 2) I have tried fixing the solution at an arbitrary point, and while it 
> generally works, for some problems I get numerical artifacts, e.g. slight 
> asymmetry in the solution and/or increased error close to the point where I 
> fix the solution. Is this, more or less, expected as a known artifact?
> 
> 3) An alternative to 2 is to enforce some global constraint on the solution, 
> e.g. to require that the average be zero. My question here is two-fold:

  Requiring the average be zero is exactly the same as providing a null space 
of the constant function. Saying the average is zero is the same as saying the 
solution is orthogonal to the constant function. I don't see any reason to 
introduce the Lagrange multiplier and all its complications instead of just 
providing the constant null space.
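
To make the equivalence concrete, here is a small sketch (my illustration, not
from this thread): for a constant null space, MatNullSpaceRemove() subtracts
the mean, i.e. it enforces a zero average directly, with no bordered system or
Lagrange multiplier. It assumes some assembled Vec x and PetscErrorCode ierr:

  /* For the constant null space, MatNullSpaceRemove(nsp, x) performs   */
  /* x <- x - (sum(x)/N) * ones, i.e. it zeros out the average of x.    */
  MatNullSpace nsp;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);CHKERRQ(ierr);
  ierr = MatNullSpaceRemove(nsp, x);CHKERRQ(ierr);  /* x now has zero mean */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);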

> 
> 3-1) Is this generally any better than solution 2, in terms of not messing 
> too much with the condition number of the matrix?
> 
> 3-2) I don't quite know how to implement this using PETSc. Generally speaking 
> I'd like to solve
> 
> > | A    U |   | X |   | B |
> > |        | * |   | = |   |
> > | U^T  0 |   | s |   | 0 |
> 
> 
> where U is a constant vector (of ones) and s is effectively a Lagrange 
> multiplier. I suspect I need to use MatCreateSchurComplement and pass that to 
> the KSP? Or do I need to create my own matrix type from scratch through 
> MatCreateShell?
> 
> Any help is appreciated!
> 
> Thanks,
> Mohammad
> 
> 



[petsc-users] Neumann BC with non-symmetric matrix

2016-02-23 Thread Mohammad Mirzadeh
Dear all,

I am dealing with a situation for which I was hoping to get some suggestions
here. Suppose that after discretizing a Poisson equation with purely Neumann
(or periodic) BCs I end up with a matrix that is *almost* symmetric, i.e. it is
symmetric at almost all grid points, with the exception of a few points.

The correct way of handling this problem is by specifying the nullspace to
MatSetNullSpace. However, since the matrix is non-symmetric in general I
would need to pass the nullspace of A^T. Now it turns out that if A is
*sufficiently close to being symmetric*, I can get away with the constant
vector, which is the nullspace of A and not A^T, but obviously this does
not always work. Sometimes the KSP converges, and in other situations the
residual stagnates, which is to be expected.

Now, here are my questions (sorry if they are too many!):

1) Is there any efficient way of calculating nullspace of A^T in this case?
Is SVD the only way?

2) I have tried fixing the solution at an arbitrary point, and while it
generally works, for some problems I get numerical artifacts, e.g. slight
asymmetry in the solution and/or increased error close to the point where I
fix the solution. Is this, more or less, expected as a known artifact?

3) An alternative to 2 is to enforce some global constraint on the
solution, e.g. to require that the average be zero. My question here is
two-fold:

3-1) Is this generally any better than solution 2, in terms of not messing
too much with the condition number of the matrix?

3-2) I don't quite know how to implement this using PETSc. Generally
speaking I'd like to solve

| A    U |   | X |   | B |
|        | * |   | = |   |
| U^T  0 |   | s |   | 0 |


where U is a constant vector (of ones) and s is effectively a Lagrange
multiplier. I suspect I need to use MatCreateSchurComplement and pass that
to the KSP? Or do I need to create my own matrix type from scratch through
MatCreateShell?

Any help is appreciated!

Thanks,
Mohammad