Re: [petsc-users] Increasing norm with finer mesh

2018-10-22 Thread Weizhuo Wang
Amazing, right preconditioning fixes the problem. Thanks a lot!


On Tue, Oct 16, 2018 at 8:31 PM Dave May  wrote:

>
>
> On Wed, 17 Oct 2018 at 03:15, Weizhuo Wang  wrote:
>
>> I just tried both, neither of them make a difference. I got exactly the
>> same curve with either combination.
>>
>
> Try using right preconditioning.
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetPCSide.html
>
>
> Use the options:
>
> -ksp_type gmres -ksp_pc_side right -ksp_rtol 1e-12
>
> Or:
>
> -ksp_type fgmres  -ksp_rtol 1e-12
>
> Fgmres does right preconditioning by default
>
>
>
>> Thanks!
>>
>> Wang weizhuo
>>
>> On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley 
>> wrote:
>>
>>> On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang 
>>> wrote:
>>>
 Hello again!

 After some tweaking the code is giving right answers now. However it
 start to disagree with MATLAB results ('traditional' way using matrix
 inverse) when the grid is larger than 100*100. My PhD advisor and I
 suspects that the default dimension of the Krylov subspace is 100 in the
 test case we are running. If so, is there a way to increase the size of the
 subspace?

>>>
>>> 1) The default subspace size is 30, not 100. You can increase the
>>> subspace size using
>>>
>>>-ksp_gmres_restart n
>>>
>>> 2) The problem is likely your tolerance. The default solver tolerance is
>>> 1e-5. You can change it using
>>>
>>>-ksp_rtol 1e-9
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>

 [image: Disagrees.png]

 Thanks!

 Wang Weizhuo

 On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  wrote:

> To reiterate what Matt is saying, you seem to have the exact solution
> on a 10x10 grid. That makes no sense unless the solution can be 
> represented
> exactly by your FE space (eg, u(x,y) = x + y).
>
> On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley 
> wrote:
>
>> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang 
>> wrote:
>>
>>> The code is attached in case anyone wants to take a look, I will try
>>> the high frequency scenario later.
>>>
>>
>> That is not the error. It is superconvergence at the vertices. The
>> real solution is trigonometric, so your
>> linear interpolants or whatever you use is not going to get the right
>> value in between mesh points. You
>> need to do a real integral over the whole interval to get the L_2
>> error.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>>>


 On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
 wrote:

> The first plot is the norm with the flag -pc_type lu with respect
> to number of grids in one axis (n), and the second plot is the norm 
> without
> the flag -pc_type lu.
>

 So you are using the default PC w/o LU. The default is ILU. This
 will reduce high frequency effectively but is not effective on the low
 frequency error. Don't expect your algebraic error reduction to be at 
 the
 same scale as the residual reduction (what KSP measures).


>
>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which 
>> their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

 --
 Wang Weizhuo

>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> 
>>>
>>
>>
>> --
>> Wang Weizhuo
>>
>

-- 
Wang Weizhuo


Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Fande Kong
Use -ksp_view to confirm the options are actually set.

Fande 

Sent from my iPhone
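
The same check can also be made from code; a minimal sketch, assuming a KSP
object ksp and a PetscErrorCode ierr already exist:

  ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);  /* report the solver type, tolerances, and PC actually in effect */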

> On Oct 16, 2018, at 7:40 PM, Ellen M. Price  
> wrote:
> 
> Maybe a stupid suggestion, but sometimes I forget to call the
> *SetFromOptions function on my object, and then get confused when
> changing the options has no effect. Just a thought from a fellow grad
> student.
> 
> Ellen
> 
> 
>> On 10/16/2018 09:36 PM, Matthew Knepley wrote:
>> On Tue, Oct 16, 2018 at 9:14 PM Weizhuo Wang > > wrote:
>> 
>>I just tried both, neither of them make a difference. I got exactly
>>the same curve with either combination.
>> 
>> 
>> I have a hard time believing you. If you make the residual tolerance
>> much finer, your error will definitely change.
>> I run tests every day that do exactly this. You can run them too, since
>> they are just examples.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>> 
>>Thanks!
>> 
>>Wang weizhuo
>> 
>>On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley >> wrote:
>> 
>>On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang
>>mailto:weizh...@illinois.edu>> wrote:
>> 
>>Hello again!
>> 
>>After some tweaking the code is giving right answers now.
>>However it start to disagree with MATLAB results
>>('traditional' way using matrix inverse) when the grid is
>>larger than 100*100. My PhD advisor and I suspects that the
>>default dimension of the Krylov subspace is 100 in the test
>>case we are running. If so, is there a way to increase the
>>size of the subspace?
>> 
>> 
>>1) The default subspace size is 30, not 100. You can increase
>>the subspace size using
>> 
>>   -ksp_gmres_restart n
>> 
>>2) The problem is likely your tolerance. The default solver
>>tolerance is 1e-5. You can change it using
>> 
>>   -ksp_rtol 1e-9
>> 
>>  Thanks,
>> 
>> Matt
>> 
>> 
>> 
>>Disagrees.png
>> 
>>Thanks!
>> 
>>Wang Weizhuo
>> 
>>On Tue, Oct 9, 2018 at 2:50 AM Mark Adams >> wrote:
>> 
>>To reiterate what Matt is saying, you seem to have the
>>exact solution on a 10x10 grid. That makes no sense
>>unless the solution can be represented exactly by your
>>FE space (eg, u(x,y) = x + y).
>> 
>>On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley
>>mailto:knep...@gmail.com>> wrote:
>> 
>>On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang
>>>> wrote:
>> 
>>The code is attached in case anyone wants to
>>take a look, I will try the high frequency
>>scenario later.
>> 
>> 
>>That is not the error. It is superconvergence at the
>>vertices. The real solution is trigonometric, so your
>>linear interpolants or whatever you use is not going
>>to get the right value in between mesh points. You
>>need to do a real integral over the whole interval
>>to get the L_2 error.
>> 
>>  Thanks,
>> 
>> Matt
>> 
>> 
>>On Mon, Oct 8, 2018 at 7:58 PM Mark Adams
>>mailto:mfad...@lbl.gov>> wrote:
>> 
>> 
>> 
>>On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang
>>>> wrote:
>> 
>>The first plot is the norm with the flag
>>-pc_type lu with respect to number of
>>grids in one axis (n), and the second
>>plot is the norm without the flag
>>-pc_type lu. 
>> 
>> 
>>So you are using the default PC w/o LU. The
>>default is ILU. This will reduce high
>>frequency effectively but is not effective
>>on the low frequency error. Don't expect
>>your algebraic error reduction to be at the
>>same scale as the residual reduction (what
>>KSP measures). 
>> 
>> 
>> 
>> 
>>-- 
>>Wang Weizhuo
>> 
>> 
>> 
>>-- 
>>What most experimenters take for granted before they
>>begin their experiments is infinitely more
>>interesting than any results to which their
>>  

Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Ellen M. Price
Maybe a stupid suggestion, but sometimes I forget to call the
*SetFromOptions function on my object, and then get confused when
changing the options has no effect. Just a thought from a fellow grad
student.

Ellen
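
For a KSP solve, the call in question looks like the minimal sketch below
(assuming ksp, b, and x already exist, plus a PetscErrorCode ierr); without
it, the -ksp_* command-line options are silently ignored:

  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* pick up -ksp_rtol, -ksp_type, -ksp_pc_side, ... */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);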


On 10/16/2018 09:36 PM, Matthew Knepley wrote:
> On Tue, Oct 16, 2018 at 9:14 PM Weizhuo Wang  > wrote:
> 
> I just tried both, neither of them make a difference. I got exactly
> the same curve with either combination.
> 
> 
> I have a hard time believing you. If you make the residual tolerance
> much finer, your error will definitely change.
> I run tests every day that do exactly this. You can run them too, since
> they are just examples.
> 
>   Thanks,
> 
>      Matt
>  
> 
> Thanks!
> 
> Wang weizhuo
> 
> On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley  > wrote:
> 
> On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang
> mailto:weizh...@illinois.edu>> wrote:
> 
> Hello again!
> 
> After some tweaking the code is giving right answers now.
> However it start to disagree with MATLAB results
> ('traditional' way using matrix inverse) when the grid is
> larger than 100*100. My PhD advisor and I suspects that the
> default dimension of the Krylov subspace is 100 in the test
> case we are running. If so, is there a way to increase the
> size of the subspace?
> 
> 
> 1) The default subspace size is 30, not 100. You can increase
> the subspace size using
> 
>        -ksp_gmres_restart n
> 
> 2) The problem is likely your tolerance. The default solver
> tolerance is 1e-5. You can change it using
> 
>        -ksp_rtol 1e-9
> 
>   Thanks,
> 
>      Matt
>  
> 
> 
> Disagrees.png
> 
> Thanks!
> 
> Wang Weizhuo
> 
> On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  > wrote:
> 
> To reiterate what Matt is saying, you seem to have the
> exact solution on a 10x10 grid. That makes no sense
> unless the solution can be represented exactly by your
> FE space (eg, u(x,y) = x + y).
> 
> On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley
> mailto:knep...@gmail.com>> wrote:
> 
> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang
>  > wrote:
> 
> The code is attached in case anyone wants to
> take a look, I will try the high frequency
> scenario later.
> 
> 
> That is not the error. It is superconvergence at the
> vertices. The real solution is trigonometric, so your
> linear interpolants or whatever you use is not going
> to get the right value in between mesh points. You
> need to do a real integral over the whole interval
> to get the L_2 error.
> 
>   Thanks,
> 
>      Matt
>  
> 
> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams
> mailto:mfad...@lbl.gov>> wrote:
> 
> 
> 
> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang
>  > wrote:
> 
> The first plot is the norm with the flag
> -pc_type lu with respect to number of
> grids in one axis (n), and the second
> plot is the norm without the flag
> -pc_type lu. 
> 
> 
> So you are using the default PC w/o LU. The
> default is ILU. This will reduce high
> frequency effectively but is not effective
> on the low frequency error. Don't expect
> your algebraic error reduction to be at the
> same scale as the residual reduction (what
> KSP measures). 
>  
> 
> 
> 
> -- 
> Wang Weizhuo
> 
> 
> 
> -- 
> What most experimenters take for granted before they
> begin their experiments is infinitely more
> interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
> 
> 

Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Matthew Knepley
On Tue, Oct 16, 2018 at 9:14 PM Weizhuo Wang  wrote:

> I just tried both, neither of them make a difference. I got exactly the
> same curve with either combination.
>

I have a hard time believing you. If you make the residual tolerance much
finer, your error will definitely change.
I run tests every day that do exactly this. You can run them too, since
they are just examples.

  Thanks,

 Matt


> Thanks!
>
> Wang weizhuo
>
> On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley  wrote:
>
>> On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang 
>> wrote:
>>
>>> Hello again!
>>>
>>> After some tweaking the code is giving right answers now. However it
>>> start to disagree with MATLAB results ('traditional' way using matrix
>>> inverse) when the grid is larger than 100*100. My PhD advisor and I
>>> suspects that the default dimension of the Krylov subspace is 100 in the
>>> test case we are running. If so, is there a way to increase the size of the
>>> subspace?
>>>
>>
>> 1) The default subspace size is 30, not 100. You can increase the
>> subspace size using
>>
>>-ksp_gmres_restart n
>>
>> 2) The problem is likely your tolerance. The default solver tolerance is
>> 1e-5. You can change it using
>>
>>-ksp_rtol 1e-9
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>>
>>> [image: Disagrees.png]
>>>
>>> Thanks!
>>>
>>> Wang Weizhuo
>>>
>>> On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  wrote:
>>>
 To reiterate what Matt is saying, you seem to have the exact solution
 on a 10x10 grid. That makes no sense unless the solution can be represented
 exactly by your FE space (eg, u(x,y) = x + y).

 On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley 
 wrote:

> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang 
> wrote:
>
>> The code is attached in case anyone wants to take a look, I will try
>> the high frequency scenario later.
>>
>
> That is not the error. It is superconvergence at the vertices. The
> real solution is trigonometric, so your
> linear interpolants or whatever you use is not going to get the right
> value in between mesh points. You
> need to do a real integral over the whole interval to get the L_2
> error.
>
>   Thanks,
>
>  Matt
>
>
>> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>>
>>>
>>>
>>> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
>>> wrote:
>>>
 The first plot is the norm with the flag -pc_type lu with respect
 to number of grids in one axis (n), and the second plot is the norm 
 without
 the flag -pc_type lu.

>>>
>>> So you are using the default PC w/o LU. The default is ILU. This
>>> will reduce high frequency effectively but is not effective on the low
>>> frequency error. Don't expect your algebraic error reduction to be at 
>>> the
>>> same scale as the residual reduction (what KSP measures).
>>>
>>>

>>
>> --
>> Wang Weizhuo
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>

>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>
>
> --
> Wang Weizhuo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Dave May
On Wed, 17 Oct 2018 at 03:15, Weizhuo Wang  wrote:

> I just tried both, neither of them make a difference. I got exactly the
> same curve with either combination.
>

Try using right preconditioning.

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetPCSide.html


Use the options:

-ksp_type gmres -ksp_pc_side right -ksp_rtol 1e-12

Or:

-ksp_type fgmres  -ksp_rtol 1e-12

Fgmres does right preconditioning by default
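
A minimal sketch of the same setup made in code rather than on the command
line, assuming a KSP object ksp and a PetscErrorCode ierr already exist:

  ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);
  ierr = KSPSetPCSide(ksp,PC_RIGHT);CHKERRQ(ierr);          /* right preconditioning      */
  ierr = KSPSetTolerances(ksp,1e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);              /* still honor -ksp_* options */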



> Thanks!
>
> Wang weizhuo
>
> On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley  wrote:
>
>> On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang 
>> wrote:
>>
>>> Hello again!
>>>
>>> After some tweaking the code is giving right answers now. However it
>>> start to disagree with MATLAB results ('traditional' way using matrix
>>> inverse) when the grid is larger than 100*100. My PhD advisor and I
>>> suspects that the default dimension of the Krylov subspace is 100 in the
>>> test case we are running. If so, is there a way to increase the size of the
>>> subspace?
>>>
>>
>> 1) The default subspace size is 30, not 100. You can increase the
>> subspace size using
>>
>>-ksp_gmres_restart n
>>
>> 2) The problem is likely your tolerance. The default solver tolerance is
>> 1e-5. You can change it using
>>
>>-ksp_rtol 1e-9
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>>
>>> [image: Disagrees.png]
>>>
>>> Thanks!
>>>
>>> Wang Weizhuo
>>>
>>> On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  wrote:
>>>
 To reiterate what Matt is saying, you seem to have the exact solution
 on a 10x10 grid. That makes no sense unless the solution can be represented
 exactly by your FE space (eg, u(x,y) = x + y).

 On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley 
 wrote:

> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang 
> wrote:
>
>> The code is attached in case anyone wants to take a look, I will try
>> the high frequency scenario later.
>>
>
> That is not the error. It is superconvergence at the vertices. The
> real solution is trigonometric, so your
> linear interpolants or whatever you use is not going to get the right
> value in between mesh points. You
> need to do a real integral over the whole interval to get the L_2
> error.
>
>   Thanks,
>
>  Matt
>
>
>> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>>
>>>
>>>
>>> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
>>> wrote:
>>>
 The first plot is the norm with the flag -pc_type lu with respect
 to number of grids in one axis (n), and the second plot is the norm 
 without
 the flag -pc_type lu.

>>>
>>> So you are using the default PC w/o LU. The default is ILU. This
>>> will reduce high frequency effectively but is not effective on the low
>>> frequency error. Don't expect your algebraic error reduction to be at 
>>> the
>>> same scale as the residual reduction (what KSP measures).
>>>
>>>

>>
>> --
>> Wang Weizhuo
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>

>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>
>
> --
> Wang Weizhuo
>


Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Weizhuo Wang
I just tried both; neither of them makes a difference. I got exactly the
same curve with either combination.

Thanks!

Wang weizhuo

On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley  wrote:

> On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang 
> wrote:
>
>> Hello again!
>>
>> After some tweaking the code is giving right answers now. However it
>> start to disagree with MATLAB results ('traditional' way using matrix
>> inverse) when the grid is larger than 100*100. My PhD advisor and I
>> suspects that the default dimension of the Krylov subspace is 100 in the
>> test case we are running. If so, is there a way to increase the size of the
>> subspace?
>>
>
> 1) The default subspace size is 30, not 100. You can increase the subspace
> size using
>
>-ksp_gmres_restart n
>
> 2) The problem is likely your tolerance. The default solver tolerance is
> 1e-5. You can change it using
>
>-ksp_rtol 1e-9
>
>   Thanks,
>
>  Matt
>
>
>>
>> [image: Disagrees.png]
>>
>> Thanks!
>>
>> Wang Weizhuo
>>
>> On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  wrote:
>>
>>> To reiterate what Matt is saying, you seem to have the exact solution on
>>> a 10x10 grid. That makes no sense unless the solution can be represented
>>> exactly by your FE space (eg, u(x,y) = x + y).
>>>
>>> On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley 
>>> wrote:
>>>
 On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang 
 wrote:

> The code is attached in case anyone wants to take a look, I will try
> the high frequency scenario later.
>

 That is not the error. It is superconvergence at the vertices. The real
 solution is trigonometric, so your
 linear interpolants or whatever you use is not going to get the right
 value in between mesh points. You
 need to do a real integral over the whole interval to get the L_2 error.

   Thanks,

  Matt


> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>
>>
>>
>> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
>> wrote:
>>
>>> The first plot is the norm with the flag -pc_type lu with respect to
>>> number of grids in one axis (n), and the second plot is the norm without
>>> the flag -pc_type lu.
>>>
>>
>> So you are using the default PC w/o LU. The default is ILU. This will
>> reduce high frequency effectively but is not effective on the low 
>> frequency
>> error. Don't expect your algebraic error reduction to be at the same 
>> scale
>> as the residual reduction (what KSP measures).
>>
>>
>>>
>
> --
> Wang Weizhuo
>


 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener

 https://www.cse.buffalo.edu/~knepley/
 

>>>
>>
>> --
>> Wang Weizhuo
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


-- 
Wang Weizhuo


Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Matthew Knepley
On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang  wrote:

> Hello again!
>
> After some tweaking the code is giving right answers now. However it start
> to disagree with MATLAB results ('traditional' way using matrix inverse)
> when the grid is larger than 100*100. My PhD advisor and I suspects that
> the default dimension of the Krylov subspace is 100 in the test case we are
> running. If so, is there a way to increase the size of the subspace?
>

1) The default subspace size is 30, not 100. You can increase the subspace
size using

   -ksp_gmres_restart n

2) The problem is likely your tolerance. The default solver tolerance is
1e-5. You can change it using

   -ksp_rtol 1e-9

  Thanks,

 Matt
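
A minimal sketch of the same two settings made in code, assuming an existing
KSP object ksp and a PetscErrorCode ierr (the restart value 200 is only an
illustration):

  ierr = KSPGMRESSetRestart(ksp,200);CHKERRQ(ierr);   /* enlarge the Krylov subspace before restart */
  ierr = KSPSetTolerances(ksp,1e-9,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);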


>
> [image: Disagrees.png]
>
> Thanks!
>
> Wang Weizhuo
>
> On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  wrote:
>
>> To reiterate what Matt is saying, you seem to have the exact solution on
>> a 10x10 grid. That makes no sense unless the solution can be represented
>> exactly by your FE space (eg, u(x,y) = x + y).
>>
>> On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley  wrote:
>>
>>> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang 
>>> wrote:
>>>
 The code is attached in case anyone wants to take a look, I will try
 the high frequency scenario later.

>>>
>>> That is not the error. It is superconvergence at the vertices. The real
>>> solution is trigonometric, so your
>>> linear interpolants or whatever you use is not going to get the right
>>> value in between mesh points. You
>>> need to do a real integral over the whole interval to get the L_2 error.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
 On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:

>
>
> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
> wrote:
>
>> The first plot is the norm with the flag -pc_type lu with respect to
>> number of grids in one axis (n), and the second plot is the norm without
>> the flag -pc_type lu.
>>
>
> So you are using the default PC w/o LU. The default is ILU. This will
> reduce high frequency effectively but is not effective on the low 
> frequency
> error. Don't expect your algebraic error reduction to be at the same scale
> as the residual reduction (what KSP measures).
>
>
>>

 --
 Wang Weizhuo

>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> 
>>>
>>
>
> --
> Wang Weizhuo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Increasing norm with finer mesh

2018-10-16 Thread Weizhuo Wang
Hello again!

After some tweaking, the code is giving the right answers now. However, it
starts to disagree with the MATLAB results (the 'traditional' way, using a
matrix inverse) when the grid is larger than 100*100. My PhD advisor and I
suspect that the default dimension of the Krylov subspace is 100 in the test
case we are running. If so, is there a way to increase the size of the
subspace?

[image: Disagrees.png]

Thanks!

Wang Weizhuo

On Tue, Oct 9, 2018 at 2:50 AM Mark Adams  wrote:

> To reiterate what Matt is saying, you seem to have the exact solution on a
> 10x10 grid. That makes no sense unless the solution can be represented
> exactly by your FE space (eg, u(x,y) = x + y).
>
> On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley  wrote:
>
>> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang 
>> wrote:
>>
>>> The code is attached in case anyone wants to take a look, I will try the
>>> high frequency scenario later.
>>>
>>
>> That is not the error. It is superconvergence at the vertices. The real
>> solution is trigonometric, so your
>> linear interpolants or whatever you use is not going to get the right
>> value in between mesh points. You
>> need to do a real integral over the whole interval to get the L_2 error.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>>>


 On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
 wrote:

> The first plot is the norm with the flag -pc_type lu with respect to
> number of grids in one axis (n), and the second plot is the norm without
> the flag -pc_type lu.
>

 So you are using the default PC w/o LU. The default is ILU. This will
 reduce high frequency effectively but is not effective on the low frequency
 error. Don't expect your algebraic error reduction to be at the same scale
 as the residual reduction (what KSP measures).


>
>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

-- 
Wang Weizhuo


Re: [petsc-users] Increasing norm with finer mesh

2018-10-09 Thread Mark Adams
To reiterate what Matt is saying, you seem to have the exact solution on a
10x10 grid. That makes no sense unless the solution can be represented
exactly by your FE space (eg, u(x,y) = x + y).

On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley  wrote:

> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang  wrote:
>
>> The code is attached in case anyone wants to take a look, I will try the
>> high frequency scenario later.
>>
>
> That is not the error. It is superconvergence at the vertices. The real
> solution is trigonometric, so your
> linear interpolants or whatever you use is not going to get the right
> value in between mesh points. You
> need to do a real integral over the whole interval to get the L_2 error.
>
>   Thanks,
>
>  Matt
>
>
>> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>>
>>>
>>>
>>> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
>>> wrote:
>>>
 The first plot is the norm with the flag -pc_type lu with respect to
 number of grids in one axis (n), and the second plot is the norm without
 the flag -pc_type lu.

>>>
>>> So you are using the default PC w/o LU. The default is ILU. This will
>>> reduce high frequency effectively but is not effective on the low frequency
>>> error. Don't expect your algebraic error reduction to be at the same scale
>>> as the residual reduction (what KSP measures).
>>>
>>>

>>
>> --
>> Wang Weizhuo
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-users] Increasing norm with finer mesh

2018-10-08 Thread Matthew Knepley
On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang  wrote:

> The code is attached in case anyone wants to take a look, I will try the
> high frequency scenario later.
>

That is not the error. It is superconvergence at the vertices. The real
solution is trigonometric, so your linear interpolants (or whatever you use)
are not going to get the right value in between mesh points. You need to do
a real integral over the whole interval to get the L_2 error.

  Thanks,

 Matt
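
A sketch of that integral for a bilinear interpolant on a uniform grid over
the unit square, using a 2x2 Gauss rule on each cell; the array layout, the
helper name l2_error, and the analytic callback uexact are illustrative
assumptions:

#include <math.h>

/* Nodal values uh are stored row-major with stride m+1; uexact(x,y) is the
   analytic solution; the grid has m x n cells on the unit square. */
double l2_error(int m, int n, const double *uh, double (*uexact)(double,double))
{
  const double g  = 0.5 - 0.5/sqrt(3.0);   /* 2-point Gauss abscissae on [0,1]: g and 1-g */
  const double hx = 1.0/m, hy = 1.0/n;
  double err2 = 0.0;
  for (int j = 0; j < n; j++) {
    for (int i = 0; i < m; i++) {
      const double c0 = uh[j*(m+1)+i],     c1 = uh[j*(m+1)+i+1];
      const double c2 = uh[(j+1)*(m+1)+i], c3 = uh[(j+1)*(m+1)+i+1];
      const double q[2] = {g, 1.0-g};
      for (int a = 0; a < 2; a++) {
        for (int b = 0; b < 2; b++) {
          const double s = q[a], t = q[b];                    /* reference coordinates */
          const double uhq = (1-s)*(1-t)*c0 + s*(1-t)*c1
                           + (1-s)*t*c2     + s*t*c3;         /* bilinear interpolant  */
          const double d   = uhq - uexact((i+s)*hx,(j+t)*hy);
          err2 += 0.25*hx*hy*d*d;                             /* equal Gauss weights   */
        }
      }
    }
  }
  return sqrt(err2);
}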


> On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:
>
>>
>>
>> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang 
>> wrote:
>>
>>> The first plot is the norm with the flag -pc_type lu with respect to
>>> number of grids in one axis (n), and the second plot is the norm without
>>> the flag -pc_type lu.
>>>
>>
>> So you are using the default PC w/o LU. The default is ILU. This will
>> reduce high frequency effectively but is not effective on the low frequency
>> error. Don't expect your algebraic error reduction to be at the same scale
>> as the residual reduction (what KSP measures).
>>
>>
>>>
>
> --
> Wang Weizhuo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Increasing norm with finer mesh

2018-10-08 Thread Weizhuo Wang
The code is attached in case anyone wants to take a look; I will try the
high-frequency scenario later.

On Mon, Oct 8, 2018 at 7:58 PM Mark Adams  wrote:

>
>
> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang  wrote:
>
>> The first plot is the norm with the flag -pc_type lu with respect to
>> number of grids in one axis (n), and the second plot is the norm without
>> the flag -pc_type lu.
>>
>
> So you are using the default PC w/o LU. The default is ILU. This will
> reduce high frequency effectively but is not effective on the low frequency
> error. Don't expect your algebraic error reduction to be at the same scale
> as the residual reduction (what KSP measures).
>
>
>>

-- 
Wang Weizhuo


makefile
Description: Binary data


Helmholtz_demoV2C.cpp
Description: Binary data


Re: [petsc-users] Increasing norm with finer mesh

2018-10-08 Thread Mark Adams
On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang  wrote:

> The first plot is the norm with the flag -pc_type lu with respect to
> number of grids in one axis (n), and the second plot is the norm without
> the flag -pc_type lu.
>

So you are using the default PC without LU. The default is ILU. This will
reduce the high-frequency error effectively but is not effective on the
low-frequency error. Don't expect your algebraic error reduction to be at
the same scale as the residual reduction (what KSP measures).


>


Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Matthew Knepley
On Tue, Oct 2, 2018 at 5:26 PM Weizhuo Wang  wrote:

> I didn't specify a tolerance, it was using the default tolerance. Doesn't
> the asymptoting norm implies finer grid won't help to get finer solution?
>

There are two things going on in your test: discretization error, controlled
by the grid, and algebraic error, controlled by the solver. This makes it
difficult to isolate what is happening. However, it seems clear that your
plot is looking at algebraic error. You can confirm this by using

  -pc_type lu

for the solve. Then all you have is discretization error.

  Thanks,

 Matt
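
A minimal sketch of forcing the direct solve from code, assuming an existing
KSP object ksp and a PetscErrorCode ierr; this is the programmatic
counterpart of -pc_type lu:

  PC pc;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);            /* exact factorization: algebraic error ~ 0 */
  ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);    /* optional: apply the factorization once   */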


> Mark Adams  wrote on Tue, Oct 2, 2018 at 4:11 PM:
>
>>
>>
>> On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang 
>> wrote:
>>
>>> Yes I was using one norm in my Helmholtz code, the example code used 2
>>> norm. But now I am using 2 norm in both code.
>>>
>>>   /*
>>>  Check the error
>>>   */
>>>   ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
>>>   ierr = VecNorm(x,NORM_1,); CHKERRQ(ierr);
>>>   ierr = KSPGetIterationNumber(ksp,); CHKERRQ(ierr);
>>>   ierr = PetscPrintf(PETSC_COMM_WORLD,"Norm of error %g iterations
>>> %D\n",(double)norm/(m*n),its); CHKERRQ(ierr);
>>>
>>>  I made a plot to show the increase:
>>>
>>
>>
>> FYI, this is asymptoting to a constant.  What solver tolerance are
>> you using?
>>
>>
>>>
>>> [image: Norm comparison.png]
>>>
>>> Mark Adams  wrote on Tue, Oct 2, 2018 at 2:27 PM:
>>>


 On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang 
 wrote:

> The example code and makefile are attached below. The whole thing
> started as I tried to build a Helmholtz solver, and the mean error
> (calculated by: sum( | numerical_sol - analytical_sol | / analytical_sol )
> )
>

 This is a one norm. If you use max (instead of sum) then you don't need
 to scale. You do have to be careful about dividing by (near) zero.


> increases as I use finer and finer grids.
>

 What was the rate of increase?


> Then I looked at the example 12 (Laplacian solver) which is similar to
> what I did to see if I have missed something. The example is using 2_norm.
> I have made some minor modifications (3 places) on the code, you can 
> search
> 'Modified' in the code to see them.
>
> If this helps: I configured the PETSc to use real and double
> precision. Changed the name of the example code from ex12.c to ex12c.c
>
> Thanks for all your reply!
>
> Weizhuo
>
>
> Smith, Barry F. 
>
>
>>Please send your version of the example that computes the mean
>> norm of the grid; I suspect we are talking apples and oranges
>>
>>Barry
>>
>>
>>
>> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang 
>> wrote:
>> >
>> > I also tried to divide the norm by m*n , which is the number of
>> grids, the trend of norm still increases.
>> >
>> > Thanks!
>> >
>> > Weizhuo
>> >
>> > Matthew Knepley 
>> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
>> wrote:
>> > Hi!
>> >
>> > I'm recently trying out the example code provided with the KSP
>> solver (ex12.c). I noticed that the mean norm of the grid increases as I
>> use finer meshes. For example, the mean norm is 5.72e-8 at m=10 n=10.
>> However at m=100, n=100, mean norm increases to 9.55e-6. This seems 
>> counter
>> intuitive, since most of the time error should decreases when using finer
>> grid. Am I doing this wrong?
>> >
>> > The norm is misleading in that it is the l_2 norm, meaning just the
>> sqrt of the sum of the squares of
>> > the vector entries. It should be scaled by the volume element to
>> approximate a scale-independent
>> > norm (like the L_2 norm).
>> >
>> >   Thanks,
>> >
>> >  Matt
>> >
>> > Thanks!
>> > --
>> > Wang Weizhuo
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which 
>> their
>> experiments lead.
>> > -- Norbert Wiener
>> >
>> > https://www.cse.buffalo.edu/~knepley/
>> 
>> >
>> >
>> > --
>> > Wang Weizhuo
>>
>>
>
> --
> Wang Weizhuo
>

>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>
> --
> Wang Weizhuo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Weizhuo Wang
I didn't specify a tolerance; it was using the default. Doesn't the
asymptoting norm imply that a finer grid won't help to get a more accurate
solution?

Mark Adams  wrote on Tue, Oct 2, 2018 at 4:11 PM:

>
>
> On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang  wrote:
>
>> Yes I was using one norm in my Helmholtz code, the example code used 2
>> norm. But now I am using 2 norm in both code.
>>
>>   /*
>>  Check the error
>>   */
>>   ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
>>   ierr = VecNorm(x,NORM_1,); CHKERRQ(ierr);
>>   ierr = KSPGetIterationNumber(ksp,); CHKERRQ(ierr);
>>   ierr = PetscPrintf(PETSC_COMM_WORLD,"Norm of error %g iterations
>> %D\n",(double)norm/(m*n),its); CHKERRQ(ierr);
>>
>>  I made a plot to show the increase:
>>
>
>
> FYI, this is asymptoting to a constant.  What solver tolerance are
> you using?
>
>
>>
>> [image: Norm comparison.png]
>>
>> Mark Adams  wrote on Tue, Oct 2, 2018 at 2:27 PM:
>>
>>>
>>>
>>> On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang 
>>> wrote:
>>>
 The example code and makefile are attached below. The whole thing
 started as I tried to build a Helmholtz solver, and the mean error
 (calculated by: sum( | numerical_sol - analytical_sol | / analytical_sol )
 )

>>>
>>> This is a one norm. If you use max (instead of sum) then you don't need
>>> to scale. You do have to be careful about dividing by (near) zero.
>>>
>>>
 increases as I use finer and finer grids.

>>>
>>> What was the rate of increase?
>>>
>>>
 Then I looked at the example 12 (Laplacian solver) which is similar to
 what I did to see if I have missed something. The example is using 2_norm.
 I have made some minor modifications (3 places) on the code, you can search
 'Modified' in the code to see them.

 If this helps: I configured the PETSc to use real and double precision.
 Changed the name of the example code from ex12.c to ex12c.c

 Thanks for all your reply!

 Weizhuo


 Smith, Barry F. 


>Please send your version of the example that computes the mean norm
> of the grid; I suspect we are talking apples and oranges
>
>Barry
>
>
>
> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang 
> wrote:
> >
> > I also tried to divide the norm by m*n , which is the number of
> grids, the trend of norm still increases.
> >
> > Thanks!
> >
> > Weizhuo
> >
> > Matthew Knepley 
> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
> wrote:
> > Hi!
> >
> > I'm recently trying out the example code provided with the KSP
> solver (ex12.c). I noticed that the mean norm of the grid increases as I
> use finer meshes. For example, the mean norm is 5.72e-8 at m=10 n=10.
> However at m=100, n=100, mean norm increases to 9.55e-6. This seems 
> counter
> intuitive, since most of the time error should decreases when using finer
> grid. Am I doing this wrong?
> >
> > The norm is misleading in that it is the l_2 norm, meaning just the
> sqrt of the sum of the squares of
> > the vector entries. It should be scaled by the volume element to
> approximate a scale-independent
> > norm (like the L_2 norm).
> >
> >   Thanks,
> >
> >  Matt
> >
> > Thanks!
> > --
> > Wang Weizhuo
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
> 
> >
> >
> > --
> > Wang Weizhuo
>
>

 --
 Wang Weizhuo

>>>
>>
>> --
>> Wang Weizhuo
>>
>

-- 
Wang Weizhuo


Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Mark Adams
On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang  wrote:

> Yes I was using one norm in my Helmholtz code, the example code used 2
> norm. But now I am using 2 norm in both code.
>
>   /*
>  Check the error
>   */
>   ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
>   ierr = VecNorm(x,NORM_1,&norm); CHKERRQ(ierr);
>   ierr = KSPGetIterationNumber(ksp,&its); CHKERRQ(ierr);
>   ierr = PetscPrintf(PETSC_COMM_WORLD,"Norm of error %g iterations
> %D\n",(double)norm/(m*n),its); CHKERRQ(ierr);
>
>  I made a plot to show the increase:
>


FYI, this is asymptoting to a constant.  What solver tolerance are
you using?


>
> [image: Norm comparison.png]
>
> Mark Adams  wrote on Tue, Oct 2, 2018 at 2:27 PM:
>
>>
>>
>> On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang 
>> wrote:
>>
>>> The example code and makefile are attached below. The whole thing
>>> started as I tried to build a Helmholtz solver, and the mean error
>>> (calculated by: sum( | numerical_sol - analytical_sol | / analytical_sol )
>>> )
>>>
>>
>> This is a one norm. If you use max (instead of sum) then you don't need
>> to scale. You do have to be careful about dividing by (near) zero.
>>
>>
>>> increases as I use finer and finer grids.
>>>
>>
>> What was the rate of increase?
>>
>>
>>> Then I looked at the example 12 (Laplacian solver) which is similar to
>>> what I did to see if I have missed something. The example is using 2_norm.
>>> I have made some minor modifications (3 places) on the code, you can search
>>> 'Modified' in the code to see them.
>>>
>>> If this helps: I configured the PETSc to use real and double precision.
>>> Changed the name of the example code from ex12.c to ex12c.c
>>>
>>> Thanks for all your reply!
>>>
>>> Weizhuo
>>>
>>>
>>> Smith, Barry F. 
>>>
>>>
Please send your version of the example that computes the mean norm
 of the grid; I suspect we are talking apples and oranges

Barry



 > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang 
 wrote:
 >
 > I also tried to divide the norm by m*n , which is the number of
 grids, the trend of norm still increases.
 >
 > Thanks!
 >
 > Weizhuo
 >
 > Matthew Knepley 
 > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
 wrote:
 > Hi!
 >
 > I'm recently trying out the example code provided with the KSP solver
 (ex12.c). I noticed that the mean norm of the grid increases as I use finer
 meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
 m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
 since most of the time error should decreases when using finer grid. Am I
 doing this wrong?
 >
 > The norm is misleading in that it is the l_2 norm, meaning just the
 sqrt of the sum of the squares of
 > the vector entries. It should be scaled by the volume element to
 approximate a scale-independent
 > norm (like the L_2 norm).
 >
 >   Thanks,
 >
 >  Matt
 >
 > Thanks!
 > --
 > Wang Weizhuo
 >
 >
 > --
 > What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 > -- Norbert Wiener
 >
 > https://www.cse.buffalo.edu/~knepley/
 
 >
 >
 > --
 > Wang Weizhuo


>>>
>>> --
>>> Wang Weizhuo
>>>
>>
>
> --
> Wang Weizhuo
>


Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Weizhuo Wang
Yes, I was using the one-norm in my Helmholtz code; the example code used
the 2-norm. But now I am using the 2-norm in both codes.

  /*
 Check the error
  */
  ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
  ierr = VecNorm(x,NORM_1,&norm); CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp,&its); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Norm of error %g iterations
%D\n",(double)norm/(m*n),its); CHKERRQ(ierr);

 I made a plot to show the increase:

[image: Norm comparison.png]

Mark Adams  wrote on Tue, Oct 2, 2018 at 2:27 PM:

>
>
> On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang  wrote:
>
>> The example code and makefile are attached below. The whole thing started
>> as I tried to build a Helmholtz solver, and the mean error (calculated by:
>> sum( | numerical_sol - analytical_sol | / analytical_sol ) )
>>
>
> This is a one norm. If you use max (instead of sum) then you don't need to
> scale. You do have to be careful about dividing by (near) zero.
>
>
>> increases as I use finer and finer grids.
>>
>
> What was the rate of increase?
>
>
>> Then I looked at the example 12 (Laplacian solver) which is similar to
>> what I did to see if I have missed something. The example is using 2_norm.
>> I have made some minor modifications (3 places) on the code, you can search
>> 'Modified' in the code to see them.
>>
>> If this helps: I configured the PETSc to use real and double precision.
>> Changed the name of the example code from ex12.c to ex12c.c
>>
>> Thanks for all your reply!
>>
>> Weizhuo
>>
>>
>> Smith, Barry F. 
>>
>>
>>>Please send your version of the example that computes the mean norm
>>> of the grid; I suspect we are talking apples and oranges
>>>
>>>Barry
>>>
>>>
>>>
>>> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang 
>>> wrote:
>>> >
>>> > I also tried to divide the norm by m*n , which is the number of grids,
>>> the trend of norm still increases.
>>> >
>>> > Thanks!
>>> >
>>> > Weizhuo
>>> >
>>> > Matthew Knepley 
>>> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
>>> wrote:
>>> > Hi!
>>> >
>>> > I'm recently trying out the example code provided with the KSP solver
>>> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
>>> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
>>> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
>>> since most of the time error should decreases when using finer grid. Am I
>>> doing this wrong?
>>> >
>>> > The norm is misleading in that it is the l_2 norm, meaning just the
>>> sqrt of the sum of the squares of
>>> > the vector entries. It should be scaled by the volume element to
>>> approximate a scale-independent
>>> > norm (like the L_2 norm).
>>> >
>>> >   Thanks,
>>> >
>>> >  Matt
>>> >
>>> > Thanks!
>>> > --
>>> > Wang Weizhuo
>>> >
>>> >
>>> > --
>>> > What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> > -- Norbert Wiener
>>> >
>>> > https://www.cse.buffalo.edu/~knepley/
>>> 
>>> >
>>> >
>>> > --
>>> > Wang Weizhuo
>>>
>>>
>>
>> --
>> Wang Weizhuo
>>
>

-- 
Wang Weizhuo


Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Mark Adams
On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang  wrote:

> The example code and makefile are attached below. The whole thing started
> as I tried to build a Helmholtz solver, and the mean error (calculated by:
> sum( | numerical_sol - analytical_sol | / analytical_sol ) )
>

This is a one norm. If you use max (instead of sum) then you don't need to
scale. You do have to be careful about dividing by (near) zero.
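
A sketch of a guarded pointwise relative error reported in the max norm; the
helper name, its arguments, and the floor value are illustrative assumptions:

#include <math.h>

/* Pointwise relative error in the max norm, with a floor on the denominator
   so that (near-)zero values of the analytic solution do not blow it up. */
double max_relative_error(int N, const double *num, const double *exact, double floor_val)
{
  double worst = 0.0;
  for (int k = 0; k < N; k++) {
    const double denom = fmax(fabs(exact[k]), floor_val);
    const double rel   = fabs(num[k] - exact[k]) / denom;
    if (rel > worst) worst = rel;
  }
  return worst;
}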


> increases as I use finer and finer grids.
>

What was the rate of increase?


> Then I looked at the example 12 (Laplacian solver) which is similar to
> what I did to see if I have missed something. The example is using 2_norm.
> I have made some minor modifications (3 places) on the code, you can search
> 'Modified' in the code to see them.
>
> If this helps: I configured the PETSc to use real and double precision.
> Changed the name of the example code from ex12.c to ex12c.c
>
> Thanks for all your reply!
>
> Weizhuo
>
>
> Smith, Barry F. 
>
>
>>Please send your version of the example that computes the mean norm of
>> the grid; I suspect we are talking apples and oranges
>>
>>Barry
>>
>>
>>
>> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang  wrote:
>> >
>> > I also tried to divide the norm by m*n , which is the number of grids,
>> the trend of norm still increases.
>> >
>> > Thanks!
>> >
>> > Weizhuo
>> >
>> > Matthew Knepley 
>> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
>> wrote:
>> > Hi!
>> >
>> > I'm recently trying out the example code provided with the KSP solver
>> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
>> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
>> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
>> since most of the time error should decreases when using finer grid. Am I
>> doing this wrong?
>> >
>> > The norm is misleading in that it is the l_2 norm, meaning just the
>> sqrt of the sum of the squares of
>> > the vector entries. It should be scaled by the volume element to
>> approximate a scale-independent
>> > norm (like the L_2 norm).
>> >
>> >   Thanks,
>> >
>> >  Matt
>> >
>> > Thanks!
>> > --
>> > Wang Weizhuo
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> > -- Norbert Wiener
>> >
>> > https://www.cse.buffalo.edu/~knepley/
>> >
>> >
>> > --
>> > Wang Weizhuo
>>
>>
>
> --
> Wang Weizhuo
>


Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Weizhuo Wang
The example code and makefile are attached below. The whole thing started
as I tried to build a Helmholtz solver, and the mean error (calculated by:
sum( | numerical_sol - analytical_sol | / analytical_sol ) ) increases as I
use finer and finer grids. Then I looked at example 12 (the Laplacian
solver), which is similar to what I did, to see if I have missed something.
The example uses the 2-norm. I have made some minor modifications (3 places)
to the code; you can search 'Modified' in the code to see them.

If this helps: I configured PETSc to use real and double precision, and
changed the name of the example code from ex12.c to ex12c.c.

Thanks for all your reply!

Weizhuo


Smith, Barry F. 


>Please send your version of the example that computes the mean norm of
> the grid; I suspect we are talking apples and oranges
>
>Barry
>
>
>
> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang  wrote:
> >
> > I also tried to divide the norm by m*n , which is the number of grids,
> the trend of norm still increases.
> >
> > Thanks!
> >
> > Weizhuo
> >
> > Matthew Knepley 
> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
> wrote:
> > Hi!
> >
> > I'm recently trying out the example code provided with the KSP solver
> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
> since most of the time error should decreases when using finer grid. Am I
> doing this wrong?
> >
> > The norm is misleading in that it is the l_2 norm, meaning just the sqrt
> of the sum of the squares of
> > the vector entries. It should be scaled by the volume element to
> approximate a scale-independent
> > norm (like the L_2 norm).
> >
> >   Thanks,
> >
> >  Matt
> >
> > Thanks!
> > --
> > Wang Weizhuo
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
> >
> >
> > --
> > Wang Weizhuo
>
>

-- 
Wang Weizhuo

static char help[] = "Solves a linear system in parallel with KSP.\n\
Input parameters include:\n\
  -m: number of mesh points in x-direction\n\
  -n: number of mesh points in y-direction\n\n";

/*T
   Concepts: KSP^solving a system of linear equations
   Concepts: KSP^Laplacian, 2d
   Concepts: PC^registering preconditioners
   Processors: n
T*/

/*
   Demonstrates registering a new preconditioner (PC) type.

   To register a PC type whose code is linked into the executable,
   use PCRegister(). To register a PC type in a dynamic library use PCRegister()

   Also provide the prototype for your PCCreate_XXX() function. In
   this example we use the PETSc implementation of the Jacobi method,
   PCCreate_Jacobi() just as an example.

   See the file src/ksp/pc/impls/jacobi/jacobi.c for details on how to
   write a new PC component.

   See the manual page PCRegister() for details on how to register a method.
*/

/*
  Include "petscksp.h" so that we can use KSP solvers.  Note that this file
  automatically includes:
     petscsys.h    - base PETSc routines    petscvec.h - vectors
     petscmat.h    - matrices
     petscis.h     - index sets             petscksp.h - Krylov subspace methods
     petscviewer.h - viewers                petscpc.h  - preconditioners
*/
#include <petscksp.h>

PETSC_EXTERN PetscErrorCode PCCreate_Jacobi(PC);

int main(int argc,char **args)
{
  Vec            x,b,u;     /* approx solution, RHS, exact solution */
  Mat            A;         /* linear system matrix */
  KSP            ksp;       /* linear solver context */
  PetscReal      norm;      /* norm of solution error */
  PetscInt       i,j,Ii,J,Istart,Iend,m = 8,n = 7,its;
  PetscErrorCode ierr;
  PetscScalar    v,one = 1.0;
  PC             pc;        /* preconditioner context */

  ierr = PetscInitialize(&argc,&args,(char*)0,help);if (ierr) return ierr;
  ierr = PetscOptionsGetInt(NULL,NULL,"-m",&m,NULL);CHKERRQ(ierr);
  ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr);

  /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 Compute the matrix and right-hand-side vector that define
 the linear system, Ax = b.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
  /*
 Create parallel matrix, specifying only its global dimensions.
 When using MatCreate(), the matrix format can be specified at
 runtime. Also, the parallel partitioning of the matrix can be
 determined by PETSc at runtime.
  */
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m*n,m*n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  /*
 Currently, all PETSc parallel matrix formats are partitioned by
 contiguous chunks of rows across the processors.  

Re: [petsc-users] Increasing norm with finer mesh

2018-10-02 Thread Mark Adams
As Matt said, you should see the initial 2-norm residual asymptote to a
constant with scaling, but it will rise.

I prefer the max norm for this reason. You can use -ksp_monitor_max, though
this apparently computes an extra residual, and I don't understand why it
should ...

If this does not asymptote to a constant and your problem is linear, you
have a bug.
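
A minimal sketch of checking the error in the max norm, which needs no
volume scaling, assuming the error vector has already been formed in x and a
PetscErrorCode ierr exists:

  PetscReal maxerr;
  ierr = VecNorm(x,NORM_INFINITY,&maxerr);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Max-norm error %g\n",(double)maxerr);CHKERRQ(ierr);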

On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang  wrote:

> Hi!
>
> I'm recently trying out the example code provided with the KSP solver
> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
> since most of the time error should decreases when using finer grid. Am I
> doing this wrong?
>
> Thanks!
> --
> Wang Weizhuo
>


Re: [petsc-users] Increasing norm with finer mesh

2018-10-01 Thread Smith, Barry F.

   Please send your version of the example that computes the mean norm of the 
grid; I suspect we are talking apples and oranges

   Barry



> On Oct 1, 2018, at 7:51 PM, Weizhuo Wang  wrote:
> 
> I also tried to divide the norm by m*n , which is the number of grids, the 
> trend of norm still increases.
> 
> Thanks!
> 
> Weizhuo
> 
> Matthew Knepley  wrote on Mon, Oct 1, 2018 at 7:45 PM:
> On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang  wrote:
> Hi!
> 
> I'm recently trying out the example code provided with the KSP solver 
> (ex12.c). I noticed that the mean norm of the grid increases as I use finer 
> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at m=100, 
> n=100, mean norm increases to 9.55e-6. This seems counter intuitive, since 
> most of the time error should decreases when using finer grid. Am I doing 
> this wrong?
> 
> The norm is misleading in that it is the l_2 norm, meaning just the sqrt of 
> the sum of the squares of
> the vector entries. It should be scaled by the volume element to approximate 
> a scale-independent
> norm (like the L_2 norm).
> 
>   Thanks,
> 
>  Matt
>  
> Thanks! 
> -- 
> Wang Weizhuo
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
> 
> 
> -- 
> Wang Weizhuo



Re: [petsc-users] Increasing norm with finer mesh

2018-10-01 Thread Matthew Knepley
On Mon, Oct 1, 2018 at 8:51 PM Weizhuo Wang  wrote:

> I also tried to divide the norm by m*n , which is the number of grids, the
> trend of norm still increases.
>

We need to be precise. First, look at the initial residual, because that is
what you control with the initial
guess. You are saying that the initial residual does not asymptote? I would
be reluctant to believe that.

  Thanks,

 Matt
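
A minimal sketch of checking the initial residual directly, assuming Mat A,
Vec b, the initial guess x, a work vector r, and a PetscErrorCode ierr have
already been created:

  PetscReal rnorm;
  ierr = MatMult(A,x,r);CHKERRQ(ierr);            /* r = A*x0     */
  ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);         /* r = b - A*x0 */
  ierr = VecNorm(r,NORM_2,&rnorm);CHKERRQ(ierr);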


> Thanks!
>
> Weizhuo
>
> Matthew Knepley  wrote on Mon, Oct 1, 2018 at 7:45 PM:
>
>> On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang 
>> wrote:
>>
>>> Hi!
>>>
>>> I'm recently trying out the example code provided with the KSP solver
>>> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
>>> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
>>> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
>>> since most of the time error should decreases when using finer grid. Am I
>>> doing this wrong?
>>>
>>
>> The norm is misleading in that it is the l_2 norm, meaning just the sqrt
>> of the sum of the squares of
>> the vector entries. It should be scaled by the volume element to
>> approximate a scale-independent
>> norm (like the L_2 norm).
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks!
>>> --
>>> Wang Weizhuo
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>
>
> --
> Wang Weizhuo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Increasing norm with finer mesh

2018-10-01 Thread Weizhuo Wang
I also tried to divide the norm by m*n, which is the number of grid points,
but the trend of the norm still increases.

Thanks!

Weizhuo

Matthew Knepley  wrote on Mon, Oct 1, 2018 at 7:45 PM:

> On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang  wrote:
>
>> Hi!
>>
>> I'm recently trying out the example code provided with the KSP solver
>> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
>> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
>> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
>> since most of the time error should decreases when using finer grid. Am I
>> doing this wrong?
>>
>
> The norm is misleading in that it is the l_2 norm, meaning just the sqrt
> of the sum of the squares of
> the vector entries. It should be scaled by the volume element to
> approximate a scale-independent
> norm (like the L_2 norm).
>
>   Thanks,
>
>  Matt
>
>
>> Thanks!
>> --
>> Wang Weizhuo
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


-- 
Wang Weizhuo


Re: [petsc-users] Increasing norm with finer mesh

2018-10-01 Thread Matthew Knepley
On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang  wrote:

> Hi!
>
> I'm recently trying out the example code provided with the KSP solver
> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
> meshes. For example, the mean norm is 5.72e-8 at m=10 n=10. However at
> m=100, n=100, mean norm increases to 9.55e-6. This seems counter intuitive,
> since most of the time error should decreases when using finer grid. Am I
> doing this wrong?
>

The norm is misleading in that it is the l_2 norm, meaning just the sqrt of
the sum of the squares of
the vector entries. It should be scaled by the volume element to
approximate a scale-independent
norm (like the L_2 norm).

  Thanks,

 Matt
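
A sketch of that scaling for a uniform m x n grid on the unit square,
assuming the error vector has already been formed in x and a PetscErrorCode
ierr exists (the spacings hx and hy are illustrative):

  PetscReal nrm, hx = 1.0/(m+1), hy = 1.0/(n+1);
  ierr = VecNorm(x,NORM_2,&nrm);CHKERRQ(ierr);        /* discrete l_2 norm of the error */
  nrm *= PetscSqrtReal(hx*hy);                        /* scale by the volume element    */
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Scaled error %g\n",(double)nrm);CHKERRQ(ierr);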


> Thanks!
> --
> Wang Weizhuo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/