I didn't specify a tolerance; it was using the default tolerance. Doesn't the
asymptoting norm imply that a finer grid won't help produce a more accurate
solution?
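For reference, the KSP default relative tolerance is 1e-5. A minimal sketch of
tightening it, assuming the ksp object from ex12.c (the value 1e-12 below is
only illustrative):

  /* Tighten the relative tolerance so the algebraic (solver) error stays well
     below the discretization error on the finer grids. */
  ierr = KSPSetTolerances(ksp,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT); CHKERRQ(ierr);

or equivalently at run time with -ksp_rtol 1e-12.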

Mark Adams <mfad...@lbl.gov> wrote on Tue, Oct 2, 2018 at 4:11 PM:

>
>
> On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang <weizh...@illinois.edu> wrote:
>
>> Yes, I was using the 1-norm in my Helmholtz code while the example code used
>> the 2-norm. Now I am using the 2-norm in both codes.
>>
>>   /*
>>      Check the error
>>   */
>>   ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
>>   ierr = VecNorm(x,NORM_1,&norm); CHKERRQ(ierr);
>>   ierr = KSPGetIterationNumber(ksp,&its); CHKERRQ(ierr);
>>   ierr = PetscPrintf(PETSC_COMM_WORLD,"Norm of error %g iterations %D\n",(double)norm/(m*n),its); CHKERRQ(ierr);
>>
>>  I made a plot to show the increase:
>>
>
>
> FYI, this is asymptoting to a constant.  What solver tolerance are
> you using?
>
>
>>
>> [image: Norm comparison.png]
>>
>> Mark Adams <mfad...@lbl.gov> wrote on Tue, Oct 2, 2018 at 2:27 PM:
>>
>>>
>>>
>>> On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang <weizh...@illinois.edu>
>>> wrote:
>>>
>>>> The example code and makefile are attached below. The whole thing
>>>> started when I tried to build a Helmholtz solver and noticed that the mean
>>>> error (calculated as: sum( |numerical_sol - analytical_sol| / analytical_sol ))
>>>>
>>>
>>> This is a 1-norm. If you use the max (instead of the sum) then you don't
>>> need to scale. You do have to be careful about dividing by (near) zero.
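A minimal sketch of such a check, assuming the numerical and analytical
solutions are available as local C arrays; num[], ana[], the local length N,
and the cutoff eps are illustrative names and values, not from the thread:

  /*
     Max relative error, skipping entries where the analytic value is (near) zero.
  */
  PetscReal errmax = 0.0, eps = 1.0e-12;
  for (PetscInt i = 0; i < N; ++i) {
    PetscReal denom = PetscAbsReal(ana[i]);
    if (denom > eps) {
      PetscReal rel = PetscAbsReal(num[i] - ana[i])/denom;
      if (rel > errmax) errmax = rel;
    }
  }
  /* In parallel, follow with a max-reduction, e.g.
     MPIU_Allreduce(MPI_IN_PLACE,&errmax,1,MPIU_REAL,MPI_MAX,PETSC_COMM_WORLD); */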
>>>
>>>
>>>> increases as I use finer and finer grids.
>>>>
>>>
>>> What was the rate of increase?
>>>
>>>
>>>> Then I looked at example 12 (the Laplacian solver), which is similar to
>>>> what I did, to see if I had missed something. The example uses the 2-norm.
>>>> I made some minor modifications (in 3 places) to the code; you can search
>>>> for 'Modified' in the code to find them.
>>>>
>>>> If this helps: I configured PETSc to use real numbers and double precision,
>>>> and changed the name of the example code from ex12.c to ex12c.c.
>>>>
>>>> Thanks for all your replies!
>>>>
>>>> Weizhuo
>>>>
>>>>
>>>> Smith, Barry F. <bsm...@mcs.anl.gov>
>>>>
>>>>
>>>>>    Please send your version of the example that computes the mean norm
>>>>> of the grid; I suspect we are talking apples and oranges
>>>>>
>>>>>    Barry
>>>>>
>>>>>
>>>>>
>>>>> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang <weizh...@illinois.edu>
>>>>> wrote:
>>>>> >
>>>>> > I also tried dividing the norm by m*n, which is the number of grid
>>>>> points, but the norm still increases.
>>>>> >
>>>>> > Thanks!
>>>>> >
>>>>> > Weizhuo
>>>>> >
>>>>> > Matthew Knepley <knep...@gmail.com>
>>>>> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang <weizh...@illinois.edu>
>>>>> wrote:
>>>>> > Hi!
>>>>> >
>>>>> > I recently tried out the example code provided with the KSP
>>>>> solver (ex12.c). I noticed that the mean norm of the error increases as I
>>>>> use finer meshes. For example, the mean norm is 5.72e-8 at m=10, n=10, but
>>>>> at m=100, n=100 it increases to 9.55e-6. This seems counterintuitive, since
>>>>> the error should usually decrease on a finer grid. Am I doing something
>>>>> wrong?
>>>>> >
>>>>> > The norm is misleading in that it is the l_2 norm, meaning just the
>>>>> sqrt of the sum of the squares of
>>>>> > the vector entries. It should be scaled by the volume element to
>>>>> approximate a scale-independent
>>>>> > norm (like the L_2 norm).
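In symbols, a sketch for a uniform grid on the unit square with spacings h_x
and h_y:

  \[
    \|e\|_{\ell_2} = \Big(\sum_{i,j} e_{ij}^2\Big)^{1/2},
    \qquad
    \|e\|_{L_2} \approx \Big(h_x h_y \sum_{i,j} e_{ij}^2\Big)^{1/2}
               = \sqrt{h_x h_y}\;\|e\|_{\ell_2},
  \]

so with h_x h_y on the order of 1/(m n), the unscaled l_2 norm exceeds the
scale-independent L_2 norm by roughly a factor of sqrt(m n).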
>>>>> >
>>>>> >   Thanks,
>>>>> >
>>>>> >      Matt
>>>>> >
>>>>> > Thanks!
>>>>> > --
>>>>> > Wang Weizhuo
>>>>> >
>>>>> >
>>>>> > --
>>>>> > What most experimenters take for granted before they begin their
>>>>> experiments is infinitely more interesting than any results to which their
>>>>> experiments lead.
>>>>> > -- Norbert Wiener
>>>>> >
>>>>> > https://www.cse.buffalo.edu/~knepley/
>>>>> >
>>>>> >
>>>>> > --
>>>>> > Wang Weizhuo
>>>>>
>>>>>
>>>>
>>>> --
>>>> Wang Weizhuo
>>>>
>>>
>>
>> --
>> Wang Weizhuo
>>
>

-- 
Wang Weizhuo
