Thanks. I missed something earlier in the KSPView output:

>> using UNPRECONDITIONED norm type for convergence test

Please add the options 

>>>> -ksp_monitor_true_residual -mg_levels_ksp_monitor_true_residual 

It is using the unpreconditioned residual norms for convergence testing but we 
are printing the preconditioned norms.
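
With those two added you should see the preconditioned norm and the true
residual norm side by side for both the outer solve and the level smoothers;
something along these lines (substitute your own driver for ./your_app):

    ./your_app -ksp_type cg -pc_type mg \
        -ksp_monitor -ksp_monitor_true_residual \
        -mg_levels_ksp_monitor -mg_levels_ksp_monitor_true_residual \
        -ksp_converged_reason -mg_levels_ksp_converged_reason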

Barry


> On Sep 29, 2025, at 11:12 AM, Moral Sanchez, Elena 
> <[email protected]> wrote:
> 
> This is the output:
>     Residual norms for mg_levels_1_ solve.
>     0 KSP Residual norm 2.249726733143e+00
>     1 KSP Residual norm 1.433120400946e+00
>     2 KSP Residual norm 1.169262560123e+00
>     3 KSP Residual norm 1.323528716607e+00
>     4 KSP Residual norm 5.006323254234e-01
>     5 KSP Residual norm 3.569836784785e-01
>     6 KSP Residual norm 2.493182937513e-01
>     7 KSP Residual norm 3.038202502298e-01
>     8 KSP Residual norm 2.780214194402e-01
>     9 KSP Residual norm 1.676826341491e-01
>    10 KSP Residual norm 1.209985378713e-01
>    11 KSP Residual norm 9.445076689969e-02
>    12 KSP Residual norm 8.308555284580e-02
>    13 KSP Residual norm 5.472865592585e-02
>    14 KSP Residual norm 4.357870564398e-02
>    15 KSP Residual norm 5.079681292439e-02
>     Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
>     Residual norms for mg_levels_1_ solve.
>     0 KSP Residual norm 5.079681292439e-02
>     1 KSP Residual norm 2.934938644003e-02
>     2 KSP Residual norm 3.257065831294e-02
>     3 KSP Residual norm 4.143063876867e-02
>     4 KSP Residual norm 4.822471409489e-02
>     5 KSP Residual norm 3.197538246153e-02
>     6 KSP Residual norm 3.461217019835e-02
>     7 KSP Residual norm 3.410193775327e-02
>     8 KSP Residual norm 4.690424294464e-02
>     9 KSP Residual norm 3.366148892800e-02
>    10 KSP Residual norm 4.068015727689e-02
>    11 KSP Residual norm 2.658836123104e-02
>    12 KSP Residual norm 2.826244186003e-02
>    13 KSP Residual norm 2.981793619508e-02
>    14 KSP Residual norm 3.525455091450e-02
>    15 KSP Residual norm 2.331539121838e-02
>     Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
>     Residual norms for mg_levels_1_ solve.
>     0 KSP Residual norm 2.421498365806e-02
>     1 KSP Residual norm 1.761072112362e-02
>     2 KSP Residual norm 1.400842489042e-02
>     3 KSP Residual norm 1.419665483348e-02
>     4 KSP Residual norm 1.617590701667e-02
>     5 KSP Residual norm 1.354824081005e-02
>     6 KSP Residual norm 1.387252917475e-02
>     7 KSP Residual norm 1.514043102087e-02
>     8 KSP Residual norm 1.275811124745e-02
>     9 KSP Residual norm 1.241039155981e-02
>    10 KSP Residual norm 9.585207801652e-03
>    11 KSP Residual norm 9.022641230732e-03
>    12 KSP Residual norm 1.187709152046e-02
>    13 KSP Residual norm 1.084880112494e-02
>    14 KSP Residual norm 8.194750346781e-03
>    15 KSP Residual norm 7.614246199165e-03
>     Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
>     Residual norms for mg_levels_1_ solve.
>     0 KSP Residual norm 7.614246199165e-03
>     1 KSP Residual norm 5.620014684145e-03
>     2 KSP Residual norm 6.643368363907e-03
>     3 KSP Residual norm 8.708642393659e-03
>     4 KSP Residual norm 6.401852907459e-03
>     5 KSP Residual norm 7.230576215262e-03
>     6 KSP Residual norm 6.204081601285e-03
>     7 KSP Residual norm 7.038656665944e-03
>     8 KSP Residual norm 7.194079694050e-03
>     9 KSP Residual norm 6.353576889135e-03
>    10 KSP Residual norm 7.313589502731e-03
>    11 KSP Residual norm 6.643320423193e-03
>    12 KSP Residual norm 7.235443182108e-03
>    13 KSP Residual norm 4.971292307201e-03
>    14 KSP Residual norm 5.357933842147e-03
>    15 KSP Residual norm 5.841682994497e-03
>     Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
> 
> From: Barry Smith <[email protected]>
> Sent: 29 September 2025 15:56:33
> To: Moral Sanchez, Elena
> Cc: Mark Adams; petsc-users
> Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at 
> the finest level
>  
> 
>   I asked you to run with 
> 
>>>>  -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason 
>>>> -mg_levels_ksp_converged_reason
> 
> you chose not to, delaying the process of understanding what is happening.
> 
>   Please run with those options and send the output. My guess is that you are 
> computing the "residual norms" in your own monitor code, and it is doing so 
> differently than what PETSc does, thus resulting in the appearance of a 
> sufficiently small residual norm, whereas PETSc may not have calculated 
> something that small.
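> 
> A quick way to cross-check (a petsc4py sketch, assuming your monitor is
> attached with KSP.setMonitor and ksp is the smoother's KSP object) is to print
> the true residual ||b - Ax|| next to the norm PETSc hands the monitor:
> 
>     def check_monitor(ksp, its, rnorm):
>         # rnorm is the norm PETSc uses in its convergence test
>         r = ksp.buildResidual()   # true residual b - A x at this iterate
>         print(f"  it {its}: rnorm = {rnorm:.3e}, ||b - Ax|| = {r.norm():.3e}")
> 
>     ksp.setMonitor(check_monitor)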
> 
> Barry
> 
> 
>> On Sep 29, 2025, at 8:39 AM, Moral Sanchez, Elena 
>> <[email protected] <mailto:[email protected]>> 
>> wrote:
>> 
>> Thanks for the hint. I agree that the coarse solve should be much more 
>> "accurate". However, for the moment I am just trying to understand what the 
>> MG is doing exactly. 
>> 
>> I am puzzled to see that the fine grid smoother ("lvl 0") does not stop when 
>> the residual becomes less than 1e-1. It should converge due to the atol. 
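>> 
>> In petsc4py terms, what I am aiming for is equivalent to something like this
>> (a simplified sketch; pc is the MG preconditioner object, and PETSc's level 1
>> is the finest level, i.e. what my printout labels lvl 0):
>> 
>>     smoother = pc.getMGSmoother(1)   # finest-level smoother
>>     smoother.setType('cg')
>>     smoother.setTolerances(rtol=0.1, atol=0.1, max_it=15)
>> 
>> so I expected the iteration to stop as soon as the residual norm drops below
>> max(rtol*r0, atol).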
>> 
>> From: Mark Adams <[email protected]>
>> Sent: 29 September 2025 14:20:56
>> To: Moral Sanchez, Elena
>> Cc: Barry Smith; petsc-users
>> Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at 
>> the finest level
>>  
>> Oh I see the coarse grid solver in your full solver output now.
>> You still want an accurate coarse grid solve. Usually (the default in GAMG) 
>> you use a direct solver on one process, and coarsen until the coarse grid is
>> small enough to make that cheap.
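>> 
>> For example, the usual choice for the coarse solve (assuming the coarse
>> operator is an assembled matrix; a shell/python Mat cannot be factored
>> directly) would be
>> 
>>     -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu
>> 
>> which makes the coarse solve essentially exact.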
>> 
>> On Mon, Sep 29, 2025 at 8:07 AM Moral Sanchez, Elena 
>> <[email protected] <mailto:[email protected]>> 
>> wrote:
>>> Hi, I doubled the system size and changed the tolerances just to show a 
>>> better example of the problem. This is the output of the callbacks in the 
>>> first iteration:
>>>     CG Iter 0/1 | res = 2.25e+00/1.00e-09 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 0/15 | res = 2.25e+00/1.00e-01 | 0.3 s
>>>         MG lvl 0 (s=884): CG Iter 1/15 | res = 1.43e+00/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 2/15 | res = 1.17e+00/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 3/15 | res = 1.32e+00/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 4/15 | res = 5.01e-01/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 5/15 | res = 3.57e-01/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 6/15 | res = 2.49e-01/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 7/15 | res = 3.04e-01/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 8/15 | res = 2.78e-01/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 9/15 | res = 1.68e-01/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 10/15 | res = 1.21e-01/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 11/15 | res = 9.45e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 12/15 | res = 8.31e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 13/15 | res = 5.47e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 14/15 | res = 4.36e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 15/15 | res = 5.08e-02/1.00e-01 | 0.1 s
>>>         ConvergedReason MG lvl 0: 4
>>>         MG lvl -1 (s=524): CG Iter 0/15 | res = 8.15e-02/1.00e-01 | 3.0 s
>>>         ConvergedReason MG lvl -1: 3
>>>         MG lvl 0 (s=884): CG Iter 0/15 | res = 5.08e-02/1.00e-01 | 0.3 s
>>>         MG lvl 0 (s=884): CG Iter 1/15 | res = 2.93e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 2/15 | res = 3.26e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 3/15 | res = 4.14e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 4/15 | res = 4.82e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 5/15 | res = 3.20e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 6/15 | res = 3.46e-02/1.00e-01 | 0.3 s
>>>         MG lvl 0 (s=884): CG Iter 7/15 | res = 3.41e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 8/15 | res = 4.69e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 9/15 | res = 3.37e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 10/15 | res = 4.07e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 11/15 | res = 2.66e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 12/15 | res = 2.83e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 13/15 | res = 2.98e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 14/15 | res = 3.53e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 15/15 | res = 2.33e-02/1.00e-01 | 0.2 s
>>>         ConvergedReason MG lvl 0: 4
>>>     CG Iter 1/1 | res = 2.42e-02/1.00e-09 | 5.6 s
>>>         MG lvl 0 (s=884): CG Iter 0/15 | res = 2.42e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 1/15 | res = 1.76e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 2/15 | res = 1.40e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 3/15 | res = 1.42e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 4/15 | res = 1.62e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 5/15 | res = 1.35e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 6/15 | res = 1.39e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 7/15 | res = 1.51e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 8/15 | res = 1.28e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 9/15 | res = 1.24e-02/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 10/15 | res = 9.59e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 11/15 | res = 9.02e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 12/15 | res = 1.19e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 13/15 | res = 1.08e-02/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 14/15 | res = 8.19e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 15/15 | res = 7.61e-03/1.00e-01 | 0.1 s
>>>         ConvergedReason MG lvl 0: 4
>>>         MG lvl -1 (s=524): CG Iter 0/15 | res = 1.38e-02/1.00e-01 | 5.2 s
>>>         ConvergedReason MG lvl -1: 3
>>>         MG lvl 0 (s=884): CG Iter 0/15 | res = 7.61e-03/1.00e-01 | 0.2 s
>>>         MG lvl 0 (s=884): CG Iter 1/15 | res = 5.62e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 2/15 | res = 6.64e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 3/15 | res = 8.71e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 4/15 | res = 6.40e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 5/15 | res = 7.23e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 6/15 | res = 6.20e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 7/15 | res = 7.04e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 8/15 | res = 7.19e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 9/15 | res = 6.35e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 10/15 | res = 7.31e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 11/15 | res = 6.64e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 12/15 | res = 7.24e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 13/15 | res = 4.97e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 14/15 | res = 5.36e-03/1.00e-01 | 0.1 s
>>>         MG lvl 0 (s=884): CG Iter 15/15 | res = 5.84e-03/1.00e-01 | 0.1 s
>>>         ConvergedReason MG lvl 0: 4    
>>>     CG ConvergedReason: -3 
>>> 
>>> For completeness, I add here the -ksp_view of the whole solver:
>>>     KSP Object: 1 MPI process
>>>       type: cg
>>>         variant HERMITIAN
>>>       maximum iterations=1, nonzero initial guess
>>>       tolerances: relative=1e-08, absolute=1e-09, divergence=10000.
>>>       left preconditioning
>>>       using UNPRECONDITIONED norm type for convergence test
>>>     PC Object: 1 MPI process
>>>       type: mg
>>>         type is MULTIPLICATIVE, levels=2 cycles=v
>>>           Cycles per PCApply=1
>>>           Not using Galerkin computed coarse grid matrices
>>>       Coarse grid solver -- level 0 -------------------------------
>>>         KSP Object: (mg_coarse_) 1 MPI process
>>>           type: cg
>>>             variant HERMITIAN
>>>           maximum iterations=15, nonzero initial guess
>>>           tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>           left preconditioning
>>>           using UNPRECONDITIONED norm type for convergence test
>>>         PC Object: (mg_coarse_) 1 MPI process
>>>           type: none
>>>           linear system matrix = precond matrix:
>>>           Mat Object: 1 MPI process
>>>             type: python
>>>             rows=524, cols=524
>>>             Python: Solver_petsc.LeastSquaresOperator
>>>       Down solver (pre-smoother) on level 1 -------------------------------
>>>         KSP Object: (mg_levels_1_) 1 MPI process
>>>           type: cg
>>>             variant HERMITIAN
>>>           maximum iterations=15, nonzero initial guess
>>>           tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>           left preconditioning
>>>           using UNPRECONDITIONED norm type for convergence test
>>>         PC Object: (mg_levels_1_) 1 MPI process
>>>           type: none
>>>           linear system matrix = precond matrix:
>>>           Mat Object: 1 MPI process
>>>             type: python
>>>             rows=884, cols=884
>>>             Python: Solver_petsc.LeastSquaresOperator
>>>       Up solver (post-smoother) same as down solver (pre-smoother)
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI process
>>>         type: python
>>>         rows=884, cols=884
>>>         Python: Solver_petsc.LeastSquaresOperator
>>>         
>>> Regarding Mark's email: What do you mean by "the whole solver doesn't
>>> have a coarse grid"? I am using my own Restriction and Interpolation 
>>> operators.
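>>> To be concrete, the hierarchy is set up roughly like this in petsc4py (a
>>> simplified sketch; P, R, A_fine and A_coarse stand for my own transfer and
>>> level operators, and ksp is the outer solver):
>>> 
>>>     pc = ksp.getPC()
>>>     pc.setType('mg')
>>>     pc.setMGLevels(2)
>>>     pc.setMGInterpolation(1, P)                   # my interpolation operator
>>>     pc.setMGRestriction(1, R)                     # my restriction operator
>>>     pc.getMGSmoother(1).setOperators(A_fine)      # level 1 (finest) operator
>>>     pc.getMGCoarseSolve().setOperators(A_coarse)  # level 0 (coarse) operator
>>> 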
>>> Thanks for the help,
>>> Elena
>>> 
>>> From: Mark Adams <[email protected]>
>>> Sent: 28 September 2025 20:13:54
>>> To: Barry Smith
>>> Cc: Moral Sanchez, Elena; petsc-users
>>> Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at 
>>> the finest level
>>>  
>>> Not sure why your "whole" solver does not have a coarse grid, but this is
>>> wrong:
>>> 
>>>> KSP Object: (mg_coarse_) 1 MPI process
>>>>   type: cg
>>>>     variant HERMITIAN
>>>>   maximum iterations=100, initial guess is zero
>>>>   tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>> 
>>>> The coarse grid has to be accurate. The defaults are a good place to 
>>>> start: max_it=10,000, rtol=1e-5, atol=1e-30 (ish)
>>> 
>>> On Fri, Sep 26, 2025 at 3:21 PM Barry Smith <[email protected]> wrote:
>>>>   Looks reasonable. Send the output running with 
>>>> 
>>>>    -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason 
>>>> -mg_levels_ksp_converged_reason
>>>> 
>>>>> On Sep 26, 2025, at 1:19 PM, Moral Sanchez, Elena 
>>>>> <[email protected] <mailto:[email protected]>> 
>>>>> wrote:
>>>>> 
>>>>> Dear Barry,
>>>>> 
>>>>> This is -ksp_view for the smoother at the finest level:
>>>>> KSP Object: (mg_levels_1_) 1 MPI process
>>>>>   type: cg
>>>>>     variant HERMITIAN
>>>>>   maximum iterations=10, nonzero initial guess
>>>>>   tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>>>   left preconditioning
>>>>>   using UNPRECONDITIONED norm type for convergence test
>>>>> PC Object: (mg_levels_1_) 1 MPI process
>>>>>   type: none
>>>>>   linear system matrix = precond matrix:
>>>>>   Mat Object: 1 MPI process
>>>>>     type: python
>>>>>     rows=524, cols=524
>>>>>         Python: Solver_petsc.LeastSquaresOperator
>>>>> And at the coarsest level:
>>>>> KSP Object: (mg_coarse_) 1 MPI process
>>>>>   type: cg
>>>>>     variant HERMITIAN
>>>>>   maximum iterations=100, initial guess is zero
>>>>>   tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>>>   left preconditioning
>>>>>   using UNPRECONDITIONED norm type for convergence test
>>>>> PC Object: (mg_coarse_) 1 MPI process
>>>>>   type: none
>>>>>   linear system matrix = precond matrix:
>>>>>   Mat Object: 1 MPI process
>>>>>     type: python
>>>>>     rows=344, cols=344
>>>>>         Python: Solver_petsc.LeastSquaresOperator
>>>>> And for the whole solver:
>>>>> KSP Object: 1 MPI process
>>>>>   type: cg
>>>>>     variant HERMITIAN
>>>>>   maximum iterations=100, nonzero initial guess
>>>>>   tolerances: relative=1e-08, absolute=1e-09, divergence=10000.
>>>>>   left preconditioning
>>>>>   using UNPRECONDITIONED norm type for convergence test
>>>>> PC Object: 1 MPI process
>>>>>   type: mg
>>>>>     type is MULTIPLICATIVE, levels=2 cycles=v
>>>>>       Cycles per PCApply=1
>>>>>       Not using Galerkin computed coarse grid matrices
>>>>>   Coarse grid solver -- level 0 -------------------------------
>>>>>     KSP Object: (mg_coarse_) 1 MPI process
>>>>>       type: cg
>>>>>         variant HERMITIAN
>>>>>       maximum iterations=100, initial guess is zero
>>>>>       tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>>>       left preconditioning
>>>>>       using UNPRECONDITIONED norm type for convergence test
>>>>>     PC Object: (mg_coarse_) 1 MPI process
>>>>>       type: none
>>>>>       linear system matrix = precond matrix:
>>>>>       Mat Object: 1 MPI process
>>>>>         type: python
>>>>>         rows=344, cols=344
>>>>>             Python: Solver_petsc.LeastSquaresOperator
>>>>>   Down solver (pre-smoother) on level 1 -------------------------------
>>>>>     KSP Object: (mg_levels_1_) 1 MPI process
>>>>>       type: cg
>>>>>         variant HERMITIAN
>>>>>       maximum iterations=10, nonzero initial guess
>>>>>       tolerances: relative=0.1, absolute=0.1, divergence=1e+30
>>>>>       left preconditioning
>>>>>       using UNPRECONDITIONED norm type for convergence test
>>>>>     PC Object: (mg_levels_1_) 1 MPI process
>>>>>       type: none
>>>>>       linear system matrix = precond matrix:
>>>>>       Mat Object: 1 MPI process
>>>>>         type: python
>>>>>         rows=524, cols=524
>>>>>             Python: Solver_petsc.LeastSquaresOperator
>>>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>>>   linear system matrix = precond matrix:
>>>>>   Mat Object: 1 MPI process
>>>>>     type: python
>>>>>     rows=524, cols=524
>>>>>         Python: Solver_petsc.LeastSquaresOperator
>>>>> Best,
>>>>> Elena
>>>>> 
>>>>>  
>>>>> From: Barry Smith <[email protected]>
>>>>> Sent: 26 September 2025 19:05:02
>>>>> To: Moral Sanchez, Elena
>>>>> Cc: [email protected]
>>>>> Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG 
>>>>> at the finest level
>>>>>  
>>>>>   
>>>>> Send the output using -ksp_view 
>>>>> 
>>>>> Normally one uses a fixed number of iterations of smoothing on each level
>>>>> with multigrid rather than a tolerance, but yes PETSc should respect such 
>>>>> a tolerance.
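>>>>> 
>>>>> For example, a common smoother setup is a fixed, small number of iterations
>>>>> with the convergence test switched off entirely, e.g.
>>>>> 
>>>>>     -mg_levels_ksp_max_it 3 -mg_levels_ksp_norm_type none
>>>>> 
>>>>> which runs exactly three smoothing sweeps on each level and skips the norm
>>>>> computation.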
>>>>> 
>>>>> Barry
>>>>> 
>>>>> 
>>>>>> On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena 
>>>>>> <[email protected] <mailto:[email protected]>> 
>>>>>> wrote:
>>>>>> 
>>>>>> Hi, 
>>>>>> I am using multigrid (multiplicative) as a preconditioner with a V-cycle 
>>>>>> of two levels. At each level, I am setting CG as the smoother with a
>>>>>> certain tolerance.
>>>>>> 
>>>>>> What I observe is that at the finest level the CG continues iterating
>>>>>> after the residual norm reaches the tolerance (atol) and it only stops 
>>>>>> when reaching the maximum number of iterations at that level. At the 
>>>>>> coarsest level this does not occur and the CG stops when the tolerance 
>>>>>> is reached.
>>>>>> 
>>>>>> I double-checked that the smoother at the finest level has the right 
>>>>>> tolerance. And I am using a Monitor function to track the residual.
>>>>>> 
>>>>>> Do you know how to make the smoother at the finest level stop when 
>>>>>> reaching the tolerance?
>>>>>> 
>>>>>> Cheers,
>>>>>> Elena.
