Re: [petsc-users] subprocess (block) tolerance.

2018-10-04 Thread Smith, Barry F.



> On Oct 4, 2018, at 6:09 PM, HeeHo Park  wrote:
> 
> Barry and Jed,
> 
> Thank you for your answers. I think I need to learn more about domain 
> decomposition as I am a bit confused. 
> Is it true that we are using BiCGStab here to solve the system of equations, 
> with additive Schwarz as the domain decomposition preconditioner, and that the 
> matrix on each block is preconditioned by ILU(0)? 

   You can say it that way. 
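
   For reference, a minimal sketch (not PFLOTRAN source) of that same stack
written against the PETSc C API, assuming the operators are already attached
to the KSP and using the current PetscCall()-style error handling:

   #include <petscksp.h>

   /* BiCGStab outer solve with an additive Schwarz preconditioner; each
      per-rank block is then handled by the PCASM defaults, a single
      application of ILU(0) (KSPPREONLY + PCILU), as in the -ksp_view
      output quoted below. */
   static PetscErrorCode ConfigureFlowSolver(KSP ksp)
   {
     PC pc;

     PetscFunctionBeginUser;
     PetscCall(KSPSetOptionsPrefix(ksp, "flow_"));
     PetscCall(KSPSetType(ksp, KSPBCGS));                 /* solver: bcgs          */
     PetscCall(KSPSetTolerances(ksp, 1e-10, 1e-10, 1e4,   /* rtol, atol, dtol      */
                                PETSC_DEFAULT));
     PetscCall(KSPGetPC(ksp, &pc));
     PetscCall(PCSetType(pc, PCASM));                     /* preconditioner: asm   */
     PetscCall(KSPSetFromOptions(ksp));                   /* honor -flow_* options */
     PetscFunctionReturn(PETSC_SUCCESS);
   }
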
> 
> Thanks,
> 
> 
> On Thu, Oct 4, 2018 at 5:53 PM Smith, Barry F.  wrote:
> 
>   Since 
> 
> KSP Object: (flow_sub_) 1 MPI processes
> type: preonly
> 
> this means only a single iteration of the inner solver is used, so the 
> numbers in red are not used. 
> 
> You could do something like -flow_ksp_type fgmres -flow_sub_ksp_type gmres 
> -flow_sub_ksp_rtol 1.e-2, but it wouldn't help matters. Likely the current 
> values are the best.
> 
>Barry
> 
> 
> > On Oct 4, 2018, at 5:44 PM, HeeHo Park  wrote:
> > 
> > Hi, I'm running PFLOTRAN, and in PFLOTRAN we have flow_ and flow_sub_ 
> > solvers. I was wondering what the red underlined values mean (each block's 
> > tolerances?) and how to change them (would that affect convergence?). The 
> > values marked in bold blue are changed from the linear solver defaults.
> > 
> > FLOW Linear Solver
> >solver: bcgs
> >preconditioner: asm
> >  atol: 1.00E-10
> >  rtol: 1.00E-10
> >  dtol: 1.00E+04
> > maximum iteration: 1
> > KSP Object: (flow_) 8 MPI processes
> >   type: bcgs
> >   maximum iterations=1, initial guess is zero
> >   tolerances:  relative=1e-10, absolute=1e-10, divergence=10000.
> >   left preconditioning
> >   using PRECONDITIONED norm type for convergence test
> > PC Object: (flow_) 8 MPI processes
> >   type: asm
> > Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
> > Additive Schwarz: restriction/interpolation type - RESTRICT
> > [0] number of local blocks = 1
> > [1] number of local blocks = 1
> > [2] number of local blocks = 1
> > [3] number of local blocks = 1
> > [4] number of local blocks = 1
> > [5] number of local blocks = 1
> > [6] number of local blocks = 1
> > [7] number of local blocks = 1
> > Local solve info for each block is in the following KSP and PC objects:
> > - - - - - - - - - - - - - - - - - -
> > [0] local block number 0, size = 1389
> > KSP Object: (flow_sub_) 1 MPI processes
> >   type: preonly
> >   maximum iterations=1, initial guess is zero
> > >>>  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
> >   left preconditioning
> >   using DEFAULT norm type for convergence test
> > PC Object: (flow_sub_) 1 MPI processes
> >   type: ilu
> >   PC has not been set up so information may be incomplete
> > out-of-place factorization
> > 0 levels of fill
> > tolerance for zero pivot 2.22045e-14
> > using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> > matrix ordering: natural
> >   linear system matrix = precond matrix:
> >   Mat Object: (flow_) 1 MPI processes
> > type: seqbaij
> > rows=1389, cols=1389, bs=3
> > total: nonzeros=20025, allocated nonzeros=20025
> > total number of mallocs used during MatSetValues calls =0
> > block size is 3
> > - - - - - - - - - - - - - - - - - -
> > 
> > -- 
> > HeeHo Daniel Park
> 
> 
> 
> -- 
> HeeHo Daniel Park



Re: [petsc-users] subprocess (block) tolerance.

2018-10-04 Thread HeeHo Park
Barry and Jed,

Thank you for your answers. I think I need to learn more about domain
decomposition as I am a bit confused.
Is it true that we are using BiCGStab here to solve the system of
equations, with additive Schwarz as the domain decomposition
preconditioner, and that the matrix on each block is preconditioned
by ILU(0)?

Thanks,


On Thu, Oct 4, 2018 at 5:53 PM Smith, Barry F.  wrote:

>
>   Since
>
> KSP Object: (flow_sub_) 1 MPI processes
> type: preonly
>
> this means only a single iteration of the inner solver is used, so the
> numbers in red are not used.
>
> You could do something like -flow_ksp_type fgmres -flow_sub_ksp_type gmres
> -flow_sub_ksp_rtol 1.e-2, but it wouldn't help matters. Likely the current
> values are the best.
>
>Barry
>
>
> > On Oct 4, 2018, at 5:44 PM, HeeHo Park  wrote:
> >
> > Hi, I'm running PFLOTRAN, and in PFLOTRAN we have flow_ and flow_sub_
> > solvers. I was wondering what the red underlined values mean (each block's
> > tolerances?) and how to change them (would that affect convergence?). The
> > values marked in bold blue are changed from the linear solver defaults.
> >
> > FLOW Linear Solver
> >solver: bcgs
> >preconditioner: asm
> >  atol: 1.00E-10
> >  rtol: 1.00E-10
> >  dtol: 1.00E+04
> > maximum iteration: 1
> > KSP Object: (flow_) 8 MPI processes
> >   type: bcgs
> >   maximum iterations=1, initial guess is zero
> >   tolerances:  relative=1e-10, absolute=1e-10, divergence=10000.
> >   left preconditioning
> >   using PRECONDITIONED norm type for convergence test
> > PC Object: (flow_) 8 MPI processes
> >   type: asm
> > Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
> > Additive Schwarz: restriction/interpolation type - RESTRICT
> > [0] number of local blocks = 1
> > [1] number of local blocks = 1
> > [2] number of local blocks = 1
> > [3] number of local blocks = 1
> > [4] number of local blocks = 1
> > [5] number of local blocks = 1
> > [6] number of local blocks = 1
> > [7] number of local blocks = 1
> > Local solve info for each block is in the following KSP and PC objects:
> > - - - - - - - - - - - - - - - - - -
> > [0] local block number 0, size = 1389
> > KSP Object: (flow_sub_) 1 MPI processes
> >   type: preonly
> >   maximum iterations=1, initial guess is zero
> > >>>  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
> >   left preconditioning
> >   using DEFAULT norm type for convergence test
> > PC Object: (flow_sub_) 1 MPI processes
> >   type: ilu
> >   PC has not been set up so information may be incomplete
> > out-of-place factorization
> > 0 levels of fill
> > tolerance for zero pivot 2.22045e-14
> > using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> > matrix ordering: natural
> >   linear system matrix = precond matrix:
> >   Mat Object: (flow_) 1 MPI processes
> > type: seqbaij
> > rows=1389, cols=1389, bs=3
> > total: nonzeros=20025, allocated nonzeros=20025
> > total number of mallocs used during MatSetValues calls =0
> > block size is 3
> > - - - - - - - - - - - - - - - - - -
> >
> > --
> > HeeHo Daniel Park
>
>

-- 
HeeHo Daniel Park


Re: [petsc-users] subprocess (block) tolerance.

2018-10-04 Thread Smith, Barry F.


  Since 

KSP Object: (flow_sub_) 1 MPI processes
type: preonly

this means only a single iteration of the inner solver is used, so the numbers 
in red are not used. 

You could do something like -flow_ksp_type fgmres -flow_sub_ksp_type gmres 
-flow_sub_ksp_rtol 1.e-2, but it wouldn't help matters. Likely the current 
values are the best.
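
If you wanted to do the same thing programmatically rather than through the
options database, a rough sketch (not PFLOTRAN code; the KSP is assumed to
already have its operators, and PCASMGetSubKSP() is only valid after setup)
would be:

   #include <petscksp.h>

   /* Flexible GMRES outer solve with GMRES at rtol 1e-2 on every ASM
      subdomain block -- the programmatic equivalent of the options above. */
   static PetscErrorCode UseIterativeSubdomainSolves(KSP ksp)
   {
     PC       pc;
     KSP     *subksp;
     PetscInt i, nlocal;

     PetscFunctionBeginUser;
     PetscCall(KSPSetType(ksp, KSPFGMRES));          /* -flow_ksp_type fgmres      */
     PetscCall(KSPGetPC(ksp, &pc));
     PetscCall(PCSetType(pc, PCASM));
     PetscCall(KSPSetUp(ksp));                       /* creates the flow_sub_ KSPs */
     PetscCall(PCASMGetSubKSP(pc, &nlocal, NULL, &subksp));
     for (i = 0; i < nlocal; i++) {
       PetscCall(KSPSetType(subksp[i], KSPGMRES));   /* -flow_sub_ksp_type gmres   */
       PetscCall(KSPSetTolerances(subksp[i], 1e-2,   /* -flow_sub_ksp_rtol 1.e-2   */
                                  PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT));
     }
     PetscFunctionReturn(PETSC_SUCCESS);
   }
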

   Barry


> On Oct 4, 2018, at 5:44 PM, HeeHo Park  wrote:
> 
> Hi, I'm running PFLOTRAN, and in PFLOTRAN we have flow_ and flow_sub_ 
> solvers. I was wondering what the red underlined values mean (each block's 
> tolerances?) and how to change them (would that affect convergence?). The 
> values marked in bold blue are changed from the linear solver defaults.
> 
> FLOW Linear Solver
>solver: bcgs
>preconditioner: asm
>  atol: 1.00E-10
>  rtol: 1.00E-10
>  dtol: 1.00E+04
> maximum iteration: 1
> KSP Object: (flow_) 8 MPI processes
>   type: bcgs
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-10, absolute=1e-10, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: (flow_) 8 MPI processes
>   type: asm
> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
> Additive Schwarz: restriction/interpolation type - RESTRICT
> [0] number of local blocks = 1
> [1] number of local blocks = 1
> [2] number of local blocks = 1
> [3] number of local blocks = 1
> [4] number of local blocks = 1
> [5] number of local blocks = 1
> [6] number of local blocks = 1
> [7] number of local blocks = 1
> Local solve info for each block is in the following KSP and PC objects:
> - - - - - - - - - - - - - - - - - -
> [0] local block number 0, size = 1389
> KSP Object: (flow_sub_) 1 MPI processes
>   type: preonly
>   maximum iterations=1, initial guess is zero
> >>>  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>   left preconditioning
>   using DEFAULT norm type for convergence test
> PC Object: (flow_sub_) 1 MPI processes
>   type: ilu
>   PC has not been set up so information may be incomplete
> out-of-place factorization
> 0 levels of fill
> tolerance for zero pivot 2.22045e-14
> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> matrix ordering: natural
>   linear system matrix = precond matrix:
>   Mat Object: (flow_) 1 MPI processes
> type: seqbaij
> rows=1389, cols=1389, bs=3
> total: nonzeros=20025, allocated nonzeros=20025
> total number of mallocs used during MatSetValues calls =0
> block size is 3
> - - - - - - - - - - - - - - - - - -
> 
> -- 
> HeeHo Daniel Park



Re: [petsc-users] subprocess (block) tolerance.

2018-10-04 Thread Jed Brown
The subdomain KSP (flow_sub_) has type "preonly" so it always does
exactly one iteration.  If you were to use an iterative subdomain solver
(e.g., -flow_sub_ksp_type gmres) then those tolerances would be used.
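
For example, a hypothetical launch line (assuming, as usual, that PETSc
options can be appended to the PFLOTRAN command line; the input file name is
just a placeholder):

   mpirun -n 8 pflotran -pflotranin your_problem.in \
       -flow_sub_ksp_type gmres -flow_sub_ksp_rtol 1.e-2 -flow_ksp_view

With an iterative flow_sub_ solver like this, the relative tolerance of 1e-2
would then control each subdomain solve.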

HeeHo Park  writes:

> Hi, I'm running PFLOTRAN, and in PFLOTRAN we have flow_ and flow_sub_
> solvers. I was wondering what the red underlined values mean (each block's
> tolerances?) and how to change them (would that affect convergence?). The
> values marked in bold blue are changed from the linear solver defaults.
>
> FLOW Linear Solver
>solver: bcgs
>preconditioner: asm
>  *atol: 1.00E-10*
>  *rtol: 1.00E-10*
>  dtol: 1.00E+04
> maximum iteration: 1
> KSP Object: (flow_) 8 MPI processes
>   type: bcgs
>   maximum iterations=1, initial guess is zero
>   tolerances:  *relative=1e-10, absolute=1e-10*, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: (flow_) 8 MPI processes
>   type: asm
> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
> Additive Schwarz: restriction/interpolation type - RESTRICT
> [0] number of local blocks = 1
> [1] number of local blocks = 1
> [2] number of local blocks = 1
> [3] number of local blocks = 1
> [4] number of local blocks = 1
> [5] number of local blocks = 1
> [6] number of local blocks = 1
> [7] number of local blocks = 1
> Local solve info for each block is in the following KSP and PC objects:
> - - - - - - - - - - - - - - - - - -
> [0] local block number 0, size = 1389
> KSP Object: (flow_sub_) 1 MPI processes
>   type: preonly
>   maximum iterations=1, initial guess is zero
>   tolerances:  *relative=1e-05, absolute=1e-50*, divergence=10000.
>   left preconditioning
>   using DEFAULT norm type for convergence test
> PC Object: (flow_sub_) 1 MPI processes
>   type: ilu
>   PC has not been set up so information may be incomplete
> out-of-place factorization
> 0 levels of fill
> tolerance for zero pivot 2.22045e-14
> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> matrix ordering: natural
>   linear system matrix = precond matrix:
>   Mat Object: (flow_) 1 MPI processes
> type: seqbaij
> rows=1389, cols=1389, bs=3
> total: nonzeros=20025, allocated nonzeros=20025
> total number of mallocs used during MatSetValues calls =0
> block size is 3
> - - - - - - - - - - - - - - - - - -
>
> -- 
> HeeHo Daniel Park