Please send all the command line options you use.

> On Jan 16, 2017, at 1:47 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
> 
> Barry,
> 
> Sure, here's the output with:
> 
> -sys_ksp_view -sys_ksp_converged_reason 
> -sys_fieldsplit_0_ksp_converged_reason -sys_fieldsplit_1_ksp_converged_reason
> 
> (In my previous email, I rearranged 0 & 1 for easy summing.)
> 
> Chris
> 
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
> Linear sys_ solve converged due to CONVERGED_RTOL iterations 6
> KSP Object:(sys_) 1 MPI processes
>  type: fgmres
>    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>    GMRES: happy breakdown tolerance 1e-30
>  maximum iterations=300, initial guess is zero
>  tolerances:  relative=0.01, absolute=1e-50, divergence=10000.
>  right preconditioning
>  using UNPRECONDITIONED norm type for convergence test
> PC Object:(sys_) 1 MPI processes
>  type: fieldsplit
>    FieldSplit with Schur preconditioner, factorization FULL
>    Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (lumped, if requested) A00's diagonal's inverse
>    Split info:
>    Split number 0 Defined by IS
>    Split number 1 Defined by IS
>    KSP solver for A00 block
>      KSP Object:      (sys_fieldsplit_0_)       1 MPI processes
>        type: gmres
>          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>          GMRES: happy breakdown tolerance 1e-30
>        maximum iterations=10000, initial guess is zero
>        tolerances:  relative=0.01, absolute=1e-50, divergence=10000.
>        right preconditioning
>        using UNPRECONDITIONED norm type for convergence test
>      PC Object:      (sys_fieldsplit_0_)       1 MPI processes
>        type: ilu
>          ILU: out-of-place factorization
>          0 levels of fill
>          tolerance for zero pivot 2.22045e-14
>          matrix ordering: natural
>          factor fill ratio given 1., needed 1.
>            Factored matrix follows:
>              Mat Object:               1 MPI processes
>                type: seqaij
>                rows=9600, cols=9600
>                package used to perform factorization: petsc
>                total: nonzeros=47280, allocated nonzeros=47280
>                total number of mallocs used during MatSetValues calls =0
>                  not using I-node routines
>        linear system matrix = precond matrix:
>        Mat Object:        (sys_fieldsplit_0_)         1 MPI processes
>          type: seqaij
>          rows=9600, cols=9600
>          total: nonzeros=47280, allocated nonzeros=47280
>          total number of mallocs used during MatSetValues calls =0
>            not using I-node routines
>    KSP solver for upper A00 in upper triangular factor
>      KSP Object:      (sys_fieldsplit_1_upper_)       1 MPI processes
>        type: preonly
>        maximum iterations=10000, initial guess is zero
>        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>        left preconditioning
>        using NONE norm type for convergence test
>      PC Object:      (sys_fieldsplit_1_upper_)       1 MPI processes
>        type: jacobi
>        linear system matrix = precond matrix:
>        Mat Object:        (sys_fieldsplit_0_)         1 MPI processes
>          type: seqaij
>          rows=9600, cols=9600
>          total: nonzeros=47280, allocated nonzeros=47280
>          total number of mallocs used during MatSetValues calls =0
>            not using I-node routines
>    KSP solver for S = A11 - A10 inv(A00) A01
>      KSP Object:      (sys_fieldsplit_1_)       1 MPI processes
>        type: gmres
>          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>          GMRES: happy breakdown tolerance 1e-30
>        maximum iterations=10000, initial guess is zero
>        tolerances:  relative=0.01, absolute=1e-50, divergence=10000.
>        right preconditioning
>        using UNPRECONDITIONED norm type for convergence test
>      PC Object:      (sys_fieldsplit_1_)       1 MPI processes
>        type: ilu
>          ILU: out-of-place factorization
>          0 levels of fill
>          tolerance for zero pivot 2.22045e-14
>          matrix ordering: natural
>          factor fill ratio given 1., needed 1.
>            Factored matrix follows:
>              Mat Object:               1 MPI processes
>                type: seqaij
>                rows=3200, cols=3200
>                package used to perform factorization: petsc
>                total: nonzeros=40404, allocated nonzeros=40404
>                total number of mallocs used during MatSetValues calls =0
>                  not using I-node routines
>        linear system matrix followed by preconditioner matrix:
>        Mat Object:        (sys_fieldsplit_1_)         1 MPI processes
>          type: schurcomplement
>          rows=3200, cols=3200
>            Schur complement A11 - A10 inv(A00) A01
>            A11
>              Mat Object:              (sys_fieldsplit_1_)               1 MPI processes
>                type: seqaij
>                rows=3200, cols=3200
>                total: nonzeros=40404, allocated nonzeros=40404
>                total number of mallocs used during MatSetValues calls =0
>                  not using I-node routines
>            A10
>              Mat Object:               1 MPI processes
>                type: seqaij
>                rows=3200, cols=9600
>                total: nonzeros=47280, allocated nonzeros=47280
>                total number of mallocs used during MatSetValues calls =0
>                  not using I-node routines
>            KSP of A00
>              KSP Object:              (sys_fieldsplit_1_inner_)               1 MPI processes
>                type: preonly
>                maximum iterations=10000, initial guess is zero
>                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>                left preconditioning
>                using NONE norm type for convergence test
>              PC Object:              (sys_fieldsplit_1_inner_)               1 MPI processes
>                type: jacobi
>                linear system matrix = precond matrix:
>                Mat Object:                (sys_fieldsplit_0_)                 1 MPI processes
>                  type: seqaij
>                  rows=9600, cols=9600
>                  total: nonzeros=47280, allocated nonzeros=47280
>                  total number of mallocs used during MatSetValues calls =0
>                    not using I-node routines
>            A01
>              Mat Object:               1 MPI processes
>                type: seqaij
>                rows=9600, cols=3200
>                total: nonzeros=47280, allocated nonzeros=47280
>                total number of mallocs used during MatSetValues calls =0
>                  not using I-node routines
>        Mat Object:         1 MPI processes
>          type: seqaij
>          rows=3200, cols=3200
>          total: nonzeros=40404, allocated nonzeros=40404
>          total number of mallocs used during MatSetValues calls =0
>            not using I-node routines
>  linear system matrix followed by preconditioner matrix:
>  Mat Object:   1 MPI processes
>    type: nest
>    rows=12800, cols=12800
>      Matrix object:
>        type=nest, rows=2, cols=2
>        MatNest structure:
>        (0,0) : prefix="mom_", type=seqaij, rows=9600, cols=9600
>        (0,1) : prefix="grad_", type=seqaij, rows=9600, cols=3200
>        (1,0) : prefix="div_", type=seqaij, rows=3200, cols=9600
>        (1,1) : prefix="stab_", type=seqaij, rows=3200, cols=3200
>  Mat Object:   1 MPI processes
>    type: nest
>    rows=12800, cols=12800
>      Matrix object:
>        type=nest, rows=2, cols=2
>        MatNest structure:
>        (0,0) : prefix="sys_fieldsplit_0_", type=seqaij, rows=9600, cols=9600
>        (0,1) : type=seqaij, rows=9600, cols=3200
>        (1,0) : type=seqaij, rows=3200, cols=9600
>        (1,1) : prefix="sys_fieldsplit_1_", type=seqaij, rows=3200, cols=3200
> nusediter_vv          37
> nusediter_pp          37
> 
> 
> 
> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> 
> 
> ________________________________________
> From: Barry Smith <bsm...@mcs.anl.gov>
> Sent: Friday, January 13, 2017 7:51 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
> 
>   Yes, I would have expected this to work. Could you send the output from 
> -ksp_view in this case?
> 
> 
>> On Jan 13, 2017, at 3:46 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>> 
>> Barry,
>> 
>> It's been a while, but I'm finally using KSPGetTotalIterations() in
>> 3.7.4. Is it supposed to work with fieldsplit? Here's why I ask.
>> 
>> I'm solving a Navier-Stokes system with fieldsplit (the PC comprises a
>> velocity solve and a pressure solve) and trying to retrieve the
>> total iteration counts like this:
>> 
>> CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
>> CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
>> CALL KSPGetTotalIterations(subksp(1),nusediter_vv,ierr); CHKERRQ(ierr)
>> CALL KSPGetTotalIterations(subksp(2),nusediter_pp,ierr); CHKERRQ(ierr)
>> print *, 'nusediter_vv', nusediter_vv
>> print *, 'nusediter_pp', nusediter_pp
>> 
>> Running the code shows this surprise:
>> 
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> 
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>> 
>> nusediter_vv          37
>> nusediter_pp          37
>> 
>> So the value of nusediter_pp is indeed 37 (22+6+3+2+2+2), but
>> nusediter_vv should be 66 (1+1+2+2+7+7+7+7+8+8+8+8), not 37.
>> Any idea what went wrong?
>> 
>> Chris
>> 
>> 
>> 
>> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
>> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>> 
>> 
>> ________________________________________
>> From: Barry Smith <bsm...@mcs.anl.gov>
>> Sent: Saturday, April 11, 2015 12:27 AM
>> To: Klaij, Christiaan
>> Cc: petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>> 
>> Chris,
>> 
>> I have added KSPGetTotalIterations() to the branch 
>> barry/add-ksp-total-iterations/master and next. After tests it will go into 
>> master
>> 
>> Barry
>> 
>>> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>>> 
>>> Barry,
>>> 
>>> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
>>> ksp and then KSPGetIterationNumber, but what does this number
>>> mean?
>>> 
>>> It appears to be the number of iterations from the last time the
>>> subsystem was solved, right? If so, that corresponds only to the
>>> last iteration of the coupled system; what about all the previous
>>> iterations?
>>> 
>>> Chris
>>> ________________________________________
>>> From: Barry Smith <bsm...@mcs.anl.gov>
>>> Sent: Friday, April 10, 2015 2:48 PM
>>> To: Klaij, Christiaan
>>> Cc: petsc-users@mcs.anl.gov
>>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>>> 
>>> Chris,
>>> 
>>>   It appears you should call PCFieldSplitGetSubKSP() and then get the 
>>> information you want out of the individual KSPs. If this doesn't work 
>>> please let us know.
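>>> 
>>> For instance, something like this untested Fortran fragment (assuming a
>>> two-way split and a caller-allocated subksp array; variable names are
>>> just illustrative):
>>> 
>>>   ! solve the coupled system first
>>>   CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
>>>   ! pull out the KSP of each split from the fieldsplit PC
>>>   CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
>>>   ! iteration count of the most recent solve of each split
>>>   CALL KSPGetIterationNumber(subksp(1),niter_vv,ierr); CHKERRQ(ierr)
>>>   CALL KSPGetIterationNumber(subksp(2),niter_pp,ierr); CHKERRQ(ierr)
>>>   print *, 'fieldsplit_0 iterations (last solve):', niter_vv
>>>   print *, 'fieldsplit_1 iterations (last solve):', niter_pp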
>>> 
>>> Barry
>>> 
>>>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>>>> 
>>>> A question when using PCFieldSplit: for each linear iteration of
>>>> the system, how many iterations are performed for fieldsplit 0 and 1?
>>>> 
>>>> One way to find out is to run with -ksp_monitor,
>>>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>>>> gives the complete convergence history.
>>>> 
>>>> Another way, suggested by Matt, is to use -ksp_monitor,
>>>> -fieldsplit_0_ksp_converged_reason and
>>>> -fieldsplit_1_ksp_converged_reason. This gives only the totals
>>>> for fieldsplit 0 and 1 (but without saying for which one).
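>>>> 
>>>> For example (the executable name is just a placeholder):
>>>> 
>>>>   ./ns_solver -ksp_monitor -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason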
>>>> 
>>>> Both ways require somehow processing the output, which is a bit
>>>> inconvenient. Could KSPGetResidualHistory perhaps return (some)
>>>> information on the subsystems' convergence for processing inside
>>>> the code?
>>>> 
>>>> Chris
>>>> 
>>>> 
>>>> dr. ir. Christiaan Klaij
>>>> CFD Researcher
>>>> Research & Development
>>>> E mailto:c.kl...@marin.nl
>>>> T +31 317 49 33 44
>>>> 
>>>> 
>>>> MARIN
>>>> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
>>>> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>>>> 
>>> 
>> 
> 
