Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2018-05-15 Thread Klaij, Christiaan
Matt,


Just a reminder. With petsc-3.8.4 the issue is still there.


Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl | www.marin.nl



From: Matthew Knepley 
Sent: Wednesday, January 18, 2017 4:13 PM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On Wed, Jan 18, 2017 at 4:42 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
Thanks Lawrence, that nicely explains the unexpected behaviour!

I guess in general there ought to be getters for the four
ksp(A00)'s that occur in the full factorization.

Yes, we will fix it. I think that the default retrieval should get the 00 
block, not the inner as well.

   Matt


Chris




From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
Sent: Wednesday, January 18, 2017 10:59 AM
To: petsc-users@mcs.anl.gov
Cc: bsm...@mcs.anl.gov; Klaij, Christiaan
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On 18/01/17 08:40, Klaij, Christiaan wrote:
> Barry,
>
> I've managed to replicate the problem with 3.7.4
> snes/examples/tutorials/ex70.c. Basically I've added
> KSPGetTotalIterations to main (file is attached):

PCFieldSplitGetSubKSP returns, in the Schur case:

MatSchurComplementGet(pc->schur, &ksp);

in subksp[0]

and

pc->schur in subksp[1]

In your case, subksp[0] is the (preonly) approximation to A^{-1} *inside*

S = D - C A_inner^{-1} B

And subksp[1] is the approximation to S^{-1}.

Since each application of S to a vector (required when computing
S^{-1}) takes one application of A^{-1}, and you use 225 iterations in
total to invert S, you also use 225 applications of the KSP on A_inner.

There doesn't appear to be a way to get the KSP used for A^{-1} if
you've asked for different approximations to A^{-1} in the 0,0 block
and inside S.

Cheers,

Lawrence

> $ diff -u ex70.c.bak ex70.c
> --- ex70.c.bak	2017-01-18 09:25:46.286174830 +0100
> +++ ex70.c	2017-01-18 09:03:40.904483434 +0100
> @@ -669,6 +669,10 @@
>    KSP            ksp;
>    PetscErrorCode ierr;
> 
> +  KSP            *subksp;
> +  PC             pc;
> +  PetscInt       numsplit = 1, nusediter_vv, nusediter_pp;
> +
>    ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
>    s.nx = 4;
>    s.ny = 6;
> @@ -690,6 +694,13 @@
>    ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
>    ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);
> 
> +  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
> +  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", nusediter_vv); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", nusediter_pp); CHKERRQ(ierr);
> +
>    /* don't trust, verify! */
>    ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
>    ierr = StokesCalcError(&s);CHKERRQ(ierr);
>
> Now run as follows:
>
> $ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type 
> schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres 
> -fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi 
> -fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi 
> -fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi 
> -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-18 Thread Matthew Knepley
On Wed, Jan 18, 2017 at 4:42 AM, Klaij, Christiaan  wrote:

> Thanks Lawrence, that nicely explains the unexpected behaviour!
>
> I guess in general there ought to be getters for the four
> ksp(A00)'s that occur in the full factorization.


Yes, we will fix it. I think that the default retrieval should get the 00
block, not the inner as well.

   Matt


>
> Chris
>
>
>
> 
> From: Lawrence Mitchell 
> Sent: Wednesday, January 18, 2017 10:59 AM
> To: petsc-users@mcs.anl.gov
> Cc: bsm...@mcs.anl.gov; Klaij, Christiaan
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>
> On 18/01/17 08:40, Klaij, Christiaan wrote:
> > Barry,
> >
> > I've managed to replicate the problem with 3.7.4
> > snes/examples/tutorials/ex70.c. Basically I've added
> > KSPGetTotalIterations to main (file is attached):
>
> PCFieldSplitGetSubKSP returns, in the Schur case:
>
> MatSchurComplementGet(pc->schur, &ksp);
>
> in subksp[0]
>
> and
>
> pc->schur in subksp[1]
>
> In your case, subksp[0] is the (preonly) approximation to A^{-1} *inside*
>
> S = D - C A_inner^{-1} B
>
> And subksp[1] is the approximation to S^{-1}.
>
> Since each application of S to a vector (required when computing
> S^{-1}) takes one application of A^{-1}, and you use 225 iterations in
> total to invert S, you also use 225 applications of the KSP on A_inner.
>
> There doesn't appear to be a way to get the KSP used for A^{-1} if
> you've asked for different approximations to A^{-1} in the 0,0 block
> and inside S.
>
> Cheers,
>
> Lawrence
>
> > $ diff -u ex70.c.bak ex70.c
> > --- ex70.c.bak	2017-01-18 09:25:46.286174830 +0100
> > +++ ex70.c	2017-01-18 09:03:40.904483434 +0100
> > @@ -669,6 +669,10 @@
> >    KSP            ksp;
> >    PetscErrorCode ierr;
> > 
> > +  KSP            *subksp;
> > +  PC             pc;
> > +  PetscInt       numsplit = 1, nusediter_vv, nusediter_pp;
> > +
> >    ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
> >    s.nx = 4;
> >    s.ny = 6;
> > @@ -690,6 +694,13 @@
> >    ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
> >    ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);
> > 
> > +  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
> > +  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
> > +  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
> > +  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
> > +  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", nusediter_vv); CHKERRQ(ierr);
> > +  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", nusediter_pp); CHKERRQ(ierr);
> > +
> >    /* don't trust, verify! */
> >    ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
> >    ierr = StokesCalcError(&s);CHKERRQ(ierr);
> >
> > Now run as follows:
> >
> > $ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit
> -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower
> -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type bjacobi
> -fieldsplit_1_pc_type jacobi -fieldsplit_1_inner_ksp_type preonly
> -fieldsplit_1_inner_pc_type jacobi -fieldsplit_1_upper_ksp_type preonly
> -fieldsplit_1_upper_pc_type jacobi -fieldsplit_0_ksp_converged_reason
> -fieldsplit_1_ksp_converged_reason
> >   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
> >   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations
> 14
> >   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
> >   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations
> 14
> >   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
> >   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations
> 16
> >   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
> >   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations
> 16
> >   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
> >   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations
> 17
> >   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
> &

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-18 Thread Klaij, Christiaan
Thanks Lawrence, that nicely explains the unexpected behaviour!

I guess in general there ought to be getters for the four
ksp(A00)'s that occur in the full factorization.

Chris




From: Lawrence Mitchell 
Sent: Wednesday, January 18, 2017 10:59 AM
To: petsc-users@mcs.anl.gov
Cc: bsm...@mcs.anl.gov; Klaij, Christiaan
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On 18/01/17 08:40, Klaij, Christiaan wrote:
> Barry,
>
> I've managed to replicate the problem with 3.7.4
> snes/examples/tutorials/ex70.c. Basically I've added
> KSPGetTotalIterations to main (file is attached):

PCFieldSplitGetSubKSP returns, in the Schur case:

MatSchurComplementGet(pc->schur, &ksp);

in subksp[0]

and

pc->schur in subksp[1]

In your case, subksp[0] is the (preonly) approximation to A^{-1} *inside*

S = D - C A_inner^{-1} B

And subksp[1] is the approximation to S^{-1}.

Since each application of S to a vector (required when computing
S^{-1}) takes one application of A^{-1}, and you use 225 iterations in
total to invert S, you also use 225 applications of the KSP on A_inner.

There doesn't appear to be a way to get the KSP used for A^{-1} if
you've asked for different approximations to A^{-1} in the 0,0 block
and inside S.

Cheers,

Lawrence

> $ diff -u ex70.c.bak ex70.c
> --- ex70.c.bak	2017-01-18 09:25:46.286174830 +0100
> +++ ex70.c	2017-01-18 09:03:40.904483434 +0100
> @@ -669,6 +669,10 @@
>    KSP            ksp;
>    PetscErrorCode ierr;
> 
> +  KSP            *subksp;
> +  PC             pc;
> +  PetscInt       numsplit = 1, nusediter_vv, nusediter_pp;
> +
>    ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
>    s.nx = 4;
>    s.ny = 6;
> @@ -690,6 +694,13 @@
>    ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
>    ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);
> 
> +  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
> +  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", nusediter_vv); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", nusediter_pp); CHKERRQ(ierr);
> +
>    /* don't trust, verify! */
>    ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
>    ierr = StokesCalcError(&s);CHKERRQ(ierr);
>
> Now run as follows:
>
> $ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type 
> schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres 
> -fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi 
> -fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi 
> -fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi 
> -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 17
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 18
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 20
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 21
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 23
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-18 Thread Lawrence Mitchell
; should be 60. Hope this helps you find the problem.
> 
> Chris
> 
> 
> 
> 
> 
> From: Klaij, Christiaan
> Sent: Tuesday, January 17, 2017 8:45 AM
> To: Barry Smith
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
> 
> Well, that's it, all the rest was hard coded. Here's the relevant part of the 
> code:
> 
>   CALL PCSetType(pc_system,PCFIELDSPLIT,ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetType(pc_system,PC_COMPOSITE_SCHUR,ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetIS(pc_system,"0",isgs(1),ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetIS(pc_system,"1",isgs(2),ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetSchurFactType(pc_system,PC_FIELDSPLIT_SCHUR_FACT_FULL,ierr); CHKERRQ(ierr)
>   CALL PCFieldSplitSetSchurPre(pc_system,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr)
> 
>   CALL KSPSetTolerances(ksp_system,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_REAL,maxiter,ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_rtol","0.01",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_rtol","0.01",ierr); CHKERRQ(ierr)
> 
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_pc_side","right",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_pc_side","right",ierr); CHKERRQ(ierr)
> 
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_type","gmres",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_ksp_type","preonly",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_pc_type","jacobi",ierr); CHKERRQ(ierr)
> 
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_ksp_type","preonly",ierr); CHKERRQ(ierr)
>   CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_pc_type","jacobi",ierr); CHKERRQ(ierr)
> 
> 
> From: Barry Smith 
> Sent: Monday, January 16, 2017 9:28 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
> 
>Please send all the command line options you use.
> 
> 
>> On Jan 16, 2017, at 1:47 AM, Klaij, Christiaan  wrote:
>>
>> Barry,
>>
>> Sure, here's the output with:
>>
>> -sys_ksp_view -sys_ksp_converged_reason 
>> -sys_fieldsplit_0_ksp_converged_reason -sys_fieldsplit_1_ksp_converged_reason
>>
>> (In my previous email, I rearranged 0 & 1 for easy summing.)
>>
>> Chris
>>
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>>  Linear sys_fieldsplit_0_ solve converged

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-18 Thread Klaij, Christiaan
Barry,

I've managed to replicate the problem with 3.7.4
snes/examples/tutorials/ex70.c. Basically I've added
KSPGetTotalIterations to main (file is attached):

$ diff -u ex70.c.bak ex70.c
--- ex70.c.bak	2017-01-18 09:25:46.286174830 +0100
+++ ex70.c	2017-01-18 09:03:40.904483434 +0100
@@ -669,6 +669,10 @@
   KSP            ksp;
   PetscErrorCode ierr;

+  KSP            *subksp;
+  PC             pc;
+  PetscInt       numsplit = 1, nusediter_vv, nusediter_pp;
+
   ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
   s.nx = 4;
   s.ny = 6;
@@ -690,6 +694,13 @@
   ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
   ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);

+  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
+  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
+  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
+  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
+  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", nusediter_vv); CHKERRQ(ierr);
+  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", nusediter_pp); CHKERRQ(ierr);
+
   /* don't trust, verify! */
   ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
   ierr = StokesCalcError(&s);CHKERRQ(ierr);

Now run as follows:

$ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type 
schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres 
-fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi 
-fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi 
-fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi 
-fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 17
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 18
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 20
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 21
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 23
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
 total u solves = 225
 total p solves = 225
 residual u = 9.67257e-06
 residual p = 5.42082e-07
 residual [u,p] = 9.68775e-06
 discretization error u = 0.0106464
 discretization error p = 1.85907
 discretization error [u,p] = 1.8591

So here again the total of 225 is correct for p, but for u it
should be 60. Hope this helps you find the problem.

Chris





From: Klaij, Christiaan
Sent: Tuesday, January 17, 2017 8:45 AM
To: Barry Smith
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

Well, that's it, all the rest was hard coded. Here's the relevant part of the 
code:

  CALL PCSetType(pc_system,PCFIELDSPLIT,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetType(pc_system,PC_COMPOSITE_SCHUR,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"0",isgs(1),ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"1",isgs(2),ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetSchurFactType(pc_system,PC_FIELDSPLIT_SCHUR_FACT_FULL,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetSchurPre(pc_system,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_N

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-16 Thread Klaij, Christiaan
Well, that's it, all the rest was hard coded. Here's the relevant part of the 
code:

  CALL PCSetType(pc_system,PCFIELDSPLIT,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetType(pc_system,PC_COMPOSITE_SCHUR,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"0",isgs(1),ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"1",isgs(2),ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetSchurFactType(pc_system,PC_FIELDSPLIT_SCHUR_FACT_FULL,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetSchurPre(pc_system,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr)

  CALL KSPSetTolerances(ksp_system,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_REAL,maxiter,ierr); CHKERRQ(ierr)
  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_rtol","0.01",ierr); CHKERRQ(ierr)
  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_rtol","0.01",ierr); CHKERRQ(ierr)

  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_pc_side","right",ierr); CHKERRQ(ierr)
  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_pc_side","right",ierr); CHKERRQ(ierr)

  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_type","gmres",ierr); CHKERRQ(ierr)
  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_ksp_type","preonly",ierr); CHKERRQ(ierr)
  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_pc_type","jacobi",ierr); CHKERRQ(ierr)

  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_ksp_type","preonly",ierr); CHKERRQ(ierr)
  CALL PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_pc_type","jacobi",ierr); CHKERRQ(ierr)




____________
From: Barry Smith 
Sent: Monday, January 16, 2017 9:28 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

   Please send all the command line options you use.


> On Jan 16, 2017, at 1:47 AM, Klaij, Christiaan  wrote:
>
> Barry,
>
> Sure, here's the output with:
>
> -sys_ksp_view -sys_ksp_converged_reason 
> -sys_fieldsplit_0_ksp_converged_reason -sys_fieldsplit_1_ksp_converged_reason
>
> (In my previous email, I rearranged 0 & 1 for easy summing.)
>
> Chris
>
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
> Linear sys_ solve converged due to CONVERGED_RTOL iterations 6
> KSP Object:(sys_) 1 MPI processes
>  type: fgmres
>GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
> Orthogonalization with no iterative refinement
>GMRES: happy breakdown tolerance 1e-30
>  maximum iterations=300, initial guess is zero
>  tolerances:  relative=0.01, absolute=1e-50, divergence=1.
>  right preconditioning
>  using UNPRECONDITIONED norm type for convergence test
> PC Object:(sys_) 1 MPI processes
>  type: fieldsplit
>FieldSplit with Schur preconditioner, factorization FULL
>Preconditioner for the Schur complement formed from Sp, an assembled 
> approximation to S, 

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-16 Thread Barry Smith
eldsplit_0_", type=seqaij, rows=9600, cols=9600
>(0,1) : type=seqaij, rows=9600, cols=3200
>    (1,0) : type=seqaij, rows=3200, cols=9600
>(1,1) : prefix="sys_fieldsplit_1_", type=seqaij, rows=3200, cols=3200
> nusediter_vv  37
> nusediter_pp  37
> 
> 
> 
> 
> 
> From: Barry Smith 
> Sent: Friday, January 13, 2017 7:51 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
> 
>   Yes, I would have expected this to work. Could you send the output from 
> -ksp_view in this case?
> 
> 
>> On Jan 13, 2017, at 3:46 AM, Klaij, Christiaan  wrote:
>> 
>> Barry,
>> 
>> It's been a while but I'm finally using this function in
>> 3.7.4. Is it supposed to work with fieldsplit? Here's why.
>> 
>> I'm solving a Navier-Stokes system with fieldsplit (pc has one
>> velocity solve and one pressure solve) and trying to retrieve the
>> totals like this:
>> 
>> CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
>> CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
>> CALL KSPGetTotalIterations(subksp(1),nusediter_vv,ierr); CHKERRQ(ierr)
>> CALL KSPGetTotalIterations(subksp(2),nusediter_pp,ierr); CHKERRQ(ierr)
>> print *, 'nusediter_vv', nusediter_vv
>> print *, 'nusediter_pp', nusediter_pp
>> 
>> Running the code shows this surprise:
>> 
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>> 
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>> Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>> 
>> nusediter_vv  37
>> nusediter_pp  37
>> 
>> So the value of nusediter_pp is indeed 37, but for nusediter_vv
>> it should be 66. Any idea what went wrong?
>> 
>> Chris
>> 
>> 
>> 
>> 
>> ____________
>> From: Barry Smith 
>> Sent: Saturday, April 11, 2015 12:27 AM
>> To: Klaij, Christiaan
>> Cc: petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>> 
>> Chris,
>> 
>> I have added KSPGetTotalIterations() to the branch 
>> barry/add-ksp-total-iterations/master and next. After tests it will go into 
>> master
>> 
>> Barry
>> 
>>> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan  wrote:
>>> 
>>> Barry,
>>> 
>>> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
>>> ksp and then KSPGetIterationNumber, but what does this number
>>> mean?
>>> 
>>> It appears to be the number of iterations of the last time that
>>> the subsystem was solved, right? If so, th

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-15 Thread Klaij, Christiaan
terative refinement
  GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances:  relative=0.01, absolute=1e-50, divergence=1.
right preconditioning
using UNPRECONDITIONED norm type for convergence test
  PC Object:  (sys_fieldsplit_1_)   1 MPI processes
type: ilu
  ILU: out-of-place factorization
  0 levels of fill
  tolerance for zero pivot 2.22045e-14
  matrix ordering: natural
  factor fill ratio given 1., needed 1.
Factored matrix follows:
  Mat Object:   1 MPI processes
type: seqaij
rows=3200, cols=3200
package used to perform factorization: petsc
total: nonzeros=40404, allocated nonzeros=40404
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
linear system matrix followed by preconditioner matrix:
Mat Object:(sys_fieldsplit_1_) 1 MPI processes
  type: schurcomplement
  rows=3200, cols=3200
Schur complement A11 - A10 inv(A00) A01
A11
  Mat Object:  (sys_fieldsplit_1_)   1 MPI processes
type: seqaij
rows=3200, cols=3200
total: nonzeros=40404, allocated nonzeros=40404
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
A10
  Mat Object:   1 MPI processes
type: seqaij
rows=3200, cols=9600
total: nonzeros=47280, allocated nonzeros=47280
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
KSP of A00
  KSP Object:  (sys_fieldsplit_1_inner_)   1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
left preconditioning
using NONE norm type for convergence test
  PC Object:  (sys_fieldsplit_1_inner_)   1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object:(sys_fieldsplit_0_) 1 MPI processes
  type: seqaij
  rows=9600, cols=9600
  total: nonzeros=47280, allocated nonzeros=47280
  total number of mallocs used during MatSetValues calls =0
not using I-node routines
A01
  Mat Object:   1 MPI processes
type: seqaij
rows=9600, cols=3200
total: nonzeros=47280, allocated nonzeros=47280
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
Mat Object: 1 MPI processes
  type: seqaij
  rows=3200, cols=3200
  total: nonzeros=40404, allocated nonzeros=40404
  total number of mallocs used during MatSetValues calls =0
not using I-node routines
  linear system matrix followed by preconditioner matrix:
  Mat Object:   1 MPI processes
type: nest
rows=12800, cols=12800
  Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="mom_", type=seqaij, rows=9600, cols=9600
(0,1) : prefix="grad_", type=seqaij, rows=9600, cols=3200
(1,0) : prefix="div_", type=seqaij, rows=3200, cols=9600
(1,1) : prefix="stab_", type=seqaij, rows=3200, cols=3200
  Mat Object:   1 MPI processes
type: nest
rows=12800, cols=12800
  Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="sys_fieldsplit_0_", type=seqaij, rows=9600, cols=9600
(0,1) : type=seqaij, rows=9600, cols=3200
(1,0) : type=seqaij, rows=3200, cols=9600
(1,1) : prefix="sys_fieldsplit_1_", type=seqaij, rows=3200, cols=3200
 nusediter_vv  37
 nusediter_pp  37



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/The-Ocean-Cleanup-testing-continues.htm


From: Barry Smith 
Sent: Friday, January 13, 2017 7:51 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

   Yes, I would have expected this to work. Could you send the output from 
-ksp_view in this case?


> On Jan 13, 2017, at 3:46 AM, Klaij, Christiaan  wrote:

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-13 Thread Barry Smith

   Yes, I would have expected this to work. Could you send the output from 
-ksp_view in this case?


> On Jan 13, 2017, at 3:46 AM, Klaij, Christiaan  wrote:
> 
> Barry,
> 
> It's been a while but I'm finally using this function in
> 3.7.4. Is it supposed to work with fieldsplit? Here's why.
> 
> I'm solving a Navier-Stokes system with fieldsplit (pc has one
> velocity solve and one pressure solve) and trying to retrieve the
> totals like this:
> 
>  CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
>  CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
>  CALL KSPGetTotalIterations(subksp(1),nusediter_vv,ierr); CHKERRQ(ierr)
>  CALL KSPGetTotalIterations(subksp(2),nusediter_pp,ierr); CHKERRQ(ierr)
> print *, 'nusediter_vv', nusediter_vv
> print *, 'nusediter_pp', nusediter_pp
> 
> Running the code shows this surprise:
> 
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
> 
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
> 
> nusediter_vv  37
> nusediter_pp  37
> 
> So the value of nusediter_pp is indeed 37, but for nusediter_vv
> it should be 66. Any idea what went wrong?
> 
> Chris
> 
> 
> 
> 
> ____________
> From: Barry Smith 
> Sent: Saturday, April 11, 2015 12:27 AM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
> 
>  Chris,
> 
>  I have added KSPGetTotalIterations() to the branch 
> barry/add-ksp-total-iterations/master and next. After tests it will go into 
> master
> 
>  Barry
> 
>> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan  wrote:
>> 
>> Barry,
>> 
>> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
>> ksp and then KSPGetIterationNumber, but what does this number
>> mean?
>> 
>> It appears to be the number of iterations of the last time that
>> the subsystem was solved, right? If so, this corresponds to the
>> last iteration of the coupled system, how about all the previous
>> iterations?
>> 
>> Chris
>> 
>> From: Barry Smith 
>> Sent: Friday, April 10, 2015 2:48 PM
>> To: Klaij, Christiaan
>> Cc: petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>> 
>>  Chris,
>> 
>>It appears you should call PCFieldSplitGetSubKSP() and then get the 
>> information you want out of the individual KSPs. If this doesn't work please 
>> let us know.
>> 
>>  Barry
>> 
>>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan  wrote:
>>> 
>>> A question when using PCFieldSplit: for each linear iteration of
>>> the system, how many iterations for fieldsplit 0 and 1?
>>> 
>>> One way to find out is to run with -ksp_monitor,
>>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>>> gives the complete convergence history.
>>> 
>>> Another way, suggested by Matt, is to use -ksp_monitor,
>>> -fieldsplit_0_ksp_converged_reason and
>>> -fieldsplit_1_ksp_converged_reason.

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-13 Thread Klaij, Christiaan
Barry,

It's been a while but I'm finally using this function in
3.7.4. Is it supposed to work with fieldsplit? Here's why.

I'm solving a Navier-Stokes system with fieldsplit (pc has one
velocity solve and one pressure solve) and trying to retrieve the
totals like this:

  CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
  CALL KSPGetTotalIterations(subksp(1),nusediter_vv,ierr); CHKERRQ(ierr)
  CALL KSPGetTotalIterations(subksp(2),nusediter_pp,ierr); CHKERRQ(ierr)
print *, 'nusediter_vv', nusediter_vv
print *, 'nusediter_pp', nusediter_pp

Running the code shows this surprise:

  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8

  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2

 nusediter_vv  37
 nusediter_pp  37

So the value of nusediter_pp is indeed 37, but for nusediter_vv
it should be 66. Any idea what went wrong?
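
For reference, the expected totals follow directly from summing the
per-solve counts printed above; a quick Python check (the iteration
lists are copied verbatim from the converged_reason log, nothing else
is assumed):

```python
# Iteration counts copied from the converged_reason output above.
fieldsplit_0 = [1, 1, 2, 2, 7, 7, 7, 7, 8, 8, 8, 8]  # velocity solves
fieldsplit_1 = [22, 6, 3, 2, 2, 2]                    # pressure solves

print(sum(fieldsplit_0))  # 66 -- what nusediter_vv should report
print(sum(fieldsplit_1))  # 37 -- matches the reported nusediter_pp
```

Note that 37 is exactly the fieldsplit_1 total, which suggests both
calls ended up reading the same (pressure) counter.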

Chris





From: Barry Smith 
Sent: Saturday, April 11, 2015 12:27 AM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

  Chris,

  I have added KSPGetTotalIterations() to the branch 
barry/add-ksp-total-iterations/master and next. After tests it will go into 
master

  Barry

> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan  wrote:
>
> Barry,
>
> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
> ksp and then KSPGetIterationNumber, but what does this number
> mean?
>
> It appears to be the number of iterations of the last time that
> the subsystem was solved, right? If so, this corresponds to the
> last iteration of the coupled system, how about all the previous
> iterations?
>
> Chris
> 
> From: Barry Smith 
> Sent: Friday, April 10, 2015 2:48 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>
>   Chris,
>
> It appears you should call PCFieldSplitGetSubKSP() and then get the 
> information you want out of the individual KSPs. If this doesn't work please 
> let us know.
>
>   Barry
>
>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan  wrote:
>>
>> A question when using PCFieldSplit: for each linear iteration of
>> the system, how many iterations for fieldsplit 0 and 1?
>>
>> One way to find out is to run with -ksp_monitor,
>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>> gives the complete convergence history.
>>
>> Another way, suggested by Matt, is to use -ksp_monitor,
>> -fieldsplit_0_ksp_converged_reason and
>> -fieldsplit_1_ksp_converged_reason. This gives only the totals
>> for fieldsplit 0 and 1 (but without saying for which one).
>>
>> Both ways require to somehow process the output, which is a bit
>> inconvenient. Could KSPGetResidualHistory perhaps return (some)
>> information on the subsystems' convergence for processing inside
>> the code?
>>
>> Chris
>>
>>
>> dr. ir. Christiaan Klaij
>> CFD Researcher
>> Research & Development
>> E mailto:c.kl...@marin.nl
>> T +31 317 49 33 44
>>
>>
>> MARIN
>> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
>> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>>
>



Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2015-04-11 Thread Klaij, Christiaan
Barry, Matt

Thanks, the total iteration count is already very useful. Looking forward to 
the custom monitors as well.
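
Until custom monitors are available, the per-split totals can also be
recovered by post-processing the converged_reason output; a minimal
Python sketch (it assumes PETSc's default line format, "Linear
<prefix> solve converged due to <reason> iterations N", as seen in the
logs earlier in this thread):

```python
import re
from collections import defaultdict

# Matches the default -<prefix>ksp_converged_reason line format, e.g.
# "Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8"
LINE = re.compile(r"Linear (\S+) solve converged due to \S+ iterations (\d+)")

def total_iterations(log):
    """Sum iteration counts per KSP prefix from converged_reason output."""
    totals = defaultdict(int)
    for m in LINE.finditer(log):
        totals[m.group(1)] += int(m.group(2))
    return dict(totals)

log = """\
Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
"""
print(total_iterations(log))  # {'sys_fieldsplit_0_': 15, 'sys_fieldsplit_1_': 22}
```

This reports the totals per prefix, so fieldsplit 0 and 1 are
distinguished without hand-counting the monitor lines.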

Chris

From: Barry Smith 
Sent: Saturday, April 11, 2015 12:27 AM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

  Chris,

  I have added KSPGetTotalIterations() to the branch 
barry/add-ksp-total-iterations/master and next. After tests it will go into 
master

  Barry

> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan  wrote:
>
> Barry,
>
> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
> ksp and then KSPGetIterationNumber, but what does this number
> mean?
>
> It appears to be the number of iterations of the last time that
> the subsystem was solved, right? If so, this corresponds to the
> last iteration of the coupled system, how about all the previous
> iterations?
>
> Chris
> 
> From: Barry Smith 
> Sent: Friday, April 10, 2015 2:48 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>
>   Chris,
>
> It appears you should call PCFieldSplitGetSubKSP() and then get the 
> information you want out of the individual KSPs. If this doesn't work please 
> let us know.
>
>   Barry
>
>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan  wrote:
>>
>> A question when using PCFieldSplit: for each linear iteration of
>> the system, how many iterations for fieldsplit 0 and 1?
>>
>> One way to find out is to run with -ksp_monitor,
>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>> gives the complete convergence history.
>>
>> Another way, suggested by Matt, is to use -ksp_monitor,
>> -fieldsplit_0_ksp_converged_reason and
>> -fieldsplit_1_ksp_converged_reason. This gives only the totals
>> for fieldsplit 0 and 1 (but without saying for which one).
>>
>> Both ways require to somehow process the output, which is a bit
>> inconvenient. Could KSPGetResidualHistory perhaps return (some)
>> information on the subsystems' convergence for processing inside
>> the code?
>>
>> Chris
>>
>>
>>
>



Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2015-04-10 Thread Barry Smith

  Chris,

  I have added KSPGetTotalIterations() to the branch 
barry/add-ksp-total-iterations/master and next. After tests it will go into 
master

  Barry

> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan  wrote:
> 
> Barry,
> 
> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
> ksp and then KSPGetIterationNumber, but what does this number
> mean?
> 
> It appears to be the number of iterations of the last time that
> the subsystem was solved, right? If so, this corresponds to the
> last iteration of the coupled system, how about all the previous
> iterations?
> 
> Chris
> 
> From: Barry Smith 
> Sent: Friday, April 10, 2015 2:48 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
> 
>   Chris,
> 
> It appears you should call PCFieldSplitGetSubKSP() and then get the 
> information you want out of the individual KSPs. If this doesn't work please 
> let us know.
> 
>   Barry
> 
>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan  wrote:
>> 
>> A question when using PCFieldSplit: for each linear iteration of
>> the system, how many iterations for fieldsplit 0 and 1?
>> 
>> One way to find out is to run with -ksp_monitor,
>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>> gives the complete convergence history.
>> 
>> Another way, suggested by Matt, is to use -ksp_monitor,
>> -fieldsplit_0_ksp_converged_reason and
>> -fieldsplit_1_ksp_converged_reason. This gives only the totals
>> for fieldsplit 0 and 1 (but without saying for which one).
>> 
>> Both ways require to somehow process the output, which is a bit
>> inconvenient. Could KSPGetResidualHistory perhaps return (some)
>> information on the subsystems' convergence for processing inside
>> the code?
>> 
>> Chris
>> 
>> 
>> 
>