Thank you Barry.
I added a custom atol = 1.0e-12 and this makes the CFD stable with all the 
linear solver types. The CFD solution is good and the pressure is a good “zero” 
field at every CFD iteration.
I ran the same test using ASM+ILU with FGMRES (and with BCGS and GMRES) and the 
behaviour is the same.
During some CFD iterations the residual of the linear system starts slightly 
above atol and the linear solver performs a few iterations (2-3) before it 
stops because of atol.
The pressure is still different in the 2 sub-domains (of order 1.0e-14, because 
of those few linear solver iterations), so the solution is not symmetric across 
the 2 sub-domains.
I think this is a matter of round-off; do you agree? Or should I treat this 
difference as a symptom of something wrong?
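If the leftover difference is just the per-sub-domain constant admitted by the 
null space, I could project that component out of the computed pressure after 
each solve (a sketch, assuming the two-vector MatNullSpace nullspace from 
earlier in this thread and the pressure vector x):

    KSPSolve(ksp, b, x);
    MatNullSpaceRemove(nullspace, x);  /* project out the per-sub-domain constants */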

Thank you for your support.

Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: giovedì 30 settembre 2021 16:39
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


   It looks like the initial solution (guess) already solves the linear system 
to round-off; the residual is 9.010260489109e-14:

0 KSP unpreconditioned resid norm 9.010260489109e-14 true resid norm 9.010260489109e-14 ||r(i)||/||b|| 2.021559024868e+00
  0 KSP Residual norm 9.010260489109e-14 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
  1 KSP unpreconditioned resid norm 4.918108339808e-15 true resid norm 4.918171792537e-15 ||r(i)||/||b|| 1.103450292594e-01
  1 KSP Residual norm 4.918108339808e-15 % max 9.566256813737e-01 min 9.566256813737e-01 max/min 1.000000000000e+00
  2 KSP unpreconditioned resid norm 1.443599554690e-15 true resid norm 1.444867143493e-15 ||r(i)||/||b|| 3.241731154382e-02
  2 KSP Residual norm 1.443599554690e-15 % max 9.614019380614e-01 min 7.360950481750e-01 max/min 1.306083963538e+00

Thus the Krylov solver will not be able to improve the solution; it gets stuck 
trying to improve it but cannot because of round-off.

In other words, the algorithm has converged (even at the initial guess) and 
should stop immediately.

You can use -ksp_atol 1.e-12 to get it to stop immediately without iterating if 
the initial residual is less than 1e-12.
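Equivalently in code (a sketch; PETSC_DEFAULT keeps the remaining tolerances at 
their default values):

    /* Same effect as the command-line option -ksp_atol 1.e-12 */
    KSPSetTolerances(ksp, PETSC_DEFAULT /* rtol */, 1.0e-12 /* abstol */,
                     PETSC_DEFAULT /* dtol */, PETSC_DEFAULT /* maxits */);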

Barry




On Sep 30, 2021, at 4:16 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Hello Barry.
This is the output of ksp_view using fgmres and gamg. It has to be said that 
the solution of the linear system should be a zero-valued field. As you can see, 
both the unpreconditioned residual and r/b converge at this iteration of the 
CFD solver. During the time integration of the CFD, I can observe the pressure 
linear solver residuals behaving in a different way: the unpreconditioned 
residual still converges but r/b stalls. After the output of ksp_view I add the 
output of ksp_monitor_true_residual for one of these iterations where r/b stalls.
Thanks,

KSP Object: 1 MPI processes
  type: fgmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=100, nonzero initial guess
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   0.02   0.02
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Symmetric graph true
          Number of levels to square graph 1
          Number smoothing steps 0
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 1 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI processes
      type: bjacobi
        number of blocks = 1
        Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object: (mg_coarse_sub_) 1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          using DEFAULT norm type for convergence test
        PC Object: (mg_coarse_sub_) 1 MPI processes
          type: lu
          PC has not been set up so information may be incomplete
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
          linear system matrix = precond matrix:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=18, cols=18
            total: nonzeros=104, allocated nonzeros=104
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=18, cols=18
        total: nonzeros=104, allocated nonzeros=104
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0., max = 0.
        eigenvalues estimate via gmres min 0., max 0.
        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using DEFAULT norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI processes
      type: sor
        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=67, cols=67
        total: nonzeros=675, allocated nonzeros=675
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 1 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0., max = 0.
        eigenvalues estimate via gmres min 0., max 0.
        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_2_esteig_) 1 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using DEFAULT norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 1 MPI processes
      type: sor
        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=348, cols=348
        total: nonzeros=3928, allocated nonzeros=3928
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 1 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0., max = 0.
        eigenvalues estimate via gmres min 0., max 0.
        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_3_esteig_) 1 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using DEFAULT norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 1 MPI processes
      type: sor
        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=3584, cols=3584
        total: nonzeros=23616, allocated nonzeros=23616
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=3584, cols=3584
    total: nonzeros=23616, allocated nonzeros=23616
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node routines
  Pressure system has reached convergence in 0 iterations with reason 3.
  0 KSP unpreconditioned resid norm 4.798763170703e-16 true resid norm 4.798763170703e-16 ||r(i)||/||b|| 1.000000000000e+00
  0 KSP Residual norm 4.798763170703e-16 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
  1 KSP unpreconditioned resid norm 1.648749109132e-17 true resid norm 1.648749109132e-17 ||r(i)||/||b|| 3.435779284125e-02
  1 KSP Residual norm 1.648749109132e-17 % max 9.561792537103e-01 min 9.561792537103e-01 max/min 1.000000000000e+00
  2 KSP unpreconditioned resid norm 4.737880600040e-19 true resid norm 4.737880600040e-19 ||r(i)||/||b|| 9.873128619820e-04
  2 KSP Residual norm 4.737880600040e-19 % max 9.828636644296e-01 min 9.293131521763e-01 max/min 1.057623753767e+00
  3 KSP unpreconditioned resid norm 2.542212716830e-20 true resid norm 2.542212716830e-20 ||r(i)||/||b|| 5.297641551371e-05
  3 KSP Residual norm 2.542212716830e-20 % max 9.933572357920e-01 min 9.158303248850e-01 max/min 1.084652046127e+00
  4 KSP unpreconditioned resid norm 6.614510286263e-21 true resid norm 6.614510286269e-21 ||r(i)||/||b|| 1.378378146822e-05
  4 KSP Residual norm 6.614510286263e-21 % max 9.950912550705e-01 min 6.296575800237e-01 max/min 1.580368896747e+00
  5 KSP unpreconditioned resid norm 1.981505525281e-22 true resid norm 1.981505525272e-22 ||r(i)||/||b|| 4.129200493513e-07
  5 KSP Residual norm 1.981505525281e-22 % max 9.984097962703e-01 min 5.316259535293e-01 max/min 1.878030577029e+00
Linear solve converged due to CONVERGED_RTOL iterations 5

ksp_monitor_true_residual output for a CFD iteration where r/b stalls:
0 KSP unpreconditioned resid norm 9.010260489109e-14 true resid norm 9.010260489109e-14 ||r(i)||/||b|| 2.021559024868e+00
  0 KSP Residual norm 9.010260489109e-14 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
  1 KSP unpreconditioned resid norm 4.918108339808e-15 true resid norm 4.918171792537e-15 ||r(i)||/||b|| 1.103450292594e-01
  1 KSP Residual norm 4.918108339808e-15 % max 9.566256813737e-01 min 9.566256813737e-01 max/min 1.000000000000e+00
  2 KSP unpreconditioned resid norm 1.443599554690e-15 true resid norm 1.444867143493e-15 ||r(i)||/||b|| 3.241731154382e-02
  2 KSP Residual norm 1.443599554690e-15 % max 9.614019380614e-01 min 7.360950481750e-01 max/min 1.306083963538e+00
  3 KSP unpreconditioned resid norm 6.623206616803e-16 true resid norm 6.654132553541e-16 ||r(i)||/||b|| 1.492933720678e-02
  3 KSP Residual norm 6.623206616803e-16 % max 9.764112945239e-01 min 4.911485418014e-01 max/min 1.988016274960e+00
  4 KSP unpreconditioned resid norm 6.551896936698e-16 true resid norm 6.646157296305e-16 ||r(i)||/||b|| 1.491144376933e-02
  4 KSP Residual norm 6.551896936698e-16 % max 9.883425885532e-01 min 1.461270778833e-01 max/min 6.763582786091e+00
  5 KSP unpreconditioned resid norm 6.222297644887e-16 true resid norm 1.720560536914e-15 ||r(i)||/||b|| 3.860282047823e-02
  5 KSP Residual norm 6.222297644887e-16 % max 1.000409371755e+00 min 4.989767363560e-03 max/min 2.004921870829e+02
  6 KSP unpreconditioned resid norm 6.496945794974e-17 true resid norm 2.031914800253e-14 ||r(i)||/||b|| 4.558842341106e-01
  6 KSP Residual norm 6.496945794974e-17 % max 1.004914985753e+00 min 1.459258738706e-03 max/min 6.886475709192e+02
  7 KSP unpreconditioned resid norm 1.965237342540e-17 true resid norm 1.684522207337e-14 ||r(i)||/||b|| 3.779425772373e-01
  7 KSP Residual norm 1.965237342540e-17 % max 1.005737762541e+00 min 1.452603803766e-03 max/min 6.923689446035e+02
  8 KSP unpreconditioned resid norm 1.627718951285e-17 true resid norm 1.958642967520e-14 ||r(i)||/||b|| 4.394448276241e-01
  8 KSP Residual norm 1.627718951285e-17 % max 1.006364278765e+00 min 1.452081813014e-03 max/min 6.930492963590e+02
  9 KSP unpreconditioned resid norm 1.616577677764e-17 true resid norm 2.019110946644e-14 ||r(i)||/||b|| 4.530115373837e-01
  9 KSP Residual norm 1.616577677764e-17 % max 1.006648747131e+00 min 1.452031376577e-03 max/min 6.932692801059e+02
 10 KSP unpreconditioned resid norm 1.285788988203e-17 true resid norm 2.065082694477e-14 ||r(i)||/||b|| 4.633258453698e-01
 10 KSP Residual norm 1.285788988203e-17 % max 1.007469033514e+00 min 1.433291867068e-03 max/min 7.029057072477e+02
 11 KSP unpreconditioned resid norm 5.490854431580e-19 true resid norm 1.798071628891e-14 ||r(i)||/||b|| 4.034187394623e-01
 11 KSP Residual norm 5.490854431580e-19 % max 1.008058905554e+00 min 1.369401685301e-03 max/min 7.361309076612e+02
 12 KSP unpreconditioned resid norm 1.371754802104e-20 true resid norm 1.965688920064e-14 ||r(i)||/||b|| 4.410256708163e-01
 12 KSP Residual norm 1.371754802104e-20 % max 1.008409402214e+00 min 1.369243011779e-03 max/min 7.364721919624e+02
Linear solve converged due to CONVERGED_RTOL iterations 12



Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: mercoledì 29 settembre 2021 18:34
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation





On Sep 29, 2021, at 11:59 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

For the sake of completeness: explicitly building the null space using one 
vector per sub-domain makes the CFD runs using BCGS and GMRES more stable, but 
still slower than FGMRES.

  Something is strange. Please run with -ksp_view and send the output with the 
solver details.



I had divergence using BCGS and GMRES when setting the null space with only one 
constant.
Thanks

Marco Cisternino

From: Marco Cisternino
Sent: mercoledì 29 settembre 2021 17:54
To: Barry Smith <bsm...@petsc.dev>
Cc: petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] Disconnected domains and Poisson equation

Thank you Barry for the quick reply.
About the null space: I already tried what you suggest, building 2 Vecs 
(constants) with 0 and 1 chosen per sub-domain, normalizing them, and setting 
the null space like this:
MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,nconstants,constants,&nullspace);
The solution changes slightly in values, but it is still different in the two 
sub-domains.
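For concreteness, here is a minimal sketch of that construction (inFirstDomain(i) 
is a hypothetical helper standing in for however the code identifies the 
sub-domain that owns row i; the two vectors end up orthonormal because their 
supports are disjoint):

    MatNullSpace nullspace;
    Vec          constants[2];
    PetscInt     istart, iend;

    MatGetOwnershipRange(A, &istart, &iend);
    for (PetscInt k = 0; k < 2; ++k) {
      MatCreateVecs(A, &constants[k], NULL);  /* vectors with the same layout as A */
      VecSet(constants[k], 0.0);
    }
    /* put 1 in entry i of the vector belonging to the sub-domain that owns row i */
    for (PetscInt i = istart; i < iend; ++i)
      VecSetValue(constants[inFirstDomain(i) ? 0 : 1], i, 1.0, INSERT_VALUES);
    for (PetscInt k = 0; k < 2; ++k) {
      VecAssemblyBegin(constants[k]);
      VecAssemblyEnd(constants[k]);
      VecNormalize(constants[k], NULL);       /* MatNullSpaceCreate needs an orthonormal basis */
    }
    MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 2, constants, &nullspace);
    MatSetNullSpace(A, nullspace);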
About the solver: I tried BCGS, GMRES and FGMRES. The linear system is a 
pressure system in a Navier-Stokes solver, and only solving with FGMRES makes 
the CFD stable; with BCGS and GMRES the CFD solution diverges. Moreover, in the 
same case but with a single domain, the CFD solution is stable using all the 
solvers, but FGMRES converges in many fewer iterations than the others.

Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: mercoledì 29 settembre 2021 15:59
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


  The problem actually has a two-dimensional null space: constant on each 
domain, but possibly different constants. I think you need to build the 
MatNullSpace by explicitly constructing two vectors, one with 0 on one domain 
and a constant value on the other, and one with 0 on the other domain and a 
constant on the first.
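In symbols (a small aside, with \Omega_1 and \Omega_2 denoting the two 
sub-domains):

    \mathcal{N}(A) = \operatorname{span}\{v_1, v_2\}, \qquad
    (v_1)_i = \begin{cases} 1 & i \in \Omega_1 \\ 0 & i \in \Omega_2 \end{cases}, \qquad
    (v_2)_i = \begin{cases} 0 & i \in \Omega_1 \\ 1 & i \in \Omega_2 \end{cases}

so the single global constant spans only a one-dimensional subspace of this 
null space, which is why supplying just one constant vector is not enough.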

   Separate note: why use FGMRES instead of just GMRES? If the problem is 
linear and the preconditioner is linear (no GMRES inside the smoother) then you 
can just use GMRES and it will save a little space/work and be conceptually 
clearer.

  Barry

On Sep 29, 2021, at 8:46 AM, Marco Cisternino 
<marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>> wrote:

Good morning,
I want to solve the Poisson equation on a 3D domain with 2 non-connected 
sub-domains.
I am using FGMRES+GAMG and I have no problem if the two sub-domains see a 
Dirichlet boundary condition each.
On the same domain I would like to solve the Poisson equation imposing periodic 
boundary conditions in one direction and homogeneous Neumann boundary 
conditions in the other two directions. The two sub-domains are symmetric with 
respect to the separation between them, and the operator discretization and the 
right-hand side are symmetric as well. It would be nice to have the same 
solution in both the sub-domains.
Setting the null space to the constant, the solver converges to a solution 
having the same gradients in both sub-domains but different values.
Am I doing something wrong with the null space? I'm not setting a block matrix 
(one block for each sub-domain); should I?
I tested the null space against the matrix using MatNullSpaceTest and the 
answer is true. Can I do something more to have a symmetric solution as the 
outcome of the solver?
Thank you in advance for any comments and hints.

Best regards,

Marco Cisternino
