Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-07-28 Thread Jed Brown
See src/snes/tutorials/ex70.c for the code that I think was used for that paper.

Alexander Lindsay  writes:

> Sorry for the spam. Looks like these authors have published multiple papers 
> on the subject 
>
> Combining the Augmented Lagrangian Preconditioner with the Simple Schur
> Complement Approximation | SIAM Journal on Scientific Computing (doi.org)
>
>  On Jul 28, 2023, at 12:59 PM, Alexander Lindsay  
> wrote:
>
>  Do you know of anyone who has applied the augmented Lagrange methodology to 
> a finite volume discretization?
>
>  On Jul 6, 2023, at 6:25 PM, Matthew Knepley  wrote:
>
>  On Thu, Jul 6, 2023 at 8:30 PM Alexander Lindsay  
> wrote:
>
>  This is an interesting article that compares a multi-level ILU algorithm to 
> approximate commutator and augmented
>  Lagrange methods: https://doi.org/10.1002/fld.5039
>
>  That is for incompressible NS. The results are not better than 
> https://arxiv.org/abs/1810.03315, and that PC is considerably
>  simpler and already implemented in PETSc. There is an update to this:
>
>  https://epubs.siam.org/doi/abs/10.1137/21M1430698?casa_token=Fp_XhuZStZ0A:YDhnkW9XvAom_b8KocWz-hBEI7FAt46aw3ICa0FvCrOVCtYr9bwvtqJ4aBOTkDSvANKh6YTQEw
>
>  which removes the need for complicated elements.
>
>  You might need stuff like ILU for compressible flow, but I think 
> incompressible is solved.
>
>Thanks,
>
>   Matt
>   
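For context on why the augmented Lagrangian (AL) approach is attractive, a rough sketch (a paraphrase from memory, not taken from the references above): if the momentum block is augmented with a grad-div term, A_gamma = A + gamma * B^T Mp^{-1} B, then the Schur complement of the augmented system can be approximated by

  S_gamma^{-1} ~= (nu + gamma) Mp^{-1}   (up to sign convention)

where Mp is the pressure mass matrix. Larger gamma makes this approximation better and more Reynolds-robust, at the cost of making the augmented velocity block harder to solve, which is what the specialized multigrid in those references addresses.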
>  On Wed, Jun 28, 2023 at 11:37 AM Alexander Lindsay 
>  wrote:
>
>  I do believe that based off the results in https://doi.org/10.1137/040608817 
> we should be able to make LSC, with
>  proper scaling, compare very favorably with PCD
>
>  On Tue, Jun 27, 2023 at 10:41 AM Alexander Lindsay 
>  wrote:
>
>  I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which adds 
> a couple more scaling
>  applications of the inverse of the diagonal of A
>
>  On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay  
> wrote:
>
>  I guess that similar to the discussions about selfp, the approximation of 
> the velocity mass matrix by the
>  diagonal of the velocity sub-matrix will improve when running a transient as 
> opposed to a steady
>  calculation, especially if the time derivative is lumped. Just thinking 
> while typing.
>
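One way to see this (a sketch, not from the thread itself): with an implicit time discretization the velocity block looks roughly like

  F ~= M/dt + advection + diffusion terms,

so if the time derivative is lumped, diag(F) contains the full lumped-mass contribution M_lumped/dt. For small enough dt, dt*diag(F) is close to the lumped velocity mass matrix, which is why diag(F)^{-1} should become a better (scaled) mass-matrix proxy in transient runs than in steady ones.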
>  On Mon, Jun 26, 2023 at 6:03 PM Alexander Lindsay  
> wrote:
>
>  Returning to Sebastian's question about the correctness of the current LSC 
> implementation: in the
>  taxonomy paper that Jed linked to (which talks about SIMPLE, PCD, and LSC), 
> equation 21 shows four
>  applications of the inverse of the velocity mass matrix. In the PETSc 
> implementation there are at
>  most two applications of the reciprocal of the diagonal of A (an 
> approximation to the velocity mass
>  matrix without more plumbing, as already pointed out). It seems like for 
> code implementations in
>  which there are possible scaling differences between the velocity and 
> pressure equations, this
>  difference in the number of inverse applications could be significant? I 
> know Jed said that these
>  scalings wouldn't really matter if you have a uniform grid, but I'm not 100% 
> convinced yet.
>
>  I might try fiddling around with adding two more reciprocal applications.
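For reference, the scaled commutator-based (BFBt/LSC) approximation being discussed looks something like (with F the velocity block, B the divergence block, and T a diagonal scaling, ideally the diagonal of the velocity mass matrix, here approximated by diag(A)):

  S^{-1} ~= (B T^{-1} B^T)^{-1} (B T^{-1} F T^{-1} B^T) (B T^{-1} B^T)^{-1}

which indeed involves four applications of T^{-1}: one inside each of the two outer Poisson-type operators and two in the middle term. This is a sketch from memory of the form in the taxonomy paper, not a statement of what PETSc currently implements.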
>
>  On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet  
> wrote:
>
>  On 23 Jun 2023, at 10:06 PM, Pierre Jolivet  wrote:
>
>  On 23 Jun 2023, at 9:39 PM, Alexander Lindsay  
> wrote:
>
>  Ah, I see that if I use Pierre's new 'full' option for 
> -mat_schur_complement_ainv_type
>
>  That was not initially done by me
>
>  Oops, sorry for the noise, looks like it was done by me indeed in
>  9399e4fd88c6621aad8fe9558ce84df37bd6fada…
>
>  Thanks,
>  Pierre
>
>  (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit 
> to use
>  KSPMatSolve(), so that if you have a small Schur complement — which is not 
> really the case
>  for NS — this could be a viable option, it was previously painfully slow).
>
>  Thanks,
>  Pierre
>
>  that I get a single iteration for the Schur complement solve with LU. That's 
> a nice testing
>  option
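If it helps anyone reproduce that testing setup, a guess at the kind of options involved (assuming a plain fieldsplit with the default fieldsplit_0_/fieldsplit_1_ prefixes; the exact prefixes and field names depend on the application):

  -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_fact_type full
  -pc_fieldsplit_schur_precondition selfp
  -mat_schur_complement_ainv_type full
  -fieldsplit_0_pc_type lu
  -fieldsplit_1_ksp_type gmres -fieldsplit_1_ksp_monitor
  -fieldsplit_1_pc_type lu

With the exact (rather than diagonal) inverse of A00 used to build the Schur preconditioning matrix, and LU applied to it, the inner Schur solve should converge in a single iteration, as described above.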
>
>  On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay 
>  wrote:
>
>  I guess it is because the inverse of the diagonal form of A00 becomes a poor
>  representation of the inverse of A00? I guess naively I would have thought 
> that the
>  blockdiag form of A00 is A00
>
>  On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay 
>  wrote:
>
>  Hi Jed, I will come back with answers to all of your questions at some 
> point. I
>  mostly just deal with MOOSE users who come to me and tell me their solve is
>  converging slowly, asking me how to fix it. So I generally assume they have
>  built an appropriate mesh and problem size for the problem they want to solve
>  and added appropriate turbulence modeling (although my general assumption
>  is often violated).
>
>  > And to confirm, are you doing a nonlinearly implicit velocity-pressure 
> solve?
>
>  Yes, this is our default.
>
>  A general question: it seems that it is well known that the quality of 
> selfp degrades with increasing advection. Why is that?

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-07-28 Thread Alexander Lindsay
Do you know of anyone who has applied the augmented Lagrange methodology to a finite volume discretization?

On Jul 6, 2023, at 6:25 PM, Matthew Knepley wrote:

On Thu, Jul 6, 2023 at 8:30 PM Alexander Lindsay wrote:

This is an interesting article that compares a multi-level ILU algorithm to approximate commutator and augmented Lagrange methods: https://doi.org/10.1002/fld.5039

That is for incompressible NS. The results are not better than https://arxiv.org/abs/1810.03315, and that PC is considerably simpler and already implemented in PETSc. There is an update to this:

https://epubs.siam.org/doi/abs/10.1137/21M1430698?casa_token=Fp_XhuZStZ0A:YDhnkW9XvAom_b8KocWz-hBEI7FAt46aw3ICa0FvCrOVCtYr9bwvtqJ4aBOTkDSvANKh6YTQEw

which removes the need for complicated elements.

You might need stuff like ILU for compressible flow, but I think incompressible is solved.

Thanks,

Matt

On Wed, Jun 28, 2023 at 11:37 AM Alexander Lindsay wrote:

I do believe that based off the results in https://doi.org/10.1137/040608817 we should be able to make LSC, with proper scaling, compare very favorably with PCD

On Tue, Jun 27, 2023 at 10:41 AM Alexander Lindsay wrote:

I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which adds a couple more scaling applications of the inverse of the diagonal of A

On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay wrote:

I guess that similar to the discussions about selfp, the approximation of the velocity mass matrix by the diagonal of the velocity sub-matrix will improve when running a transient as opposed to a steady calculation, especially if the time derivative is lumped. Just thinking while typing

On Mon, Jun 26, 2023 at 6:03 PM Alexander Lindsay wrote:

Returning to Sebastian's question about the correctness of the current LSC implementation: in the taxonomy paper that Jed linked to (which talks about SIMPLE, PCD, and LSC), equation 21 shows four applications of the inverse of the velocity mass matrix. In the PETSc implementation there are at most two applications of the reciprocal of the diagonal of A (an approximation to the velocity mass matrix without more plumbing, as already pointed out). It seems like for code implementations in which there are possible scaling differences between the velocity and pressure equations, this difference in the number of inverse applications could be significant? I know Jed said that these scalings wouldn't really matter if you have a uniform grid, but I'm not 100% convinced yet.

I might try fiddling around with adding two more reciprocal applications.

On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet wrote:

On 23 Jun 2023, at 10:06 PM, Pierre Jolivet wrote:

On 23 Jun 2023, at 9:39 PM, Alexander Lindsay wrote:

Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type

That was not initially done by me

Oops, sorry for the noise, looks like it was done by me indeed in 9399e4fd88c6621aad8fe9558ce84df37bd6fada…

Thanks,
Pierre

(though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use KSPMatSolve(), so that if you have a small Schur complement — which is not really the case for NS — this could be a viable option, it was previously painfully slow).

Thanks,
Pierre

that I get a single iteration for the Schur complement solve with LU. That's a nice testing option

On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay wrote:

I guess it is because the inverse of the diagonal form of A00 becomes a poor representation of the inverse of A00? I guess naively I would have thought that the blockdiag form of A00 is A00

On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay wrote:

Hi Jed, I will come back with answers to all of your questions at some point. I mostly just deal with MOOSE users who come to me and tell me their solve is converging slowly, asking me how to fix it. So I generally assume they have built an appropriate mesh and problem size for the problem they want to solve and added appropriate turbulence modeling (although my general assumption is often violated).

> And to confirm, are you doing a nonlinearly implicit velocity-pressure solve?

Yes, this is our default.

A general question: it seems that it is well known that the quality of selfp degrades with increasing advection. Why is that?

On Wed, Jun 7, 2023 at 8:01 PM Jed Brown wrote:

Alexander Lindsay writes:

> This has been a great discussion to follow. Regarding
>
>> when time stepping, you have enough mass matrix that cheaper preconditioners are good enough
>
> I'm curious what some algebraic recommendations might be for high Re in
> transients. 

What mesh aspect ratio and streamline CFL number? Assuming your model is turbulent, can you say anything about