Re: [petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran Interface

2023-05-08 Thread Danyang Su
Thanks, Mark. Yes, it actually works when I update to
ISLocalToGlobalMappingGetIndicesF90. I made a mistake in reporting that it did not
work.

 

Danyang

 

From: Mark Adams 
Date: Monday, May 8, 2023 at 7:22 PM
To: Danyang Su 
Cc: petsc-users 
Subject: Re: [petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran 
Interface

 

 

 

On Mon, May 8, 2023 at 6:50 PM Danyang Su  wrote:

Dear PETSc-Users,

 

Are there any changes to the ISLocalToGlobalMappingGetIndices function after PETSc
3.17? 

 

In previous PETSc versions (<= 3.17), the function 
‘ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)’ works fine, even 
though the value of idltog looks out of bounds (-11472655627481), 
https://www.mcs.anl.gov/petsc/petsc-3.14/src/ksp/ksp/tutorials/ex14f.F90.html. 
What the value of idltog means is not clear.

 

In the latest PETSc version, this function can be called, but due to the 
extreme value of idltog, the code fails. I also tried to use 
‘ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)’ but without success.

 

* You do want the latter:

 

doc/changes/319.rst:- Deprecate ``ISLocalToGlobalMappingGetIndices()`` in favor 
of ``ISLocalToGlobalMappingGetIndicesF90()``

 

* You might look at a test:

 

src/ksp/ksp/tutorials/ex14f.F90:  
PetscCall(ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr))

 

* If you use 64-bit integers, be careful. 

 

* You want to use a memory checker like Valgrind or Sanitize.

 

Mark

 

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)

call DMDAGetGlobalIndicesF90(dmda_flow%da,PETSC_NULL_INTEGER,  &

 idx,ierr)

CHKERRQ(ierr)

#else

call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)

CHKERRQ(ierr)

call ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)

CHKERRQ(ierr)

#endif



dof = dmda_flow%dof

  

do ivol = 1, nngl

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)

node_idx_lg2pg(ivol) = (idx(ivol*dof)+1)/dof

#else

node_idx_lg2pg(ivol) = (ltog(ivol*dof + idltog)+1)/dof

#endif

   end do

 

Any suggestions on that?

 

Thanks,

 

Danyang



Re: [petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran Interface

2023-05-08 Thread Mark Adams
On Mon, May 8, 2023 at 6:50 PM Danyang Su  wrote:

> Dear PETSc-Users,
>
>
>
> Are there any changes to the ISLocalToGlobalMappingGetIndices function after
> PETSc 3.17?
>
>
>
> In previous PETSc versions (<= 3.17), the function
> ‘ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)’ works fine, even
> though the value of idltog looks out of bounds (-11472655627481),
> https://www.mcs.anl.gov/petsc/petsc-3.14/src/ksp/ksp/tutorials/ex14f.F90.html.
> What the value of idltog means is not clear.
>
>
>
> In the latest PETSc version, this function can be called, but due to the
> extreme value of idltog, the code fails. I also tried to use
> ‘ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)’ but without success.
>

* You do want the latter:

doc/changes/319.rst:- Deprecate ``ISLocalToGlobalMappingGetIndices()`` in
favor of ``ISLocalToGlobalMappingGetIndicesF90()``

* You might look at a test:

src/ksp/ksp/tutorials/ex14f.F90:
 PetscCall(ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr))

* If you use 64-bit integers, be careful.

* You want to use a memory checker like Valgrind or Sanitize.
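
As a minimal sketch (assuming PETSc >= 3.19 and reusing the variable names from
the quoted snippet below), the deprecated call and the offset indexing could be
replaced along these lines:

      ! sketch only: the F90 interface returns a 1-based Fortran pointer,
      ! so the idltog offset of the deprecated interface is no longer needed
      PetscInt, pointer :: ltog(:)
      ISLocalToGlobalMapping :: ltogm

      call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)
      CHKERRQ(ierr)
      call ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)
      CHKERRQ(ierr)

      do ivol = 1, nngl
         node_idx_lg2pg(ivol) = (ltog(ivol*dof)+1)/dof
      end do

      call ISLocalToGlobalMappingRestoreIndicesF90(ltogm,ltog,ierr)
      CHKERRQ(ierr)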

Mark


>
> #if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)
>
> call DMDAGetGlobalIndicesF90(dmda_flow%da,PETSC_NULL_INTEGER,  &
>
>  idx,ierr)
>
> CHKERRQ(ierr)
>
> #else
>
> call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)
>
> CHKERRQ(ierr)
>
> call ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)
>
> CHKERRQ(ierr)
>
> #endif
>
>
>
> dof = dmda_flow%dof
>
>
>
> do ivol = 1, nngl
>
>
>
> #if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)
>
> node_idx_lg2pg(ivol) = (idx(ivol*dof)+1)/dof
>
> #else
>
> node_idx_lg2pg(ivol) = (ltog(ivol*dof + idltog)+1)/dof
>
> #endif
>
>end do
>
>
>
> Any suggestions on that?
>
>
>
> Thanks,
>
>
>
> Danyang
>


Re: [petsc-users] Help with KSPSetConvergenceTest

2023-05-08 Thread Barry Smith

  See https://gitlab.com/petsc/petsc/-/merge_requests/6436 with commit 
barry/2023-05-08/add-ksp-min-its 



> On May 7, 2023, at 2:22 PM, Edoardo alinovi  wrote:
> 
> Hello Barry,
> 
> Mega! Thank you very much for providing me with a working example! I ended 
> up writing this:
> 
> call KSPConvergedDefault(ksp, n, rnorm, flag, PETSC_NULL_FUNCTION, 
> ierr)
> if (n < min_its) then
>    flag = 0
> endif
> ierr = 0
> 
> and it looks to be working, but I'll take advantage of your suggestion ;) Is 
> KSPConvergedDefaultDestroy mandatory?
> 
> I know it's easy code, but maybe you might think about adding this control 
> and exposing it the way the max number of iterations is exposed in KSP. I can tell you it is 
> very much used; I have found myself many times needing to tell the solver to 
> iterate regardless of the tolerance criteria. It happens for example in the 
> RANS equations, especially for omega. Sometimes they tend to stall and you do want 
> to tighten the tolerances for a bunch of iterations, or you might not know whether they 
> do while iterating! 
> 
> Cheers
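
A minimal sketch of such a convergence test, following the pattern of
src/ksp/ksp/tutorials/ex2f.F90 (min_its is an assumed placeholder for the
desired minimum iteration count, and the registration call is shown in a
comment):

      subroutine MyKSPConverged(ksp,n,rnorm,flag,dummy,ierr)
        ! sketch only: run at least min_its iterations, then defer to the default test
        KSP                 :: ksp
        PetscInt            :: n, dummy
        PetscReal           :: rnorm
        KSPConvergedReason  :: flag
        PetscErrorCode      :: ierr
        PetscInt, parameter :: min_its = 5   ! assumed minimum number of iterations

        call KSPConvergedDefault(ksp,n,rnorm,flag,PETSC_NULL_FUNCTION,ierr)
        if (n < min_its) flag = KSP_CONVERGED_ITERATING   ! i.e. flag = 0, keep iterating
        ierr = 0
      end subroutine MyKSPConverged

      ! registered with, e.g.:
      ! call KSPSetConvergenceTest(ksp,MyKSPConverged,0,PETSC_NULL_FUNCTION,ierr)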



[petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran Interface

2023-05-08 Thread Danyang Su
Dear PETSc-Users,

 

Are there any changes to the ISLocalToGlobalMappingGetIndices function after PETSc 
3.17? 

 

In previous PETSc versions (<= 3.17), the function 
‘ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)’ works fine, even 
though the value of idltog looks out of bounds (-11472655627481), 
https://www.mcs.anl.gov/petsc/petsc-3.14/src/ksp/ksp/tutorials/ex14f.F90.html. 
What the value of idltog means is not clear.

 

In the latest PETSc version, this function can be called, but due to the 
extreme value of idltog, the code fails. I also tried to use 
‘ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)’ but without success. 

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)

    call DMDAGetGlobalIndicesF90(dmda_flow%da,PETSC_NULL_INTEGER,  &

 idx,ierr)

    CHKERRQ(ierr)

#else

    call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)

    CHKERRQ(ierr)

    call ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)

    CHKERRQ(ierr)

#endif

    

dof = dmda_flow%dof

  

do ivol = 1, nngl

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)

    node_idx_lg2pg(ivol) = (idx(ivol*dof)+1)/dof

#else

    node_idx_lg2pg(ivol) = (ltog(ivol*dof + idltog)+1)/dof

#endif

    end do

 

Any suggestions on that?

 

Thanks,

 

Danyang



Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-05-08 Thread Jed Brown
Sebastian Blauth  writes:

> Hello everyone,
>
> I wanted to briefly follow up on my question (see my last reply).
> Does anyone know / have an idea why the LSC preconditioner in PETSc does 
> not seem to scale well with the problem size (the outer fgmres solver I 
> am using scales nearly linearly with the problem size in my example).

The implementation was tested on heterogeneous Stokes problems from 
geodynamics, and perhaps not on NS (or not with the discretization you're 
using).

https://doi.org/10.1016/j.pepi.2008.07.036

There is a comment about not having plumbing to provide a mass matrix. A few 
lines earlier there is code using PetscObjectQuery, and that same pattern could 
be applied for the mass matrix. If you're on a roughly uniform mesh, including 
the mass scaling will probably have little effect, but it could have a big 
impact in the presence of highly anisotropic elements or a broad range of scales.

I don't think LSC has gotten a lot of use since everyone I know who tried it 
has been sort of disappointed relative to other methods (e.g., inverse 
viscosity scaled mass matrix for heterogeneous Stokes, PCD for moderate Re 
Navier-Stokes). Of course there are no steady solutions to high Re so you 
either have a turbulence model or are time stepping. I'm not aware of work with 
LSC with turbulence models, and when time stepping, you have enough mass matrix 
that cheaper preconditioners are good enough. That said, it would be a great 
contribution to support this scaling.
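
Schematically, the compose/query pattern referred to above looks like the
following sketch (Sp stands for the matrix used to build the Schur-complement
preconditioner; "LSC_L" and "LSC_Lp" are the keys PCLSC documents for
user-supplied operators, while "LSC_M" is purely hypothetical and only marks
where a mass-matrix hook would go):

      /* user side: attach operators to the Schur-complement preconditioning matrix */
      PetscCall(PetscObjectCompose((PetscObject)Sp, "LSC_L",  (PetscObject)L));
      PetscCall(PetscObjectCompose((PetscObject)Sp, "LSC_Lp", (PetscObject)Lp));
      PetscCall(PetscObjectCompose((PetscObject)Sp, "LSC_M",  (PetscObject)M)); /* hypothetical key */

      /* inside lsc.c, the matching lookup is roughly */
      Mat L;
      PetscCall(PetscObjectQuery((PetscObject)Sp, "LSC_L", (PetscObject *)&L));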

> I have also already tried using -ksp_diagonal_scale but the results are 
> identical.

That's expected, and likely to mess up some MG implementations so I wouldn't 
recommend it.


Re: [petsc-users] Petsc ObjectStateIncrease without private header

2023-05-08 Thread Stefano Zampini
You can achieve the same effect by calling MatAssemblyBegin/End
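
A minimal sketch of that approach (petsc_A is the matrix created with
MatCreateSeqAIJWithArrays; the values array it wraps has already been updated
outside PETSc):

      #include <petscmat.h>

      /* sketch only: a final-assembly pair bumps the matrix state after the
         wrapped CSR values were modified externally (same sparsity), so the
         next solve sees the new values - no private header required */
      static PetscErrorCode RefreshWrappedMatrix(Mat petsc_A)
      {
        PetscFunctionBeginUser;
        PetscCall(MatAssemblyBegin(petsc_A, MAT_FINAL_ASSEMBLY));
        PetscCall(MatAssemblyEnd(petsc_A, MAT_FINAL_ASSEMBLY));
        PetscFunctionReturn(0);
      }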

On Mon, May 8, 2023 at 3:54 PM Pichler, Franz <
franz.pich...@v2c2.at> wrote:

> Hello,
> I am using PETSc in a single-CPU setup where I have preassembled CRS
> matrices that I wrap via PETSc’s
>
> MatCreateSeqAIJWithArrays functionality.
>
>
>
> Now I manipulate the values of these matrices (without changing the
> sparsity) without using PETSc.
>
>
>
> When I now want to solve again I have to call
>
> PetscObjectStateIncrease((PetscObject)petsc_A);
>
>
>
> So that PETSc actually solves again (otherwise it thinks nothing has
> changed).
>
> This means I have to include the private header
>
> #include 
>
>
>
> Which makes a seamless integration of PETSc into a CMake process
> more complicated (this header has to be stated explicitly in the CMake process
> at the moment).
>
>
>
> I would like to resolve that by “going” around the private header,
>
> My first intuition was to increase the state by hand
>
> ((PetscObject)petsc_A_aux[the_sys])->state++;
>
> This is the definition of PetscObjectStateIncrease in the header. This throws me
> an
>
> error: invalid use of incomplete type ‘struct _p_PetscObject’
>
>
>
> compilation error.
>
>
>
> Is there any elegant way around this?
>
> This is the first time I use the petsc mailing list so apologies for any
> beginners mistake I did in formatting or anything else.
>
>
>
> Best regards
>
>
>
> Franz Pichler
>
>
>


-- 
Stefano


[petsc-users] Petsc ObjectStateIncrease without private header

2023-05-08 Thread Pichler, Franz
Hello,
I am using PETSc in a single-CPU setup where I have preassembled CRS matrices 
that I wrap via PETSc's
MatCreateSeqAIJWithArrays functionality.

Now I manipulate the values of these matrices (without changing the sparsity) 
without using PETSc.

When I now want to solve again I have to call
PetscObjectStateIncrease((PetscObject)petsc_A);

So that PETSc actually solves again (otherwise it thinks nothing has changed).
This means I have to include the private header
#include 

Which makes a seamless integration of PETSc into a CMake process more 
complicated (this header has to be stated explicitly in the CMake process at the 
moment).

I would like to resolve that by "going" around the private header,
My first intuition was to increase the state by hand
((PetscObject)petsc_A_aux[the_sys])->state++;

This is the definition of PetscObjectStateIncrease in the header. This throws me an
error: invalid use of incomplete type 'struct _p_PetscObject'

compilation error.

Is there any elegant way around this?
This is the first time I use the petsc mailing list so apologies for any 
beginners mistake I did in formatting or anything else.

Best regards

Franz Pichler



Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-05-08 Thread Matthew Knepley
On Mon, May 8, 2023 at 7:27 AM Mark Adams  wrote:

>
>
> On Mon, May 8, 2023 at 2:32 AM Sebastian Blauth <
> sebastian.bla...@itwm.fraunhofer.de> wrote:
>
>> Hello everyone,
>>
>> I wanted to briefly follow up on my question (see my last reply).
>> Does anyone know / have an idea why the LSC preconditioner in PETSc does
>> not seem to scale well with the problem size (the outer fgmres solver I
>> am using scales nearly linearly with the problem size in my
>> example).
>> I have also already tried using -ksp_diagonal_scale but the results are
>> identical.
>> Any help is really appreciated.
>>
>
> I would start by finding results in the literature that you like, in that
> they are on a problem similar to yours and you like the performance, and
> reproduce them in your code.
> If you can do that then you have a research problem to see how to get your
> problem to work well.
> If your solver does not reproduce published results then we might be
> able to provide some advice.
>

I have not seen LSC scale well. The only thing I have seen to actually work
is patch smoothing (I gave a paper reference). It could
be that we have a bug in LSC, but I thought we verified it with the
Shuttleworth paper.

  Thanks,

 Matt


> Mark
>
>
>>
>> Thanks a lot,
>> Sebastian
>>
>> On 03.05.2023 09:07, Sebastian Blauth wrote:
>> > First of all, yes you are correct that I am trying to solve the
>> > stationary incompressible Navier Stokes equations.
>> >
>> > On 02.05.2023 21:33, Matthew Knepley wrote:
>> >> On Tue, May 2, 2023 at 2:29 PM Jed Brown wrote:
>> >>
>> >> Sebastian Blauth writes:
>> >>
>> >>  > I agree with your comment for the Stokes equations - for these,
>> I
>> >> have
>> >>  > already tried and used the pressure mass matrix as part of a
>> >> (additive)
>> >>  > block preconditioner and it gave mesh independent results.
>> >>  >
>> >>  > However, for the Navier Stokes equations, is the Schur
>> complement
>> >> really
>> >>  > spectrally equivalent to the pressure mass matrix?
>> >>
>> >> No, it's not. You'd want something like PCD (better, but not
>> >> algebraic) or LSC.
>> >>
>> >
>> > I would like to take a look at the LSC preconditioner. For this, I did
>> > also not achieve mesh-independent results. I am using the following
>> > options (I know that the tolerances are too high at the moment, but it
>> > should just illustrate the behavior w.r.t. mesh refinement). Again, I
>> am
>> > using a simple 2D channel problem for testing purposes.
>> >
>> > I am using the following options
>> >
>> > -ksp_type fgmres
>> > -ksp_gmres_restart 100
>> > -ksp_gmres_cgs_refinement_type refine_ifneeded
>> > -ksp_max_it 1000
>> > -ksp_rtol 1e-10
>> > -ksp_atol 1e-30
>> > -pc_type fieldsplit
>> > -pc_fieldsplit_type schur
>> > -pc_fieldsplit_schur_fact_type full
>> > -pc_fieldsplit_schur_precondition self
>> > -fieldsplit_0_ksp_type preonly
>> > -fieldsplit_0_pc_type lu
>> > -fieldsplit_1_ksp_type gmres
>> > -fieldsplit_1_ksp_pc_side right
>> > -fieldsplit_1_ksp_gmres_restart 100
>> > -fieldsplit_1_ksp_gmres_cgs_refinement_type refine_ifneeded
>> > -fieldsplit_1_ksp_max_it 1000
>> > -fieldsplit_1_ksp_rtol 1e-10
>> > -fieldsplit_1_ksp_atol 1e-30
>> > -fieldsplit_1_pc_type lsc
>> > -fieldsplit_1_lsc_ksp_type preonly
>> > -fieldsplit_1_lsc_pc_type lu
>> > -fieldsplit_1_ksp_converged_reason
>> >
>> > Again, the direct solvers are used so that only the influence of the
>> LSC
>> > preconditioner is seen. I have suitable preconditioners for all of
>> these
>> > available (using boomeramg).
>> >
>> > At the bottom, I attach the output for different discretizations. As
>> you
>> > can see there, the number of iterations increases nearly linearly with
>> > the problem size.
>> >
>> > I think that the problem could occur due to a wrong scaling. In your
>> > docs https://petsc.org/release/manualpages/PC/PCLSC/ , you write that
>> > the LSC preconditioner is implemented as
>> >
>> > inv(S) \approx inv(A10 A01) A10 A00 A01 inv(A10 A01)
>> >
>> > However, in the book of Elman, Silvester and Wathen (Finite Elements
>> and
>> > Fast Iterative Solvers), the LSC preconditioner is defined as
>> >
>> >  inv(S) \approx inv(A10 inv(T) A01) A10 inv(T) A00 inv(T) A01
>> > inv(A10 inv(T) A01)
>> >
>> > where T = diag(Q) and Q is the velocity mass matrix.
>> >
>> > There is an option -pc_lsc_scale_diag, which states that it uses the
>> > diagonal of A for scaling. I suppose that this means that the
>> diagonal
>> > of the A00 block is used for scaling - however, this is not the right
>> > scaling, is it? Even in the source code for the LSC preconditioner, in
>> > /src/ksp/pc/impls/lsc/lsc.c it is mentioned, that a mass matrix should
>> > be used...
>> > Is there any way to implement this in PETSc? Maybe by supplying the
>> mass
>> > matrix as Pmat?
>> >
>> > Thanks a lot in advance,
>> > Sebastian

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-05-08 Thread Mark Adams
On Mon, May 8, 2023 at 2:32 AM Sebastian Blauth <
sebastian.bla...@itwm.fraunhofer.de> wrote:

> Hello everyone,
>
> I wanted to briefly follow up on my question (see my last reply).
> Does anyone know / have an idea why the LSC preconditioner in PETSc does
> not seem to scale well with the problem size (the outer fgmres solver I
> am using scales nearly linearly with the problem size in my example).
> I have also already tried using -ksp_diagonal_scale but the results are
> identical.
> Any help is really appreciated.
>

I would start by finding results in the literature that you like, in that
they are on a problem similar to yours and you like the performance, and
reproduce them in your code.
If you can do that then you have a research problem to see how to get your
problem to work well.
If your solver does not reproduce published results then we might be
able to provide some advice.

Mark


>
> Thanks a lot,
> Sebastian
>
> On 03.05.2023 09:07, Sebastian Blauth wrote:
> > First of all, yes you are correct that I am trying to solve the
> > stationary incompressible Navier Stokes equations.
> >
> > On 02.05.2023 21:33, Matthew Knepley wrote:
> >> On Tue, May 2, 2023 at 2:29 PM Jed Brown wrote:
> >>
> >> Sebastian Blauth writes:
> >>
> >>  > I agree with your comment for the Stokes equations - for these, I
> >> have
> >>  > already tried and used the pressure mass matrix as part of a
> >> (additive)
> >>  > block preconditioner and it gave mesh independent results.
> >>  >
> >>  > However, for the Navier Stokes equations, is the Schur complement
> >> really
> >>  > spectrally equivalent to the pressure mass matrix?
> >>
> >> No, it's not. You'd want something like PCD (better, but not
> >> algebraic) or LSC.
> >>
> >
> > I would like to take a look at the LSC preconditioner. For this, I did
> > also not achieve mesh-independent results. I am using the following
> > options (I know that the tolerances are too high at the moment, but it
> > should just illustrate the behavior w.r.t. mesh refinement). Again, I am
> > using a simple 2D channel problem for testing purposes.
> >
> > I am using the following options
> >
> > -ksp_type fgmres
> > -ksp_gmres_restart 100
> > -ksp_gmres_cgs_refinement_type refine_ifneeded
> > -ksp_max_it 1000
> > -ksp_rtol 1e-10
> > -ksp_atol 1e-30
> > -pc_type fieldsplit
> > -pc_fieldsplit_type schur
> > -pc_fieldsplit_schur_fact_type full
> > -pc_fieldsplit_schur_precondition self
> > -fieldsplit_0_ksp_type preonly
> > -fieldsplit_0_pc_type lu
> > -fieldsplit_1_ksp_type gmres
> > -fieldsplit_1_ksp_pc_side right
> > -fieldsplit_1_ksp_gmres_restart 100
> > -fieldsplit_1_ksp_gmres_cgs_refinement_type refine_ifneeded
> > -fieldsplit_1_ksp_max_it 1000
> > -fieldsplit_1_ksp_rtol 1e-10
> > -fieldsplit_1_ksp_atol 1e-30
> > -fieldsplit_1_pc_type lsc
> > -fieldsplit_1_lsc_ksp_type preonly
> > -fieldsplit_1_lsc_pc_type lu
> > -fieldsplit_1_ksp_converged_reason
> >
> > Again, the direct solvers are used so that only the influence of the LSC
> > preconditioner is seen. I have suitable preconditioners for all of these
> > available (using boomeramg).
> >
> > At the bottom, I attach the output for different discretizations. As you
> > can see there, the number of iterations increases nearly linearly with
> > the problem size.
> >
> > I think that the problem could occur due to a wrong scaling. In your
> > docs https://petsc.org/release/manualpages/PC/PCLSC/ , you write that
> > the LSC preconditioner is implemented as
> >
> > inv(S) \approx inv(A10 A01) A10 A00 A01 inv(A10 A01)
> >
> > However, in the book of Elman, Silvester and Wathen (Finite Elements and
> > Fast Iterative Solvers), the LSC preconditioner is defined as
> >
> >  inv(S) \approx inv(A10 inv(T) A01) A10 inv(T) A00 inv(T) A01
> > inv(A10 inv(T) A01)
> >
> > where T = diag(Q) and Q is the velocity mass matrix.
> >
> > There is an option -pc_lsc_scale_diag, which states that it uses the
> > diagonal of A for scaling. I suppose that this means that the diagonal
> > of the A00 block is used for scaling - however, this is not the right
> > scaling, is it? Even in the source code for the LSC preconditioner, in
> > /src/ksp/pc/impls/lsc/lsc.c it is mentioned, that a mass matrix should
> > be used...
> > Is there any way to implement this in PETSc? Maybe by supplying the mass
> > matrix as Pmat?
> >
> > Thanks a lot in advance,
> > Sebastian
> >
> >>
> >> I think you can do a better job than that using something like
> >>
> >> https://arxiv.org/abs/1810.03315 
> >>
> >> Basically, you use an augmented Lagrangian thing to make the Schur
> >> complement well-conditioned,
> >> and then use a special smoother to handle that perturbation.
> >>
> >>  > And even if it is, the convergence is only good for small
> >> Reynolds numbers, for moderately high ones 

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-05-08 Thread Sebastian Blauth

Hello everyone,

I wanted to briefly follow up on my question (see my last reply).
Does anyone know / have an idea why the LSC preconditioner in PETSc does 
not seem to scale well with the problem size (the outer fgmres solver I 
am using scales nearly linearly with the problem size in my example).
I have also already tried using -ksp_diagonal_scale but the results are 
identical.

Any help is really appreciated.

Thanks a lot,
Sebastian

On 03.05.2023 09:07, Sebastian Blauth wrote:
First of all, yes you are correct that I am trying to solve the 
stationary incompressible Navier Stokes equations.


On 02.05.2023 21:33, Matthew Knepley wrote:
On Tue, May 2, 2023 at 2:29 PM Jed Brown wrote:


    Sebastian Blauth <sebastian.bla...@itwm.fraunhofer.de> writes:

 > I agree with your comment for the Stokes equations - for these, I
    have
 > already tried and used the pressure mass matrix as part of a
    (additive)
 > block preconditioner and it gave mesh independent results.
 >
 > However, for the Navier Stokes equations, is the Schur complement
    really
 > spectrally equivalent to the pressure mass matrix?

    No, it's not. You'd want something like PCD (better, but not
    algebraic) or LSC.



I would like to take a look at the LSC preconditioner. For this, I did 
also not achieve mesh-independent results. I am using the following 
options (I know that the tolerances are too high at the moment, but it 
should just illustrate the behavior w.r.t. mesh refinement). Again, I am 
using a simple 2D channel problem for testing purposes.


I am using the following options

-ksp_type fgmres
-ksp_gmres_restart 100
-ksp_gmres_cgs_refinement_type refine_ifneeded
-ksp_max_it 1000
-ksp_rtol 1e-10
-ksp_atol 1e-30
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_fact_type full
-pc_fieldsplit_schur_precondition self
-fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type lu
-fieldsplit_1_ksp_type gmres
-fieldsplit_1_ksp_pc_side right
-fieldsplit_1_ksp_gmres_restart 100
-fieldsplit_1_ksp_gmres_cgs_refinement_type refine_ifneeded
-fieldsplit_1_ksp_max_it 1000
-fieldsplit_1_ksp_rtol 1e-10
-fieldsplit_1_ksp_atol 1e-30
-fieldsplit_1_pc_type lsc
-fieldsplit_1_lsc_ksp_type preonly
-fieldsplit_1_lsc_pc_type lu
-fieldsplit_1_ksp_converged_reason

Again, the direct solvers are used so that only the influence of the LSC 
preconditioner is seen. I have suitable preconditioners for all of these 
available (using boomeramg).


At the bottom, I attach the output for different discretizations. As you 
can see there, the number of iterations increases nearly linearly with 
the problem size.


I think that the problem could occur due to a wrong scaling. In your 
docs https://petsc.org/release/manualpages/PC/PCLSC/ , you write that 
the LSC preconditioner is implemented as


    inv(S) \approx inv(A10 A01) A10 A00 A01 inv(A10 A01)

However, in the book of Elman, Silvester and Wathen (Finite Elements and 
Fast Iterative Solvers), the LSC preconditioner is defined as


     inv(S) \approx inv(A10 inv(T) A01) A10 inv(T) A00 inv(T) A01 
inv(A10 inv(T) A01)


where T = diag(Q) and Q is the velocity mass matrix.
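
Written out (restating the two formulas above in LaTeX notation):

    \[ S^{-1} \approx (A_{10} A_{01})^{-1} \, A_{10} A_{00} A_{01} \, (A_{10} A_{01})^{-1} \]

versus the mass-scaled variant of Elman, Silvester and Wathen,

    \[ S^{-1} \approx (A_{10} T^{-1} A_{01})^{-1} \, A_{10} T^{-1} A_{00} T^{-1} A_{01} \, (A_{10} T^{-1} A_{01})^{-1}, \qquad T = \operatorname{diag}(Q), \]

with Q the velocity mass matrix.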

There is an option -pc_lsc_scale_diag, which states that it uses the 
diagonal of A for scaling. I suppose that this means that the diagonal 
of the A00 block is used for scaling - however, this is not the right 
scaling, is it? Even in the source code for the LSC preconditioner, in 
/src/ksp/pc/impls/lsc/lsc.c it is mentioned that a mass matrix should 
be used...
Is there any way to implement this in PETSc? Maybe by supplying the mass 
matrix as Pmat?


Thanks a lot in advance,
Sebastian



I think you can do a better job than that using something like

https://arxiv.org/abs/1810.03315 

Basically, you use an augmented Lagrangian thing to make the Schur 
complement well-conditioned,

and then use a special smoother to handle that perturbation.

 > And even if it is, the convergence is only good for small
    Reynolds numbers, for moderately high ones the convergence really
    deteriorates. This is why I am trying to make
    fieldsplit_schur_precondition selfp work better (this is, if I
    understand it correctly, a SIMPLE type preconditioner).

    SIMPLE is for short time steps (not too far from resolving CFL) and
    bad for steady. This taxonomy is useful, though the problems are
    super academic and they don't use high aspect ratio.



Okay, I get that I cannot expect the SIMPLE preconditioning 
(schur_precondition selfp) to work efficiently. I guess the reason it 
works for small time steps (for the instationary equations) is that the 
velocity block becomes diagonally dominant in this case, so that diag(A) 
is a good approximation of A.




    https://doi.org/10.1016/j.jcp.2007.09.026
    


    Thanks,

       Matt

--
What most experimenters take for 

Re: [petsc-users] Fortran preprocessor not work in pets-dev

2023-05-08 Thread Danyang Su
Hi Satish,

Exactly. Something went wrong when I pulled the remote content last week. I made 
a clean download of the dev version and the problem is solved.

Thanks,

Danyang

On 2023-05-07, 7:14 AM, "Satish Balay" <ba...@mcs.anl.gov> wrote:


Perhaps you are not using the latest 'main' (or release) branch?


I get (with current main):


$ mpiexec -n 4 ./petsc_fppflags
compiled by STANDARD_FORTRAN compiler
called by rank 0
called by rank 1
called by rank 2
called by rank 3


There was an issue with the early petsc-3.19 release - here one had to reorder the 
lines from:


FPPFLAGS =
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules


to


include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules
FPPFLAGS =


But this is fixed in the latest release and main branches.
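
Put together, a minimal Fortran makefile along those lines might look like the
sketch below (petsc_fppflags.F90 is the assumed source file; with the fixed
releases FPPFLAGS can stay in its usual place before the includes, while for
the affected early 3.19 release it had to be moved after them, as noted above):

      EXENAME  = petsc_fppflags
      OBJS     = petsc_fppflags.o
      FPPFLAGS = -DSTANDARD_FORTRAN

      include ${PETSC_DIR}/lib/petsc/conf/variables
      include ${PETSC_DIR}/lib/petsc/conf/rules

      exe: $(OBJS) chkopts
      	-${FLINKER} $(FFLAGS) $(FPPFLAGS) -o $(EXENAME) $(OBJS) ${PETSC_LIB}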


Satish


On Sun, 7 May 2023, Danyang Su wrote:


> Hi Satish,
> 
> Sorry, this is a typo when copying to the email. I use FPPFLAGS in the makefile. 
> Not sure why this occurs.
> 
> Actually, not only does the preprocessor fail, PETSc initialization does not work 
> either. Attached is a very simple Fortran code and below are the test results. 
> Looks like PETSc is not properly installed. I am working on macOS 
> Monterey version 12.5 (Intel Xeon W processor).
> 
> Compiled using petsc-3.18
> (base) ➜ petsc-dev-fppflags mpiexec -n 4 ./petsc_fppflags
> compiled by STANDARD_FORTRAN compiler
> called by rank 0
> called by rank 1
> called by rank 2
> called by rank 3
> 
> compiled using petsc-dev
> (base) ➜ petsc-dev-fppflags mpiexec -n 4 ./petsc_fppflags
> called by rank 2
> called by rank 2
> called by rank 2
> called by rank 2
> 
> Thanks,
> 
> Danyang
> 
> On 2023-05-06, 10:22 PM, "Satish Balay" wrote:
> 
> 
> On Sat, 6 May 2023, Danyang Su wrote:
> 
> 
> > Hi All,
> > 
> > 
> > 
> > My code has some FPP directives. It works fine in PETSc 3.18 and earlier versions, but 
> > stops working in the latest PETSc-dev. For example, the following FPP macro 
> > STANDARD_FORTRAN is not recognized. 
> > 
> > 
> > 
> > #ifdef STANDARD_FORTRAN
> > 
> > 1 format(15x,1000a15)
> > 
> > 2 format(1pe15.6e3,1000(1pe15.6e3))
> > 
> > #else
> > 
> > 1 format(15x,a15) 
> > 
> > 2 format(1pe15.6e3,(1pe15.6e3))
> > 
> > #endif
> > 
> > 
> > 
> > In the makefile, I define the preprocessor as PPFLAGS.
> > 
> > 
> > 
> > PPFLAGS := -DLINUX -DRELEASE -DRELEASE_X64 -DSTANDARD_FORTRAN
> 
> 
> Shouldn't this be FPPFLAGS?
> 
> 
> 
> 
> Can you send us a simple test case [with the makefile] that we can try to 
> demonstrate this problem?
> 
> 
> Satish
> 
> 
> > 
> > …
> > 
> > exe: $(OBJS) chkopts
> > 
> > -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS) -o $(EXENAME) $(OBJS) 
> > ${PETSC_LIB} ${LIS_LIB} ${DLIB} ${SLIB}
> > 
> > 
> > 
> > Any idea on this problem?
> > 
> > 
> > 
> > All the best,
> > 
> > 
> > 
> > 
> > 
> > 
> 
> 
> 
>