[petsc-users] Question about changing time step during calculation

2019-11-17 Thread Yingjie Wu via petsc-users
Dear Petsc developers
Hi,
Recently I have been trying to use TS to solve time-dependent nonlinear PDEs. In
my program, the next time step depends on the results of the previous time step.
I want to add this control in TSMonitor() to change the time step length during
the calculation. I referred to the user guide but did not find what I wanted.
Please give me some advice.
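
A minimal sketch of one way to do this, assuming the standard TS API
(TSMonitorSet and TSSetTimeStep are real calls; the step-selection rule and the
names MyMonitor/AppCtx are hypothetical). Note that TSAdapt (-ts_adapt_type) is
PETSc's built-in adaptive-step mechanism and may be preferable; this only
illustrates setting the next step from a monitor:

#include <petscts.h>

typedef struct { PetscReal dt_min, dt_max; } AppCtx;   /* hypothetical user context */

/* Called after every accepted step; chooses the next dt from the solution. */
static PetscErrorCode MyMonitor(TS ts, PetscInt step, PetscReal time, Vec U, void *ctx)
{
  AppCtx        *user = (AppCtx *)ctx;
  PetscReal      unorm, dt;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecNorm(U, NORM_INFINITY, &unorm);CHKERRQ(ierr);
  dt   = PetscMax(user->dt_min, PetscMin(user->dt_max, 1.0/(1.0 + unorm))); /* hypothetical rule */
  ierr = TSSetTimeStep(ts, dt);CHKERRQ(ierr);   /* the next step will use this dt */
  PetscFunctionReturn(0);
}

/* Registration, e.g. in main():  TSMonitorSet(ts, MyMonitor, &user, NULL); */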

Thanks,
Yingjie


[petsc-users] Problem about Scaling

2019-09-24 Thread Yingjie Wu via petsc-users
Respected PETSc developers
Hi,
I am currently using SNES to solve some nonlinear PDEs. The model is a
two-dimensional X-Y geometry. Because the magnitudes of the different physical
variables differ greatly, it is difficult to find a good direction in the
Krylov subspace, and the residual decreases very slowly or does not converge at
all. I think my PDEs need scaling. I need some help with the following
questions.

1. I use -snes_mf_operator, so instead of providing the Jacobian matrix I only
set up an approximate Jacobian matrix for preconditioning. For my model, do I
just need to scale the residuals to the same magnitude? Do I also need to
modify the preconditioning matrix?
2. I have seen some articles referring to non-dimensionalization. I don't know
how to implement this method in my program or how difficult it would be.
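
A minimal sketch of row scaling, under the assumption that a Vec (here called
scale, a hypothetical name) holds one over a characteristic magnitude for each
degree of freedom. Scaling the residual inside FormFunction scales the
matrix-free operator consistently on the left, so the rows of the
preconditioning matrix should then be scaled the same way:

/* inside FormFunction, after the residual F has been computed (hedged sketch) */
ierr = VecPointwiseMult(F, F, scale);CHKERRQ(ierr);        /* F_i <- F_i * scale_i   */

/* inside FormJacobian, after the preconditioning matrix Pmat is assembled */
ierr = MatDiagonalScale(Pmat, scale, NULL);CHKERRQ(ierr);  /* scale the rows of Pmat */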

Thanks,
Yingjie


Re: [petsc-users] Problems about GMRES restart and Scaling

2019-03-21 Thread Yingjie Wu via petsc-users
Thanks for all the replies.
The model I simulate is a thermal model that contains multiple physical fields
(e.g. temperature, pressure, velocity). In the PDEs these variables are
multiplied by physical parameters, which are in turn functions of the variables
(e.g. density is a function of pressure and temperature). Because these
parameter functions are complex, we cannot construct the Jacobian matrix
explicitly, so I use -snes_mf_operator.

My preconditioner treats these physical parameters as constants. At the
beginning of each nonlinear (SNES) step, the preconditioning matrix is rebuilt
from the output of the previous nonlinear step (i.e. the physical parameters
are updated).

After setting a large KSP restart length, the KSP converges in about 60
iterations (ksp_rtol = 1.e-5).

I have a feeling that my initial values are too large and that this causes the
phenomenon.

snes/ex19 is actually a lot like my problem: running it with -da_grid_x 200
-da_grid_y 200 -snes_mf also shows a residual rise at KSP step 1290, but not
all examples produce this phenomenon.


Thanks, Yingjie

Smith, Barry F. wrote on Thu, Mar 21, 2019 at 1:18 AM:

>
>
> > On Mar 20, 2019, at 5:52 AM, Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Dear PETSc developers:
> > Hi,
> > Recently I have been using PETSc to solve nonlinear PDEs for thermodynamic
> > problems. In the process I observed the following two phenomena, and I hope
> > to get some help and suggestions.
> >
> > 1. Because my problem involves many physical parameters computed by a series
> > of functions, I cannot construct the Jacobian matrix analytically, so I use
> > -snes_mf_operator and supply an approximate Jacobian matrix as a
> > preconditioner. Because of the large dimension of the problem and the
> > differing magnitudes of the physical variables involved, the linear residual
> > increases at each restart (by default every 30th linear step). The problem
> > can be worked around by setting a large restart length. What is the reason
> > for this phenomenon, and what background or articles should I study to
> > understand it?
>
>I've seen this behavior. I think in your case it is likely the
> -snes_mf_operator is not really producing an "accurate enough"
> Jacobian-Vector product (and the "solution" being generated by GMRES may be
> garbage). Run with -ksp_monitor_true_residual
>
>If your residual function has if () statements in it or other very
> sharp changes (discontinuities) then it may not even have a true Jacobian
> at the locations it is being evaluated at.  In the sense that the
> "Jacobian" you are applying via finite differences is not a linear operator
> and hence GMRES will fail on it.
>
> What are you using for a preconditioner? And roughly how many KSP
> iterations are being used?
>
>Barry
>
> >
> >
> > 2. My model has many physical fields (the variables are discretized with
> > the finite difference method), and the magnitudes of the variables vary
> > greatly. Is there a scaling interface or function in PETSc?
> >
> > Thanks,
> > Yingjie
>
>
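
For reference, a sketch of the one-sided finite difference that a matrix-free
operator (-snes_mf / -snes_mf_operator) applies, written out with explicit Vec
operations; u, a, w, Fu, Fw, Ja are hypothetical work vectors and h stands for
the differencing parameter that PETSc normally chooses automatically. When F
contains branches or other non-smoothness, it is this quotient that stops
behaving like a linear operator:

ierr = VecWAXPY(w, h, a, u);CHKERRQ(ierr);              /* w  = u + h*a    */
ierr = SNESComputeFunction(snes, w, Fw);CHKERRQ(ierr);  /* Fw = F(u + h*a) */
ierr = SNESComputeFunction(snes, u, Fu);CHKERRQ(ierr);  /* Fu = F(u)       */
ierr = VecWAXPY(Ja, -1.0, Fu, Fw);CHKERRQ(ierr);        /* Ja = Fw - Fu    */
ierr = VecScale(Ja, 1.0/h);CHKERRQ(ierr);               /* Ja ~ J(u)*a     */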


Re: [petsc-users] Problems about GMRES restart and Scaling

2019-03-20 Thread Yingjie Wu via petsc-users
Thank you very much for your reply.
My statement may not have been very clear: I want to know why the linear
residual increases at the GMRES restart. I think my residual evaluation
function is fine, because after setting a large GMRES restart length the
results are in line with expectations.
Thanks,
Yingjie

Matthew Knepley wrote on Wed, Mar 20, 2019 at 8:00 PM:

> On Wed, Mar 20, 2019 at 6:53 AM Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Dear PETSc developers:
>> Hi,
>> Recently I have been using PETSc to solve nonlinear PDEs for thermodynamic
>> problems. In the process I observed the following two phenomena, and I hope
>> to get some help and suggestions.
>>
>> 1. Because my problem involves many physical parameters computed by a series
>> of functions, I cannot construct the Jacobian matrix analytically, so I use
>> -snes_mf_operator and supply an approximate Jacobian matrix as a
>> preconditioner. Because of the large dimension of the problem and the
>> differing magnitudes of the physical variables involved, the linear residual
>> increases at each restart (by default every 30th linear step). The problem
>> can be worked around by setting a large restart length. What is the reason
>> for this phenomenon, and what background or articles should I study to
>> understand it?
>>
>
> Make sure you non-dimensionalize the problem first, so that any scale
> differences are real and not the result of units.
>
>
>> 2. My model has many physical fields (the variables are discretized with
>> the finite difference method), and the magnitudes of the variables vary
>> greatly. Is there a scaling interface or function in PETSc?
>>
>
> That is what Jacobi does.
>
>  Thanks,
>
> Matt
>
>
>> Thanks,
>> Yingjie
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>


[petsc-users] Problems about SNES

2019-01-16 Thread Yingjie Wu via petsc-users
Dear PETSc developers:
Hi,
While testing my program I ran into some basic questions about SNES that I had
overlooked. Please help me answer them.
1. Because my program uses -snes_mf_operator, there is no assembled Jacobian
matrix. Linear and nonlinear residuals are different in PETSc: the linear
residual is r_linear = J*δx - f(x). Since I don't have a Jacobian matrix, I
don't know how PETSc computes the relative residuals of the linear steps. Is
the finite-difference matrix-vector product used when computing these
residuals?
2. The user's manual gives a brief introduction to the inexact Newton method,
and I am very interested in using it. How do I use this method in PETSc?
3. The default line search used by SNES in PETSc is bt, which often fails while
I am debugging my program. I don't know much about line searches and would like
to know why it fails. How can I learn more about this?
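
Hedged sketches for questions 2 and 3, assuming the standard SNES API (the
calls and option names below are real PETSc ones, but whether they suit this
particular problem is not claimed). Inexact Newton is usually enabled through
the Eisenstat-Walker forcing terms, and the line search type can be switched
away from bt:

/* Question 2: inexact Newton via Eisenstat-Walker forcing
   (equivalently -snes_ksp_ew on the command line). */
ierr = SNESKSPSetUseEW(snes, PETSC_TRUE);CHKERRQ(ierr);

/* Question 3: try a different line search, e.g. basic (full Newton step)
   (equivalently -snes_linesearch_type basic). */
SNESLineSearch ls;
ierr = SNESGetLineSearch(snes, &ls);CHKERRQ(ierr);
ierr = SNESLineSearchSetType(ls, SNESLINESEARCHBASIC);CHKERRQ(ierr);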

Thanks,
Yingjie


Re: [petsc-users] Problems about Picard and NonlinearGS

2019-01-04 Thread Yingjie Wu via petsc-users
: Floating point exception
[0]PETSC ERROR: Vec entry at local location 0 is not-a-number or infinite
at beginning of function: Parameter number 2
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.1, Sep, 26, 2018
[0]PETSC ERROR: ./ex19 on a arch-linux2-c-debug named yjwu-XPS-8910 by yjwu
Fri Jan  4 09:05:10 2019
[1]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: Floating point exception
[1]PETSC ERROR: Vec entry at local location 0 is not-a-number or infinite
at beginning of function: Parameter number 2
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.10.1, Sep, 26, 2018
[1]PETSC ERROR: ./ex19 on a arch-linux2-c-debug named yjwu-XPS-8910 by yjwu
Fri Jan  4 09:05:10 2019
[1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --download-mpich --download-fblaslapack
[1]PETSC ERROR: #1 VecValidValues() line 26 in
/home/yjwu/petsc-3.10.1/src/vec/vec/interface/rvector.c
[1]PETSC ERROR: [0]PETSC ERROR: Configure options --with-cc=gcc
--with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack
[0]PETSC ERROR: #1 VecValidValues() line 26 in
/home/yjwu/petsc-3.10.1/src/vec/vec/interface/rvector.c
[0]PETSC ERROR: #2 SNESComputeFunction() line 2234 in
/home/yjwu/petsc-3.10.1/src/snes/interface/snes.c
[0]PETSC ERROR: #3 SNESLineSearchApply_CP() line 48 in
/home/yjwu/petsc-3.10.1/src/snes/linesearch/impls/cp/linesearchcp.c
[0]PETSC ERROR: #2 SNESComputeFunction() line 2234 in
/home/yjwu/petsc-3.10.1/src/snes/interface/snes.c
[1]PETSC ERROR: #3 SNESLineSearchApply_CP() line 48 in
/home/yjwu/petsc-3.10.1/src/snes/linesearch/impls/cp/linesearchcp.c
[1]PETSC ERROR: #4 SNESLineSearchApply() line 648 in
/home/yjwu/petsc-3.10.1/src/snes/linesearch/interface/linesearch.c
[1]PETSC ERROR: #4 SNESLineSearchApply() line 648 in
/home/yjwu/petsc-3.10.1/src/snes/linesearch/interface/linesearch.c
[0]PETSC ERROR: #5 SNESSolve_QN() line 403 in
/home/yjwu/petsc-3.10.1/src/snes/impls/qn/qn.c
[0]PETSC ERROR: #5 SNESSolve_QN() line 403 in
/home/yjwu/petsc-3.10.1/src/snes/impls/qn/qn.c
[1]PETSC ERROR: #6 SNESSolve() line 4396 in
/home/yjwu/petsc-3.10.1/src/snes/interface/snes.c
[1]PETSC ERROR: #7 main() line 161 in
/home/yjwu/petsc-3.10.1/src/snes/examples/tutorials/ex19.c
#6 SNESSolve() line 4396 in
/home/yjwu/petsc-3.10.1/src/snes/interface/snes.c
[0]PETSC ERROR: #7 main() line 161 in
/home/yjwu/petsc-3.10.1/src/snes/examples/tutorials/ex19.c
[0]PETSC ERROR: PETSc Option Table entries:
[1]PETSC ERROR: PETSc Option Table entries:
[1]PETSC ERROR: -snes_qn_restart_type periodic
[0]PETSC ERROR: -snes_qn_restart_type periodic
[0]PETSC ERROR: -snes_qn_scale_type jacobian
[0]PETSC ERROR: -snes_type qn
[1]PETSC ERROR: -snes_qn_scale_type jacobian
[1]PETSC ERROR: -snes_type qn
[0]PETSC ERROR: -snes_view
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
[1]PETSC ERROR: -snes_view
[1]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 72) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 72) - process 1

I am very interested in the quasi-Newton method, though I may not understand
it well at the moment.
I look forward to your reply.

Thanks,
Yingjie


Matthew Knepley wrote on Thu, Jan 3, 2019 at 8:36 AM:

> On Thu, Jan 3, 2019 at 7:36 AM Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Thanks for your reply.
>> I read the article you provided. This is my first contact with the
>> quasi-Newton method.
>> I have some questions:
>> 1. From the algorithmic point of view, the quasi-Newton method does not
>> require solving linear equations, so why is KSP used?
>>
>
> It is solving the equations, but we know the analytical answer. KSP is our
> abstraction for a linear equation solver, so we also use it in this case.
>
>
>> 2. Where and how is a preconditioner used in the quasi-Newton method?
>>
>
> You do not need a PC here. Note that this is only going to work well if
> you have a good initial guess for the inverse of your Jacobian. The
> optimization people who invented it have that (it is the identity). In
> Jed's paper, they use a V-cycle as the good initial guess.
>
>   Thanks,
>
> Matt
>
>
>> Thanks,
>> Yingjie
>>
>> Jed Brown wrote on Thu, Dec 27, 2018 at 10:11 AM:
>>
>>> Yingjie Wu via petsc-users  writes:
>>>
>>> > In my opinion, the difficulty in constructing my Jacobian matrix is the
>>> > complex coefficients (e.g. thermal conductivity λ, density).
>>> > For 

Re: [petsc-users] Problems about Picard and NonlinearGS

2018-12-26 Thread Yingjie Wu via petsc-users
Thank you very much for your previous reply.
I am interested in the Picard and NonlinearGS methods because I solve my
problem with the snes_mf approach plus a preconditioning matrix. The quality of
the preconditioning matrix and of the initial values seriously affects the
convergence of my problem. Because the preconditioning matrix is poor and I
cannot provide a reasonable initial value, the convergence is not good: a
high-level ILU (ILU(20) in my case) is needed to reduce the linear residual to
ksp_rtol = 1e-2. In the future the program may become more complex and the
convergence may get worse. Picard and NonlinearGS, however, are two methods
that guarantee convergence. Although their convergence rate is very slow, their
results can be used to provide a reasonable initial value for my program. So I
need a solver similar to Picard or a Gauss-Seidel-type iteration, which would
be very useful for providing an initial guess or for comparison with the
original program.
I tried the Picard method, which splits the original residual function
SNESComputeFunction F(x) = A(x)x - b(x) into two parts: the right-hand side
b(x) goes to SNESPicardComputeFunction, and A(x) goes to
SNESPicardComputeJacobian (with snes_mf I use A(x) as the preconditioning
matrix). Because A(x) only includes the relations within a single physical
field and does not account for coupling between fields, it is equivalent to the
diagonal block part of the Jacobian matrix J.
I really need some advice on my question.
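
A minimal sketch of the SNESSetPicard() setup described above (SNESSetPicard is
a real call; FormRHS and FormAmat are hypothetical user callbacks that compute
b(x) and assemble A(x)):

extern PetscErrorCode FormRHS(SNES, Vec, Vec, void *);       /* fills b(x)          */
extern PetscErrorCode FormAmat(SNES, Vec, Mat, Mat, void *); /* fills A(x) and Pmat */

ierr = SNESSetPicard(snes, r, FormRHS, A, A, FormAmat, &user);CHKERRQ(ierr);
/* With -snes_mf_operator the assembled A(x) is used only to build the
   preconditioner, while the operator itself is applied matrix-free. */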

Thanks,
Yingjie

Smith, Barry F. wrote on Wed, Dec 26, 2018 at 4:23 PM:

>
>    Try with -snes_mf_operator; the manual page of SNESSetPicard() indicates
> that this runs Newton's method, using A(x^{n}) to construct the
> preconditioner.
>
> > On Dec 26, 2018, at 7:48 AM, Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Dear Petsc developers:
> > Hi,
> > 1. I tried to use the Picard solver in PETSc, but the program didn't
> > converge. My program is still a thermal program that contains multiple
> > physical fields and is a PDE problem. The error message is below. The
> > reason I use Picard is that it guarantees convergence (though slowly and
> > expensively). I followed ex15.c, but I don't use a DM to organize the
> > solution vector, so I call SNESSetPicard().
> > 0 SNES Function norm 2.91302e+08
> > 0 KSP Residual norm 5.79907e+08
> > 1 KSP Residual norm 1.46843e-05
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   1 SNES Function norm 2.891e+08
> > 0 KSP Residual norm 5.5989e+08
> > 1 KSP Residual norm 4.21314e-06
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   2 SNES Function norm 2.78289e+08
> > 0 KSP Residual norm 5.53553e+08
> > 1 KSP Residual norm 2.04076e-05
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   3 SNES Function norm 2.77833e+08
> > 0 KSP Residual norm 5.52907e+08
> > 1 KSP Residual norm 2.09919e-05
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> >   4 SNES Function norm 2.77821e+08
> > 0 KSP Residual norm 5.52708e+08
> > 1 KSP Residual norm 2.08677e-05
> >   Linear solve converged due to CONVERGED_RTOL iterations 1
> > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 4
> > SNES Object: 1 MPI processes
> >   type: newtonls
> >   maximum iterations=50, maximum function evaluations=1
> >   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
> >   total number of linear solver iterations=5
> >   total number of function evaluations=34
> >   norm schedule ALWAYS
> >   SNESLineSearch Object: 1 MPI processes
> > type: bt
> >   interpolation: cubic
> >   alpha=1.00e-04
> > maxstep=1.00e+08, minlambda=1.00e-12
> > tolerances: relative=1.00e-08, absolute=1.00e-15,
> lambda=1.00e-08
> > maximum iterations=40
> >   KSP Object: 1 MPI processes
> > type: gmres
> >   restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> >   happy breakdown tolerance 1e-30
> > maximum iterations=1, initial guess is zero
> > tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> > left preconditioning
> > using PRECONDITIONED norm type for convergence test
> >   PC Object: 1 MPI processes
> > type: lu
> >   out-of-place factorization
> >   tolerance for zero pivot 2.22045e-14
> >   matrix ordering: nd
> >   factor fill ratio given 5., needed 5.48356
> > Factored matrix follows:
> >   Mat O

[petsc-users] Problems about Picard and NonlinearGS

2018-12-26 Thread Yingjie Wu via petsc-users
Dear Petsc developers:
Hi,
1. I tried to use the Picard solver in PETSc, but the program didn't converge.
My program is still a thermal program that contains multiple physical fields
and is a PDE problem. The error message is below. The reason I use Picard is
that it guarantees convergence (though slowly and expensively). I followed
ex15.c, but I don't use a DM to organize the solution vector, so I call
SNESSetPicard().
0 SNES Function norm 2.91302e+08
0 KSP Residual norm 5.79907e+08
1 KSP Residual norm 1.46843e-05
  Linear solve converged due to CONVERGED_RTOL iterations 1
  1 SNES Function norm 2.891e+08
0 KSP Residual norm 5.5989e+08
1 KSP Residual norm 4.21314e-06
  Linear solve converged due to CONVERGED_RTOL iterations 1
  2 SNES Function norm 2.78289e+08
0 KSP Residual norm 5.53553e+08
1 KSP Residual norm 2.04076e-05
  Linear solve converged due to CONVERGED_RTOL iterations 1
  3 SNES Function norm 2.77833e+08
0 KSP Residual norm 5.52907e+08
1 KSP Residual norm 2.09919e-05
  Linear solve converged due to CONVERGED_RTOL iterations 1
  4 SNES Function norm 2.77821e+08
0 KSP Residual norm 5.52708e+08
1 KSP Residual norm 2.08677e-05
  Linear solve converged due to CONVERGED_RTOL iterations 1
Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 4
SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=1
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=5
  total number of function evaluations=34
  norm schedule ALWAYS
  SNESLineSearch Object: 1 MPI processes
type: bt
  interpolation: cubic
  alpha=1.00e-04
maxstep=1.00e+08, minlambda=1.00e-12
tolerances: relative=1.00e-08, absolute=1.00e-15,
lambda=1.00e-08
maximum iterations=40
  KSP Object: 1 MPI processes
type: gmres
  restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
  happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
left preconditioning
using PRECONDITIONED norm type for convergence test
  PC Object: 1 MPI processes
type: lu
  out-of-place factorization
  tolerance for zero pivot 2.22045e-14
  matrix ordering: nd
  factor fill ratio given 5., needed 5.48356
Factored matrix follows:
  Mat Object: 1 MPI processes
type: seqaij
rows=11368, cols=11368
package used to perform factorization: petsc
total: nonzeros=234554, allocated nonzeros=234554
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
  type: seqaij
  rows=11368, cols=11368
  total: nonzeros=42774, allocated nonzeros=56840
  total number of mallocs used during MatSetValues calls =0
not using I-node routines
Are there any other examples of the Picard method? I'm very interested in it.

2. I found that ex15.c and ex19.c use NonlinearGS. I know it is an iterative
method, but I don't know how to use it in the above examples. As far as I know,
NonlinearGS is an iterative method that serves as an alternative to Krylov
subspace methods, so it should not be required if subspace methods are used.
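
A minimal sketch of how a user-supplied nonlinear Gauss-Seidel sweep is hooked
into SNES (SNESSetNGS and the ngs solver type are real; MyNGS is a hypothetical
callback performing one sweep that updates X in place):

extern PetscErrorCode MyNGS(SNES snes, Vec X, Vec B, void *ctx); /* one GS sweep */

ierr = SNESSetNGS(snes, MyNGS, &user);CHKERRQ(ierr);
ierr = SNESSetType(snes, SNESNGS);CHKERRQ(ierr);  /* or -snes_type ngs; also usable
                                                     as a smoother in -snes_type fas */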

Thanks,
Yingjie


[petsc-users] Problems about parallel version in SNES_MF

2018-12-25 Thread Yingjie Wu via petsc-users
Dear Petsc developers:
Hi,
I am currently using PETSc to solve a two-dimensional system of nonlinear
PDEs, a thermal problem that includes a pressure field, a temperature field,
and a velocity field. Since it is not easy to construct the Jacobian matrix
explicitly, I adopt the snes_mf approach and provide a preconditioning matrix.
Now I want to adapt the program to a parallel version.
The main steps of the current procedure are as follows (a code sketch is given
after the list):

   - A solution vector U is constructed. Its dimension is the sum of the mesh
   sizes of the physical fields.
   - FormFunction is provided to compute the residual.
   - The preconditioning matrix Pmat is assembled in FormJacobian, but Amat
   (the Jacobian matrix) is not assembled.
   - The computation is run with -snes_mf_operator.
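
A hedged sketch of this setup (the calls are standard PETSc; the names r, Pmat,
FormFunction, FormJacobian, and user follow the description above and are
otherwise assumptions):

ierr = SNESSetFunction(snes, r, FormFunction, &user);CHKERRQ(ierr);
/* Pass Pmat in both matrix slots; with -snes_mf_operator PETSc replaces the
   operator slot by a finite-difference (MFFD) operator at solve time. */
ierr = SNESSetJacobian(snes, Pmat, Pmat, FormJacobian, &user);CHKERRQ(ierr);
ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);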

I have the following questions about developing the parallel version:

   1. Suppose one processor is assigned to each physical field, e.g. the
   pressure field P, the velocity field V, and the temperature field T each
   belong to their own processor. Because the problem is nonlinear, the
   temperature T may be needed when computing the pressure P residual (in
   FormFunction). How do I transfer that information?
   2. The snes_mf approach needs a good preconditioning matrix to ensure
   convergence. How do I use a preconditioning matrix in parallel? As far as I
   know, BJACOBI can run in parallel; how can it be used with the snes_mf
   approach?

Because there are few snes_mf examples with parallel computation, I need some
advice from you.
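
Regarding question 1, a hedged sketch of the layout PETSc examples such as
snes/ex19 use: all fields are stored interlaced in a single DMDA (dof = number
of fields) and the domain, rather than the fields, is split across processes;
ghost values of every field then arrive in FormFunction through
DMGlobalToLocal. The grid size, field count, and variable names below are
assumptions:

DM  da;
Vec U, Ulocal;
ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                    DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                    3 /* P, V, T */, 1 /* stencil width */, NULL, NULL, &da);CHKERRQ(ierr);
ierr = DMSetFromOptions(da);CHKERRQ(ierr);
ierr = DMSetUp(da);CHKERRQ(ierr);
ierr = DMCreateGlobalVector(da, &U);CHKERRQ(ierr);
ierr = DMCreateLocalVector(da, &Ulocal);CHKERRQ(ierr);
/* Inside FormFunction: bring the neighboring (ghost) values of all fields,
   including T needed by the P residual, onto this process. */
ierr = DMGlobalToLocalBegin(da, U, INSERT_VALUES, Ulocal);CHKERRQ(ierr);
ierr = DMGlobalToLocalEnd(da, U, INSERT_VALUES, Ulocal);CHKERRQ(ierr);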

Thanks,
Yingjie


[petsc-users] How to fix max linear steps in SNES

2018-11-08 Thread Yingjie Wu via petsc-users
Dear Petsc developer:
Hi,
I have recently been debugging my program, a two-dimensional nonlinear PDE
problem solved with SNES, and I find that the KSP residual drops slowly. I want
to fix the number of linear iterations, because I cannot choose a suitable
ksp_rtol. I use -ksp_max_it 200 to cap the iterations of each KSP solve, but
the program seems to stop in the first nonlinear step:

Linear solve did not converge due to DIVERGED_ITS iterations 200
Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0

How can I cap the number of linear iterations without the program stopping
before the nonlinear iteration has converged?
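
A hedged sketch of one way to keep SNES running after a KSP solve stops at its
iteration cap with DIVERGED_ITS (SNESSetMaxLinearSolveFailures and the option
-snes_max_linear_solve_fail are real; the value 50 is an arbitrary example):

ierr = SNESSetMaxLinearSolveFailures(snes, 50);CHKERRQ(ierr);
/* equivalently: -snes_max_linear_solve_fail 50 on the command line */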

Thanks,
Yingjie


Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Yingjie Wu via petsc-users
Thank you very much for your reply.
My equation is a neutron diffusion eigenvalue problem, which is why I use
DMComposite: besides the physical fields there is a single non-physical
variable, the eigenvalue. I am not very familiar with FieldSplit; I will study
it first.
The problem does not seem to be DMComposite itself: the error is reported
because the diagonal of the preconditioning matrix contains zero entries.
What I need is to divide the preconditioning matrix into sub-matrices, because
I have neutrons in multiple energy groups (each is a physical field). I want to
fill only the diagonal block matrices, preferably using MatSetValuesStencil
(which simplifies the assembly of the five-diagonal matrices).
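
A hedged sketch related to the zero-pivot error: the 1x1 block belonging to the
eigenvalue unknown must get a nonzero diagonal entry in the preconditioning
matrix, otherwise ILU/LU factorization fails with FACTOR_NUMERIC_ZEROPIVOT.
Pmat and eig_row (the global index of the eigenvalue dof) are hypothetical
names:

ierr = MatSetValue(Pmat, eig_row, eig_row, 1.0, INSERT_VALUES);CHKERRQ(ierr);
ierr = MatAssemblyBegin(Pmat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(Pmat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);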

Thanks,
Yingjie

Mark Adams wrote on Mon, Nov 5, 2018 at 11:22 PM:

> DMComposite is not very mature (the last time I checked, I don't know of
> anyone having worked on it recently), and it is probably not what you want
> anyway. FieldSplit is most likely what you want.
>
> What are your equations and discretization? eg, Stokes with cell centered
> pressure? There are probably examples that are close to what you want and
> it would not be hard to move your code over.
>
> Mark
>
> On Mon, Nov 5, 2018 at 10:00 AM Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Dear Petsc developer:
>> Hi,
>> I have recently been working on the preconditioner for my program, and some
>> problems have arisen. Please help me solve them.
>> At present, I have written a program that solves a system of nonlinear
>> equations with the matrix-free method, and I want to add a preconditioning
>> matrix to it.
>> I use a DMComposite object, which stores two sub-DM objects and a single
>> value (two physical field variables and one scalar variable). I want to use
>> MatGetLocalSubMatrix to fill the sub preconditioning matrix for each physical
>> field. Because my DM objects are two-dimensional, I use MatSetValuesStencil()
>> to fill the sub-matrices.
>> For now I just want to fill in an identity matrix (over the global vector) as
>> the preconditioning matrix for the matrix-free method (only to test whether
>> it works; the identity matrix has no preconditioning effect). But the run
>> failed.
>>
>>yjwu@yjwu-XPS-8910:~/petsc-3.10.1/src/snes/examples/tutorials$
>> mpiexec -n 1 ./ex216 -f wu-readtwogroups -snes_mf_operator -snes_view
>> -snes_converged_reason -snes_monitor -ksp_converged_reason
>> -ksp_monitor_true_residual
>>
>>   0 SNES Function norm 8.235090086536e-02
>> iter = 0, SNES Function norm 0.0823509
>> iter = 0, Keff === 1.
>>   Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations
>> 0
>>  PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
>> SNES Object: 1 MPI processes
>>   type: newtonls
>>   maximum iterations=50, maximum function evaluations=1
>>   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
>>   total number of linear solver iterations=0
>>   total number of function evaluations=1
>>   norm schedule ALWAYS
>>   SNESLineSearch Object: 1 MPI processes
>> type: bt
>>   interpolation: cubic
>>   alpha=1.00e-04
>> maxstep=1.00e+08, minlambda=1.00e-12
>> tolerances: relative=1.00e-08, absolute=1.00e-15,
>> lambda=1.00e-08
>> maximum iterations=40
>>   KSP Object: 1 MPI processes
>> type: gmres
>>   restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>>   happy breakdown tolerance 1e-30
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>>   PC Object: 1 MPI processes
>> type: ilu
>>   out-of-place factorization
>>   0 levels of fill
>>   tolerance for zero pivot 2.22045e-14
>>   matrix ordering: natural
>>   factor fill ratio given 1., needed 1.
>> Factored matrix follows:
>>   Mat Object: 1 MPI processes
>> type: seqaij
>> rows=961, cols=961
>> package used to perform factorization: petsc
>> total: nonzeros=4625, allocated nonzeros=4625
>> total number of mallocs used during MatSetValues calls =0
>>   not using I-node routines
>> linear system matrix followed by preconditioner matrix:
>> Mat Object: 1 MPI processes
>