Hi Dr. Bangerth,

I think I have found the solution: I add the -ksp_norm_type unpreconditioned flag when running the simulation. My suggestion is that it would be better if this flag were turned on automatically when using the PETSc CG solver; otherwise, the convergence test is not consistent between the built-in CG and the PETSc CG.
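For reference, here is a minimal sketch (not verified) of how I imagine the same setting could be applied from the code instead of the command line, assuming the deal.II wrapper reads PETSc's options database when it sets up the KSP, which the command-line experiment suggests it does. The function and variable names and the block-Jacobi preconditioner are just placeholders, not my actual code:

#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_control.h>

#include <petscsys.h>

void solve_with_true_residual(dealii::PETScWrappers::MPI::SparseMatrix &system_matrix,
                              dealii::PETScWrappers::MPI::Vector       &solution,
                              const dealii::PETScWrappers::MPI::Vector &system_rhs)
{
  // Equivalent to passing -ksp_norm_type unpreconditioned on the command
  // line: put the option into PETSc's global options database before the
  // solver creates its KSP object, so CG tests convergence on the true
  // (unpreconditioned) residual, like deal.II's built-in CG does.
  PetscOptionsSetValue(nullptr, "-ksp_norm_type", "unpreconditioned");

  // Placeholder tolerance and iteration cap.
  dealii::SolverControl solver_control(1000, 1e-10);

  dealii::PETScWrappers::SolverCG                cg(solver_control);
  dealii::PETScWrappers::PreconditionBlockJacobi preconditioner(system_matrix);

  cg.solve(system_matrix, solution, system_rhs, preconditioner);
}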
Thanks,
Yiliang Wang

On Tuesday, December 16, 2025 at 11:42:51 PM UTC-5 Yiliang Wang wrote:
> Hi Dr. Bangerth,
>
> I think I found where the issue is, but I am not sure how to fix it.
>
> Here is the code where I set up the preconditioner and the solver's
> convergence tolerance and maximum number of iterations.
> [image: Screenshot 2025-12-16 233225.png]
>
> However, if I run it with -ksp_view -pc_view, the printed info indicates
> that PETSc still uses the default settings. In particular, it is using the
> PRECONDITIONED norm for the convergence test.
> [image: Screenshot 2025-12-16 233532.png]
> I am not sure why the input of solver_control is not passed into PETSc, or
> maybe it is overwritten somewhere? Is it because the versions of PETSc and
> deal.II I am using are not compatible? I am using PETSc 3.16.6 and deal.II
> 9.7.0.
>
> I would appreciate it if you could help.
>
> Best,
> Yiliang Wang
>
>
> On Tuesday, December 9, 2025 at 2:57:07 PM UTC-5 Wolfgang Bangerth wrote:
>
> Yiliang:
>
> > 1. When I use the SI system (N-m-s), the material property values are
> > really large, as expected. For example, the Young's modulus will be 1e11
> > Pa. Somehow, the DMP code behaves very strangely in this case: the CG
> > solver finishes in 0 iterations and the solution is empty. If I change
> > the unit system to N-mm-s, the Young's modulus becomes 1e5 MPa and then
> > CG starts to behave normally. The SMP code seems to be less sensitive
> > to the choice of unit system.
>
> How do you set the stopping criterion for your solver? If you are
> solving a single equation, the choice of physical units should not
> matter because any choice scales all equations equally. In that case,
> using a *relative* tolerance as a stopping criterion should lead to a
> number of iterations that is the same for all choices of units.
>
> The situation is different if you have a system of equations (say, the
> Stokes equations), in which case you need to scale the equations to a
> common unit first.
>
>
> > 2. Although the final results of SMP and DMP are the same, the
> > computational time is different. Surprisingly, the DMP code is slower
> > than the SMP code. It is not because CG is slower in DMP; it is that
> > the DMP code somehow needs more N-R iterations than SMP.
>
> I don't actually know what SMP and DMP mean. As a consequence, I can't
> suggest why the two formulations may lead to different numbers of
> nonlinear iterations.
>
>
> > Based on the above observations, I have a feeling that there is some
> > loss of accuracy when using PETSc. Most likely it happens when we
> > transfer a Vector or Matrix between PETSc and deal.II.
>
> I don't think this is a likely reason for the discrepancy. We have been
> using the PETSc interfaces for more than 20 years by now, and I don't
> think that PETSc is more or less accurate than our own linear algebra
> classes, or that accuracy is lost in the transfer. The issue is almost
> certainly somewhere else.
>
> Best
> W.
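P.S. On the relative-tolerance point in the quoted discussion: here is a small illustrative sketch of a unit-independent stopping criterion, where the absolute tolerance handed to SolverControl is tied to the norm of the right-hand side. The names and numbers are placeholders, not my actual code:

#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_control.h>

// Sketch only: scale the stopping tolerance by ||b|| so that changing the
// unit system (N-m-s vs. N-mm-s) rescales the criterion along with the
// equations, and the iteration count stays the same.
void setup_stopping_criterion(const dealii::PETScWrappers::MPI::Vector &system_rhs)
{
  const unsigned int max_iterations = 1000;  // placeholder iteration cap
  const double       rel_tolerance  = 1e-8;  // placeholder relative tolerance

  dealii::SolverControl solver_control(max_iterations,
                                       rel_tolerance * system_rhs.l2_norm());
  (void)solver_control;  // would be handed to the CG solver here
}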
