Thanks for your valuable advice. Your patience and recognition are
highly appreciated.

I will do some reading and think it over, and will run some tests in
the coming days.

Regards.

Yu

Barry Smith <bsm...@petsc.dev> wrote on Sunday, November 7, 2021, at 11:15 PM:
>
>
>    I think you can use the following approach. Instead of imposing the 
> boundary condition as, for example,
>
>    V_1(t) = 0
>
>    you impose the equivalent
>
>     V_1(t=0) = 0   and
>
>     d V_1(t)
>     ----------    = 0
>       dt
>
>
>    Now if you do backward Euler on this you get
>
>     V_1^{n+1} - V_1^{n}
>     --------------------------       = 0
>          Dt
>
>    or in your residual form
>
>                               V_1^{n+1} - V_1^{n}
>      R_{V_1} =      --------------------------
>                                 Dt
>
>    The Jacobian of this gives a 1/Dt on the diagonal for this row. (Similarly 
> for the other problematic rows).
>
>    Now your Jacobian is nonsingular and you can solve with it without 
> difficulty.
>
>     Note that you get the same result if you think of the problem as a DAE, 
> because then the residual equation is R_{V_1} = V_1^{n+1} and its derivative 
> with respect to V_1^{n+1} is 1 (not zero), so you still end up with a nonzero 
> on the diagonal. Either way, you can keep these rows in the ODE right-hand 
> side and Jacobian, or you can eliminate them (analytically, as Hong suggested) 
> so the ODE solver never sees them; in either case the results are identical.
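>
>    In PETSc terms, a minimal sketch of how that boundary row could look 
> inside the IFunction/IJacobian callbacks (the row index 0, the callback 
> names, and the omitted error checking are only illustrative):
>
>     #include <petscts.h>
>
>     PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
>     {
>       const PetscScalar *udot;
>       PetscScalar       *f;
>
>       PetscFunctionBeginUser;
>       VecGetArrayRead(Udot, &udot);
>       VecGetArray(F, &f);
>       f[0] = udot[0];            /* R_{V_1} = d V_1/dt, replacing R_{V_1} = V_1 */
>       /* ... residuals for the remaining rows ... */
>       VecRestoreArray(F, &f);
>       VecRestoreArrayRead(Udot, &udot);
>       PetscFunctionReturn(0);
>     }
>
>     PetscErrorCode IJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal a, Mat J, Mat Jpre, void *ctx)
>     {
>       PetscFunctionBeginUser;
>       MatSetValue(Jpre, 0, 0, a, INSERT_VALUES);  /* a is the shift, 1/Dt for backward Euler */
>       /* ... entries for the remaining rows ... */
>       MatAssemblyBegin(Jpre, MAT_FINAL_ASSEMBLY);
>       MatAssemblyEnd(Jpre, MAT_FINAL_ASSEMBLY);
>       if (J != Jpre) {
>         MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);
>         MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);
>       }
>       PetscFunctionReturn(0);
>     }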
>
>    Barry
>
>
> On Nov 7, 2021, at 3:45 AM, 仓宇 <yhcy1...@gmail.com> wrote:
>
> Thanks for your patient explanation, and sorry for this late reply.
>
> I do understand the manipulation you mentioned, but this kind of manual 
> substitution will not change the numerical difficulty of the problem.
>
> In fact, before turning to PETSc, I had already tried to formulate this 
> problem as an ODE, using well-established MATLAB functions such as 'ode15s' 
> and 'ode23', but that failed to produce physically reasonable results. The 
> key difficulty lies in the treatment of the eigenvalue, denoted 
> \hat{\Lambda}_r in the PDF file; only an implicit DAE formulation can fully 
> capture its effect. This is why I finally decided to use the DAE 
> functionality in PETSc.
>
> I've excerpted a description of this problem from a monograph; I hope it 
> clearly demonstrates the numerical requirements for solving it.
> <bvp.png>
> Kee, R. J., Coltrin, M. E., Glarborg, P., & Zhu, H. (2017). Stagnation Flows. 
> In Chemically Reacting Flow (2nd ed., p. 232).
>
>
>
> Zhang, Hong <hongzh...@anl.gov> wrote on Monday, November 1, 2021, at 10:21 PM:
>>
>> You can avoid the zero diagonal elements in your case. Just remove the 
>> boundary points and \hat{\Lambda}_r from the solution vector Y (eq. (17) 
>> in your PDF file). You can calculate them separately in your RHS function 
>> (when computing the residual vector for 2 <= i <= N-1). It is not necessary 
>> to form a DAE; you can actually form an ODE instead.
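>>
>> A skeletal sketch of that layout (the context struct, the boundary values, 
>> the eigenvalue relation, and the interior stencil below are placeholders, 
>> since the actual formulas are in your PDF; error checking omitted):
>>
>>   #include <petscts.h>
>>
>>   typedef struct {                  /* hypothetical user context */
>>     PetscScalar bc_left, bc_right;  /* Dirichlet boundary data */
>>   } AppCtx;
>>
>>   /* Y holds only the interior unknowns 2 <= i <= N-1; the boundary values
>>      and \hat{\Lambda}_r are recomputed here instead of being stored in Y. */
>>   PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec Y, Vec F, void *ctx)
>>   {
>>     AppCtx            *app = (AppCtx *)ctx;
>>     const PetscScalar *y;
>>     PetscScalar       *f, y_left, y_right, lambda;
>>     PetscInt           i, n;
>>
>>     PetscFunctionBeginUser;
>>     VecGetLocalSize(Y, &n);
>>     VecGetArrayRead(Y, &y);
>>     VecGetArray(F, &f);
>>     y_left  = app->bc_left;   /* boundary values follow from the Dirichlet data */
>>     y_right = app->bc_right;
>>     lambda  = y[0];           /* placeholder for the \hat{\Lambda}_r relation */
>>     for (i = 0; i < n; i++) {
>>       PetscScalar yl = (i == 0)     ? y_left  : y[i - 1];
>>       PetscScalar yr = (i == n - 1) ? y_right : y[i + 1];
>>       f[i] = lambda * (yr - 2.0 * y[i] + yl);   /* placeholder interior stencil */
>>     }
>>     VecRestoreArray(F, &f);
>>     VecRestoreArrayRead(Y, &y);
>>     PetscFunctionReturn(0);
>>   }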
>>
>> Hong (Mr.)
>>
>> > On Oct 31, 2021, at 5:38 AM, 仓宇 <yhcy1...@gmail.com> wrote:
>> >
>> > Thanks for your suggestions and experience sharing.
>> >
>> > But it seems that you have missed my point: the kind of zero diagonal
>> > element that I described earlier is essentially unavoidable.
>> > If you are interested in this problem, you may refer to the PDF file that
>> > was attached to the previous post.
>> >
>> > Regards
>> >
>> > Yu
>> >
>> > Zhang, Hong <hongzh...@anl.gov> wrote on Friday, October 29, 2021, at 10:05 PM:
>> >>
>> >> One way to avoid the zero element in the Jacobian is to exclude the 
>> >> boundary point from the solution vector. I often do this for Dirichlet 
>> >> boundary conditions, since the value at the boundary is given directly 
>> >> and does not need to be treated as a degree of freedom.
>> >>
>> >> Hong (Mr.)
>> >>
>> >> On Oct 28, 2021, at 9:49 PM, 仓宇 <yhcy1...@gmail.com> wrote:
>> >>
>> >> Thanks for your careful inspection and thoughtful suggestions.
>> >>
>> >>>   finite differencing may put a small non-zero value in that location 
>> >>> due to numerical round-off
>> >>
>> >> I think your explanation is reasonable. This numerical round-off may be 
>> >> what avoids the pivot issue.
>> >>
>> >> The structure of my Jacobian matrix looks like this (generated with 
>> >> '-mat_view draw'):
>> >> <jac_view.png>
>> >> Analytically, the first diagonal element of the Jacobian is indeed 0, as 
>> >> its corresponding residual function is determined solely by the boundary 
>> >> condition of another variable. This seems a little weird but is 
>> >> mathematically well-posed. For the background physics, please refer to 
>> >> the attached PDF file, which explains the discretization and boundary 
>> >> conditions in detail.
>> >>
>> >> Actually, the Jacobian matrix is not singular, but I do believe this 
>> >> numerical difficulty is caused by the zero element on the diagonal.
>> >> In this regard, I've performed some trials, and it seems that several 
>> >> methods work around this pivot issue:
>> >> a) By setting '-pc_type svd', PETSc no longer fails with my hand-coded 
>> >> Jacobian, and a converged solution is obtained; efficiency is also 
>> >> preserved.
>> >> b) By setting '-pc_type none', a converged solution is also obtained, but 
>> >> it takes far too many KSP iterations per SNES step (usually hundreds), 
>> >> making the overall solution procedure very slow.
>> >>
>> >> Do you think these methods really solve this kind of pivot issue, rather 
>> >> than working by chance like the numerical round-off in the 
>> >> finite-difference Jacobian?
>> >>
>> >> Regards
>> >>
>> >> Yu Cang
>> >>
>> >> Barry Smith <bsm...@petsc.dev> wrote on Wednesday, October 27, 2021, at 9:43 PM:
>> >>>
>> >>>
>> >>>   You can run with -ksp_error_if_not_converged to get it to stop as soon 
>> >>> as a linear solve fails to help track down the exact breaking point.
>> >>>
>> >>>> The problem under consideration contains an eigenvalue to be solved for,
>> >>>> making the first diagonal element of the Jacobian matrix zero.
>> >>>> From these outputs, it seems that the PC failed to factorize, which is
>> >>>> caused by this zero diagonal element. But I'm wondering why it works
>> >>>> with the Jacobian matrix generated by finite differences?
>> >>>
>> >>>   Presumably your "exact" Jacobian puts a zero on the diagonal while the 
>> >>> finite differencing may put a small non-zero value in that location due 
>> >>> to numerical round-off. In that case even if the factorization succeeds 
>> >>> it may produce an inaccurate solution if the value on the diagonal is 
>> >>> very small.
>> >>>
>> >>>   If your matrix is singular or cannot be factored with LU then you need 
>> >>> to use a different solver for the linear system that will be robust to 
>> >>> the zero on the diagonal. What is the structure of your Jacobian? (The 
>> >>> analytic form).
>> >>>
>> >>>  Barry
>> >>>
>> >>>
>> >>>> On Oct 27, 2021, at 1:47 AM, 仓宇 <yhcy1...@gmail.com> wrote:
>> >>>>
>> >>>> Thanks for your kind reply.
>> >>>>
>> >>>> Several comparison tests have been performed. The execution output files
>> >>>> are attached; the corresponding descriptions are below.
>> >>>>
>> >>>> good.txt -- run without the hand-coded Jacobian; the solution converged;
>> >>>> options: '-ts_monitor -snes_monitor -snes_converged_reason
>> >>>> -ksp_monitor_true_residual -ksp_converged_reason';
>> >>>> jac1.txt -- run with the hand-coded Jacobian; does not converge;
>> >>>> options: '-ts_monitor -snes_monitor -snes_converged_reason
>> >>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian';
>> >>>> jac2.txt -- run with the hand-coded Jacobian; does not converge;
>> >>>> options: '-ts_monitor -snes_monitor -snes_converged_reason
>> >>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian
>> >>>> -ksp_view';
>> >>>> jac3.txt -- run with the hand-coded Jacobian; does not converge;
>> >>>> options: '-ts_monitor -snes_monitor -snes_converged_reason
>> >>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian
>> >>>> -ksp_view -ts_max_snes_failures -1';
>> >>>>
>> >>>> The problem under consideration contains an eigenvalue to be solved for,
>> >>>> making the first diagonal element of the Jacobian matrix zero.
>> >>>> From these outputs, it seems that the PC failed to factorize, which is
>> >>>> caused by this zero diagonal element. But I'm wondering why it works
>> >>>> with the Jacobian matrix generated by finite differences? Would employing
>> >>>> DMDA for the discretization be helpful?
>> >>>>
>> >>>> Regards
>> >>>>
>> >>>> Yu Cang
>> >>>>
>> >>>>> Barry Smith <bsm...@petsc.dev> wrote on Monday, October 25, 2021, at 10:50 PM:
>> >>>>>
>> >>>>>
>> >>>>> It is definitely unexpected that -snes_test_jacobian verifies the 
>> >>>>> Jacobian as matching but the solve process is completely different.
>> >>>>>
>> >>>>>  Please run with -snes_monitor -snes_converged_reason 
>> >>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian 
>> >>>>> and send all the output
>> >>>>>
>> >>>>> Barry
>> >>>>>
>> >>>>>
>> >>>>>> On Oct 25, 2021, at 9:53 AM, 仓宇 <yhcy1...@gmail.com> wrote:
>> >>>>>>
>> >>>>>> I'm using TS to solve a set of DAEs that originates from a
>> >>>>>> one-dimensional problem. The grid points are uniformly distributed.
>> >>>>>> For simplicity, DMDA is not employed for the discretization.
>> >>>>>>
>> >>>>>> At first, only the residual function is prescribed through
>> >>>>>> 'TSSetIFunction', and PETSc produces converged results. However, after
>> >>>>>> providing a hand-coded Jacobian through 'TSSetIJacobian', the internal
>> >>>>>> SNES object fails (the residual norm does not change), and TS reports
>> >>>>>> 'DIVERGED_STEP_REJECTED'.
>> >>>>>>
>> >>>>>> I have tried adding the option '-snes_test_jacobian' to see if the
>> >>>>>> hand-coded Jacobian is wrong somewhere, but it reports '||J -
>> >>>>>> Jfd||_F/||J||_F = 1.07488e-10, ||J - Jfd||_F = 2.14458e-07',
>> >>>>>> indicating that the hand-coded Jacobian is correct.
>> >>>>>>
>> >>>>>> Then, I added a monitor for the internal SNES object through
>> >>>>>> 'SNESMonitorSet', which displays the solution vector at each
>> >>>>>> iteration. It is interesting to find that, if the Jacobian is not
>> >>>>>> provided, meaning finite differences are used internally for the
>> >>>>>> Jacobian evaluation, the solution vector converges to the steady
>> >>>>>> solution and the SNES residual norm is reduced continuously. However,
>> >>>>>> as long as the Jacobian is provided, the solution vector NEVER
>> >>>>>> changes, so the solution procedure is stuck!
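>> >>>>>>
>> >>>>>> Roughly, the monitor is attached like this (simplified sketch with a
>> >>>>>> hypothetical monitor name; error checking omitted):
>> >>>>>>
>> >>>>>>   static PetscErrorCode MySNESMonitor(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
>> >>>>>>   {
>> >>>>>>     Vec X;
>> >>>>>>     PetscFunctionBeginUser;
>> >>>>>>     SNESGetSolution(snes, &X);                /* current iterate */
>> >>>>>>     PetscPrintf(PETSC_COMM_SELF, "SNES it %D, ||F|| = %g\n", it, (double)fnorm);
>> >>>>>>     VecView(X, PETSC_VIEWER_STDOUT_SELF);     /* display the solution vector */
>> >>>>>>     PetscFunctionReturn(0);
>> >>>>>>   }
>> >>>>>>
>> >>>>>>   /* after TSSetIFunction()/TSSetIJacobian(): */
>> >>>>>>   SNES snes;
>> >>>>>>   TSGetSNES(ts, &snes);
>> >>>>>>   SNESMonitorSet(snes, MySNESMonitor, NULL, NULL);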
>> >>>>>>
>> >>>>>> This is quite strange! I hope to get some advice.
>> >>>>>> PETSc version 3.14.6; the program is run in serial mode.
>> >>>>>>
>> >>>>>> Regards
>> >>>>>>
>> >>>>>> Yu Cang
>> >>>>>
>> >>>> <jac3.txt><jac2.txt><jac1.txt><good.txt>
>> >>>
>> >> <TFM.pdf>
>> >>
>> >>
>>
>
