So long as you called 
SNESSetJacobian(snes, A, A, SNESComputeJacobianDefaultColor, NULL); and the matrix A has 
the correct nonzero pattern, it should work (I assume you are not attaching a DM 
to the SNES?).
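
For reference, a minimal sketch of that setup (FormFunction, user, n, and nnz 
are illustrative placeholders, not taken from your code):

    #include <petscsnes.h>

    /* residual callback with the standard SNES signature */
    extern PetscErrorCode FormFunction(SNES, Vec, Vec, void *);

    SNES snes;
    Mat  A;
    Vec  r;
    /* preallocate A with the exact nonzero pattern of the Jacobian */
    PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 0, nnz, &A));
    PetscCall(SNESCreate(PETSC_COMM_SELF, &snes));
    PetscCall(SNESSetFunction(snes, r, FormFunction, &user));
    /* with a NULL final argument, SNES builds the MatFDColoring
       internally from the nonzero pattern of A */
    PetscCall(SNESSetJacobian(snes, A, A, SNESComputeJacobianDefaultColor, NULL));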



> On Jul 12, 2024, at 4:09 PM, Mark Lohry <[email protected]> wrote:
> 
> The root cause of this turned out to be that I reused code that I had 
> historically used for JFNK solves via the TS and SNESSolve interfaces. Here, 
> instead of calling SNESSolve, I only wanted to compute the Jacobian, which I 
> had set up to be computed via finite differences with coloring. When calling 
> SNESSolve I confirmed it does generate the expected matrix, but if instead I 
> directly call 
> 
> SNESComputeJacobian(ctx.snes_, petscsoln, ctx.JPre_, ctx.JPre_); // this is 
> computing without coloring
> 
> this appears to produce a matrix that is erroneous in some spots, due to not 
> using coloring. Is there some other way to get the Jacobian only? I tried to 
> follow similar steps to what is in the actual SNESSolve, but I seem to have 
> botched it somehow.
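> 
> For reference, one possible route to getting just the FD Jacobian outside of 
> SNESSolve is to drive the coloring machinery directly (a sketch; J, x, 
> FormFunction, and user are placeholders for the real objects):
> 
>     MatColoring   mc;
>     ISColoring    iscoloring;
>     MatFDColoring fdcoloring;
>     PetscCall(MatColoringCreate(J, &mc));
>     PetscCall(MatColoringSetType(mc, MATCOLORINGSL));
>     PetscCall(MatColoringApply(mc, &iscoloring));
>     PetscCall(MatColoringDestroy(&mc));
>     PetscCall(MatFDColoringCreate(J, iscoloring, &fdcoloring));
>     PetscCall(MatFDColoringSetFunction(fdcoloring, (PetscErrorCode (*)(void))FormFunction, &user));
>     PetscCall(MatFDColoringSetFromOptions(fdcoloring));
>     PetscCall(MatFDColoringSetUp(J, iscoloring, fdcoloring));
>     PetscCall(ISColoringDestroy(&iscoloring));
>     /* fill J by finite differences at state x, using the coloring */
>     PetscCall(MatFDColoringApply(J, fdcoloring, x, snes));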
> 
> On Sun, Apr 21, 2024 at 3:36 PM Zou, Ling <[email protected]> wrote:
>> The other symptom is the same:
>> 
>> - Using coloring, finite differencing respects the specified non-zero 
>>   pattern but gives wrong (very large) Jacobian entries J_ij.
>> - Using the dense-matrix assumption, finite differencing does not respect 
>>   the non-zero pattern determined by your numerics, which is a clear sign 
>>   of a bug in the residual function (your residual function does not 
>>   respect your numerics).
>> 
>> -Ling
>> 
>>  
>> 
>> From: petsc-users <[email protected]> on behalf of Zou, Ling via 
>> petsc-users <[email protected]>
>> Date: Sunday, April 21, 2024 at 2:28 PM
>> To: Mark Lohry <[email protected]>
>> Cc: PETSc <[email protected]>
>> Subject: Re: [petsc-users] finite difference jacobian errors when given 
>> non-constant initial condition
>> 
>> Very interesting.
>> 
>> I happened to encounter something very similar a couple of days ago, which, 
>> of course, was due to a code bug I introduced. The bug was in the residual 
>> function: I used a local vector to track ‘heat flux’, which should be zeroed 
>> out at the beginning of each residual function evaluation. I did not zero it, 
>> and I observed very similar results: the Jacobian was completely wrong, with 
>> large values (J_ij kept increasing after each iteration), and non-zero values 
>> appeared in locations that should be exactly zero. The symptom is very much 
>> like what you are seeing here, so I suspect a similar bug. (Maybe you forgot 
>> to zero the coefficients of the P1 reconstruction? Using a constant value of 
>> 1, the reconstructed dphi/dx = 0, so however many iterations, still zero.)
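>> 
>> The shape of the fix in my case was simply clearing every piece of 
>> persistent work storage at the top of the residual. A minimal sketch 
>> (AppCtx and heat_flux are made-up names, not PETSc API):
>> 
>>     PetscErrorCode FormFunction(SNES snes, Vec x, Vec r, void *ctx)
>>     {
>>       AppCtx *user = (AppCtx *)ctx;
>>       /* zero ALL accumulators before use; a stale heat-flux
>>          vector was exactly my bug */
>>       PetscCall(VecZeroEntries(user->heat_flux));
>>       PetscCall(VecZeroEntries(r));
>>       /* ... accumulate fluxes and assemble the residual into r ... */
>>       return PETSC_SUCCESS;
>>     }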
>> 
>>  
>> 
>> -Ling
>> 
>>  
>> 
>> From: Mark Lohry <[email protected]>
>> Date: Sunday, April 21, 2024 at 12:35 PM
>> To: Zou, Ling <[email protected]>
>> Cc: PETSc <[email protected]>
>> Subject: Re: [petsc-users] finite difference jacobian errors when given 
>> non-constant initial condition
>> 
>> I'm fairly confident the coloring is correct -- I use the same process for 
>> 3D unstructured grids and everything seems to work. The residual function is 
>> also validated.
>> 
>>  
>> 
>> As a test I did as you suggested -- assume the matrix is dense -- and I get 
>> the same bad results, only now the zero blocks are filled in.
>> 
>>  
>> 
>> Assuming dense, giving it a constant vector, all is good:
>> 
>>  
>> 
>>  4.23516e-16     -1.10266      0.31831   -0.0852909            0            0     -0.31831      1.18795
>>      1.10266 -4.23516e-16     -1.18795      0.31831            0            0    0.0852909     -0.31831
>>     -0.31831      1.18795  2.11758e-16     -1.10266      0.31831   -0.0852909            0            0
>>    0.0852909     -0.31831      1.10266 -4.23516e-16     -1.18795      0.31831            0            0
>>            0            0     -0.31831      1.18795  2.11758e-16     -1.10266      0.31831   -0.0852909
>>            0            0    0.0852909     -0.31831      1.10266 -4.23516e-16     -1.18795      0.31831
>>      0.31831   -0.0852909            0            0     -0.31831      1.18795  4.23516e-16     -1.10266
>> 
>>  
>> 
>>  
>> 
>> Assuming dense, giving it sin(x), all is bad:
>> 
>>  
>> 
>> -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08  1.76177e+08  6.07287e+07  6.07287e+07  1.76177e+08
>> -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08  1.31161e+08  4.52116e+07  4.52116e+07  1.31161e+08
>>  1.31161e+08  4.52116e+07  4.52116e+07  1.31161e+08 -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08
>>  1.76177e+08  6.07287e+07  6.07287e+07  1.76177e+08 -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08
>>  1.76177e+08  6.07287e+07  6.07287e+07  1.76177e+08 -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08
>>  1.31161e+08  4.52116e+07  4.52116e+07  1.31161e+08 -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08
>> -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08  1.31161e+08  4.52116e+07  4.52116e+07  1.31161e+08
>> -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08  1.76177e+08  6.07287e+07  6.07287e+07  1.76177e+08
>> 
>>  
>> 
>> Scratching my head over here... I've been using these routines successfully 
>> for years in much more complex code.
>> 
>>  
>> 
>> On Sun, Apr 21, 2024 at 12:36 PM Zou, Ling <[email protected]> wrote:
>> 
>> Edit:
>> 
>> How do you do the coloring when using PETSc finite differencing? An 
>> incorrect coloring may give you a wrong Jacobian. For debugging purposes, the 
>> simplest way to rule out an incorrect coloring is to assume the matrix is 
>> dense (slow, but error-proof). If the numerics converge as expected, then 
>> fine-tune your coloring to make it right and fast.
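>> 
>> A sketch of what I mean by the dense test (debugging only; the option and 
>> function names are standard PETSc, the rest are placeholders):
>> 
>>     /* dense finite-difference Jacobian, no coloring */
>>     PetscCall(SNESSetJacobian(snes, J, J, SNESComputeJacobianDefault, NULL));
>> 
>> with J created as a dense matrix (e.g. via MatCreateSeqDense), or 
>> equivalently just run with the option -snes_fd.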
>>  
>> 
>>  
>> 
>> From: petsc-users <[email protected]> on behalf of Zou, Ling via 
>> petsc-users <[email protected]>
>> Date: Sunday, April 21, 2024 at 11:29 AM
>> To: Mark Lohry <[email protected]>, PETSc <[email protected]>
>> Subject: Re: [petsc-users] finite difference jacobian errors when given 
>> non-constant initial condition
>> 
>> Hi Mark, I am working on a project with numerics similar to yours, a 
>> one-dimensional finite volume method with a second-order TVD slope limiter, 
>> and PETSc finite differencing gives a perfect Jacobian even for complex 
>> problems.
>> 
>> So, I tend to believe that your implementation may have a problem. Some 
>> lessons I learned during my code development (see also the note after this 
>> list):
>> 
>> - How do you do the coloring when using PETSc finite differencing? An 
>>   incorrect coloring may give you a wrong Jacobian. The simplest way to 
>>   rule out an incorrect coloring is to assume the matrix is dense (slow, 
>>   but error-proof).
>> - A residual function that is not correctly implemented can also lead to an 
>>   incorrect Jacobian. In your case, you may want to take a careful look at 
>>   the order of execution: when to update your unknown vector, when to 
>>   perform the P1 reconstruction, and when to evaluate the residual.
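>> 
>> One more debugging aid that I believe applies here (check your PETSc 
>> version): the runtime option -snes_test_jacobian compares a hand-coded 
>> Jacobian against a finite-difference one during the solve, e.g.
>> 
>>     ./your_app -snes_test_jacobian -snes_test_jacobian_view
>> 
>> where ./your_app is a placeholder for the actual executable.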
>>  
>> 
>> -Ling
>> 
>>  
>> 
>> From: petsc-users <[email protected]> on behalf of Mark Lohry 
>> <[email protected]>
>> Date: Saturday, April 20, 2024 at 1:35 PM
>> To: PETSc <[email protected]>
>> Subject: [petsc-users] finite difference jacobian errors when given 
>> non-constant initial condition
>> 
>> I have a 1-dimensional P1 discontinuous Galerkin discretization of the 
>> linear advection equation with 4 cells and periodic boundaries on [-pi,+pi]. 
>> I'm comparing the results from SNESComputeJacobian with a hand-written 
>> Jacobian. Being linear, the Jacobian should be constant/independent of the 
>> solution.
>> 
>> When I set the initial condition passed to SNESComputeJacobian to some 
>> constant, say f(x)=1 or 0, the PETSc finite difference Jacobian agrees with 
>> my hand-coded version. But when I pass it some non-constant value, e.g. 
>> f(x)=sin(x), something goes horribly wrong in the PETSc Jacobian. 
>> Implementing my own rudimentary finite difference approximation (similar to 
>> how I thought PETSc computes it), I recover the correct Jacobian to within 
>> the expected error. Any idea what could be going on?
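>> 
>> For concreteness, the rudimentary approximation is just the obvious 
>> column-at-a-time loop, sketched below (FormFunction, x, xp, f0, f1, and n 
>> are placeholders for the actual objects):
>> 
>>     /* naive dense FD Jacobian: column j ~ (F(x + h*e_j) - F(x)) / h */
>>     PetscReal h = PETSC_SQRT_MACHINE_EPSILON;
>>     PetscCall(FormFunction(snes, x, f0, ctx));        /* baseline residual */
>>     for (PetscInt j = 0; j < n; ++j) {
>>       PetscCall(VecCopy(x, xp));
>>       PetscCall(VecSetValue(xp, j, h, ADD_VALUES));   /* perturb entry j */
>>       PetscCall(VecAssemblyBegin(xp));
>>       PetscCall(VecAssemblyEnd(xp));
>>       PetscCall(FormFunction(snes, xp, f1, ctx));
>>       PetscCall(VecAXPY(f1, -1.0, f0));               /* F(x+h*e_j) - F(x) */
>>       PetscCall(VecScale(f1, 1.0/h));                 /* column j of J */
>>       /* ... copy f1 into column j of the comparison matrix ... */
>>     }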
>> 
>>  
>> 
>> Analytically computed Jacobian:
>> 
>>  4.44089e-16     -1.10266      0.31831   -0.0852909            0            0     -0.31831      1.18795
>>      1.10266 -4.44089e-16     -1.18795      0.31831            0            0    0.0852909     -0.31831
>>     -0.31831      1.18795  4.44089e-16     -1.10266      0.31831   -0.0852909            0            0
>>    0.0852909     -0.31831      1.10266 -4.44089e-16     -1.18795      0.31831            0            0
>>            0            0     -0.31831      1.18795  4.44089e-16     -1.10266      0.31831   -0.0852909
>>            0            0    0.0852909     -0.31831      1.10266 -4.44089e-16     -1.18795      0.31831
>>      0.31831   -0.0852909            0            0     -0.31831      1.18795  4.44089e-16     -1.10266
>>     -1.18795      0.31831            0            0    0.0852909     -0.31831      1.10266 -4.44089e-16
>> 
>>  
>> 
>>  
>> 
>> PETSc finite difference Jacobian when given f(x)=1:
>> 
>>  4.44089e-16     -1.10266      0.31831   -0.0852909            0            0     -0.31831      1.18795
>>      1.10266 -4.44089e-16     -1.18795      0.31831            0            0    0.0852909     -0.31831
>>     -0.31831      1.18795  4.44089e-16     -1.10266      0.31831   -0.0852909            0            0
>>    0.0852909     -0.31831      1.10266 -4.44089e-16     -1.18795      0.31831            0            0
>>            0            0     -0.31831      1.18795  4.44089e-16     -1.10266      0.31831   -0.0852909
>>            0            0    0.0852909     -0.31831      1.10266 -4.44089e-16     -1.18795      0.31831
>>      0.31831   -0.0852909            0            0     -0.31831      1.18795  4.44089e-16     -1.10266
>>     -1.18795      0.31831            0            0    0.0852909     -0.31831      1.10266 -4.44089e-16
>> 
>>  
>> 
>> PETSc finite difference Jacobian when given f(x) = sin(x):
>> 
>> -1.65547e+08 -3.31856e+08 -1.25427e+09   4.4844e+08            0            0  1.03206e+08  7.86375e+07
>>  9.13788e+07  1.83178e+08  6.92336e+08  -2.4753e+08            0            0 -5.69678e+07 -4.34064e+07
>>   3.7084e+07  7.43387e+07  2.80969e+08 -1.00455e+08  -5.0384e+07 -2.99747e+07            0            0
>>   3.7084e+07  7.43387e+07  2.80969e+08 -1.00455e+08  -5.0384e+07 -2.99747e+07            0            0
>>            0            0  2.80969e+08 -1.00455e+08  -5.0384e+07 -2.99747e+07 -2.31191e+07 -1.76155e+07
>>            0            0  2.80969e+08 -1.00455e+08  -5.0384e+07 -2.99747e+07 -2.31191e+07 -1.76155e+07
>>  9.13788e+07  1.83178e+08            0            0 -1.24151e+08 -7.38608e+07 -5.69678e+07 -4.34064e+07
>> -1.65547e+08 -3.31856e+08            0            0  2.24919e+08   1.3381e+08  1.03206e+08  7.86375e+07
>> 
