We have an example in src/ts/tutorials/autodiff on using AD for
reaction-diffusion equations. It does exactly what Matt said: it differentiates
the stencil kernel to get the Jacobian kernel. More information is available in
this report:
https://arxiv.org/abs/1909.02836
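
To illustrate the idea, here is a minimal 1-D sketch (hypothetical code,
differentiated by hand; in the tutorial an AD tool, ADOL-C, generates the
derivative kernel instead):

  /* Pointwise residual kernel:
       F_i = (u[i-1] - 2 u[i] + u[i+1])/h^2 + u_i (1 - u_i)       */
  static inline PetscScalar Residual(PetscScalar um, PetscScalar u,
                                     PetscScalar up, PetscReal h)
  {
    return (um - 2.0*u + up)/(h*h) + u*(1.0 - u); /* diffusion + logistic reaction */
  }

  /* Differentiating the kernel w.r.t. (um, u, up) gives the Jacobian
     kernel, i.e. the three nonzeros of row i */
  static inline void ResidualJacobian(PetscScalar u, PetscReal h,
                                      PetscScalar *dFdum, PetscScalar *dFdu,
                                      PetscScalar *dFdup)
  {
    *dFdum = 1.0/(h*h);
    *dFdu  = -2.0/(h*h) + 1.0 - 2.0*u;
    *dFdup = 1.0/(h*h);
  }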
 

Hong (Mr.)

________________________________
From: petsc-users <petsc-users-boun...@mcs.anl.gov> on behalf of Matthew 
Knepley <knep...@gmail.com>
Sent: Friday, January 17, 2025 6:22 AM
To: Zou, Ling <l...@anl.gov>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Auto sparsity detection?

On Thu, Jan 16, 2025 at 10:43 PM Zou, Ling <l...@anl.gov> wrote:

Thank you, Matt.

It seems that, at least for the matrix coloring part, I am following the 'best practice'.

Yes, for FD approximations of the Jacobian.

If you have a stencil operation (like FEM or FVM), then AD can be very useful 
because you
only have to differentiate the kernel to get the Jacobian kernel.

  Thanks,

     Matt




-Ling



From: Matthew Knepley <knep...@gmail.com>
Date: Thursday, January 16, 2025 at 9:01 PM
To: Zou, Ling <l...@anl.gov>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Auto sparsity detection?


On Thu, Jan 16, 2025 at 9:50 PM Zou, Ling via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi all,



Does PETSc have some automatic matrix sparsity detection algorithm available?

Something like:
https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/



Sparsity detection would rely on introspection of the user code for
ComputeFunction(), which is not possible in C (unless you were to code up your
evaluation in some symbolic framework).



The background is that I use finite differencing plus matrix coloring to
(efficiently) get the Jacobian. For the matrix coloring part, I color the
matrix based on mesh connectivity and variable dependencies, which is not bad,
but I am just trying to be lazy and eliminate even this part.
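
Concretely, the wiring I use looks roughly like this (a sketch, error checking
omitted; snes, J, FormFunction, and user stand in for my actual solver,
preallocated Jacobian, residual, and context):

  ISColoring    iscoloring;
  MatColoring   mc;
  MatFDColoring fdcoloring;

  /* Color the matrix using its preallocated nonzero structure */
  MatColoringCreate(J, &mc);
  MatColoringSetType(mc, MATCOLORINGSL); /* smallest-last greedy coloring */
  MatColoringSetFromOptions(mc);
  MatColoringApply(mc, &iscoloring);
  MatColoringDestroy(&mc);

  /* Attach the residual so PETSc can finite-difference one color at a time */
  MatFDColoringCreate(J, iscoloring, &fdcoloring);
  MatFDColoringSetFunction(fdcoloring,
                           (PetscErrorCode (*)(void))FormFunction, &user);
  MatFDColoringSetFromOptions(fdcoloring);
  MatFDColoringSetUp(J, iscoloring, fdcoloring);
  ISColoringDestroy(&iscoloring);

  /* Build the Jacobian by colored finite differences */
  SNESSetJacobian(snes, J, J, SNESComputeJacobianDefaultColor, fdcoloring);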



This is how the automatic frameworks also work. This is how we compute the 
sparsity pattern for PetscFE and PetscFV.
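
For example, with a DMDA the preallocated matrix, and hence the pattern the
coloring uses, comes straight from the grid connectivity (a minimal sketch,
error checking omitted; the same DMCreateMatrix() call works for DMPlex with
PetscFE/PetscFV):

  DM  da;
  Mat J;

  /* 2-D structured grid: one field, star stencil of width 1 */
  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
               DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, NULL, NULL, &da);
  DMSetFromOptions(da);
  DMSetUp(da);

  /* Nonzero pattern derived from the grid connectivity and stencil */
  DMCreateMatrix(da, &J);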



A related but different question: how much does PETSc support automatic
differentiation?

I see an old paper:

https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf

and discussion in the roadmap:

https://petsc.org/release/community/roadmap/

I am thinking that if AD works, I would not even need the finite-difference
Jacobian, or I could keep it as another option.



Other people understand that better than I do.



  Thanks,



     Matt



Best,



-Ling






--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
