Thank you, Matt.
It seems that, at least for the matrix coloring part, I am following the best practice.

-Ling

From: Matthew Knepley <knep...@gmail.com>
Date: Thursday, January 16, 2025 at 9:01 PM
To: Zou, Ling <l...@anl.gov>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Auto sparsity detection?
On Thu, Jan 16, 2025 at 9:50 PM Zou, Ling via petsc-users 
<petsc-users@mcs.anl.gov<mailto:petsc-users@mcs.anl.gov>> wrote:
Hi all,

Does PETSc have some automatic matrix sparsity detection algorithm available?
Something like:
https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/

Sparsity detection would rely on introspection of the user code for 
ComputeFunction(), which is not
possible in C (unless you were to code up your evaluation in some symbolic 
framework).
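To make the point above concrete, here is a toy sketch (not PETSc code, and much simpler than real frameworks) of the operator-overloading approach such symbolic frameworks use: each value carries the set of unknown indices it depends on, so running the residual once on "tracer" values yields the sparsity pattern. The residual function below is a hypothetical example.

```python
class Tracer:
    """Carries the set of unknown indices an expression depends on."""
    def __init__(self, deps):
        self.deps = frozenset(deps)

    def _combine(self, other):
        other_deps = other.deps if isinstance(other, Tracer) else frozenset()
        return Tracer(self.deps | other_deps)

    # Arithmetic only merges dependency sets; numeric values are irrelevant here.
    __add__ = __radd__ = __sub__ = __rsub__ = _combine
    __mul__ = __rmul__ = __truediv__ = __rtruediv__ = _combine
    __neg__ = lambda self: self

def detect_sparsity(residual, n):
    """Return, for each residual row, the sorted list of nonzero columns."""
    x = [Tracer({j}) for j in range(n)]
    return [sorted(r.deps) for r in residual(x)]

# Hypothetical example: a 1D three-point-stencil residual with Dirichlet ends.
def residual(u):
    n = len(u)
    return [u[i] if i in (0, n - 1)
            else u[i - 1] - 2.0 * u[i] + u[i + 1]
            for i in range(n)]

print(detect_sparsity(residual, 5))
# [[0], [0, 1, 2], [1, 2, 3], [2, 3, 4], [4]]
```

In C there is no such overloading hook for plain `double` arithmetic, which is why the user code would have to be written against a symbolic/AD type from the start.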

The background is that I use finite differencing plus matrix coloring to 
(efficiently) get the Jacobian.
For the matrix coloring part, I color the matrix based on mesh connectivity and 
variable dependencies, which works reasonably well, but I am just trying to be 
lazy and eliminate even that step.

This is how the automatic frameworks also work. This is how we compute the 
sparsity pattern for PetscFE and PetscFV.
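For illustration, here is a minimal pure-Python sketch of the coloring idea being discussed (not the PETSc API; PETSc's MatColoring/MatFDColoring machinery does this far more efficiently): greedily color the columns so that no two columns of the same color share a row, then recover the whole Jacobian with one perturbed residual evaluation per color instead of one per column. The sparsity pattern used at the end is a hypothetical tridiagonal example.

```python
def greedy_color(sparsity, n):
    """Assign colors so that columns sharing any row get different colors."""
    color = [-1] * n
    for j in range(n):
        forbidden = set()
        for row in sparsity:              # each row is a list of nonzero columns
            if j in row:
                forbidden.update(color[k] for k in row if color[k] >= 0)
        c = 0
        while c in forbidden:
            c += 1
        color[j] = c
    return color

def fd_jacobian_colored(f, x, sparsity, color, h=1e-7):
    """Finite-difference Jacobian: one residual evaluation per color."""
    f0 = f(x)
    n = len(x)
    J = [[0.0] * n for _ in range(len(f0))]
    for c in range(max(color) + 1):
        cols = [j for j in range(n) if color[j] == c]
        xp = list(x)
        for j in cols:
            xp[j] += h                    # perturb all same-color columns at once
        fp = f(xp)
        for j in cols:
            for i, row in enumerate(sparsity):
                if j in row:              # row i sees only column j's perturbation
                    J[i][j] = (fp[i] - f0[i]) / h
    return J

# Hypothetical tridiagonal pattern: 3 colors suffice instead of 5 evaluations.
sparsity = [[0], [0, 1, 2], [1, 2, 3], [2, 3, 4], [4]]
print(greedy_color(sparsity, 5))  # [0, 1, 2, 0, 1]
```

With a pattern and coloring in hand, a PETSc user would normally just preallocate the Mat with that nonzero structure and let the coloring-based finite-difference Jacobian (e.g. via the -snes_fd_color option) do the equivalent of fd_jacobian_colored above.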

A related but different question: to what extent does PETSc support automatic 
differentiation?
I see an old paper:
https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf
and discussion in the roadmap:
https://petsc.org/release/community/roadmap/
I am thinking that if AD works, I would not even need the finite-difference 
Jacobian, or could keep it as another option.

Other people understand that better than I do.

  Thanks,

     Matt

Best,

-Ling


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
