For a bit of assistance, you can use DMComposite and DMRedundantCreate; see
src/snes/tutorials/ex21.c and ex22.c.
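As a rough illustration of that pattern (not the tutorials' exact code), the sketch below packs a redundant "design" DM, owned by rank 0 but visible everywhere, together with a distributed "state" DM into one DMComposite. The design dimension `Nd` and the 1-D DMDA standing in for the state are hypothetical placeholders for your problem.

```c
#include <petscdm.h>
#include <petscdmredundant.h>
#include <petscdmcomposite.h>
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM       pack, dmdesign, dmstate;
  PetscInt Nd = 10; /* hypothetical number of design variables */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* design variables are stored redundantly, owned by rank 0 */
  PetscCall(DMRedundantCreate(PETSC_COMM_WORLD, 0, Nd, &dmdesign));
  /* hypothetical distributed state: a 1-D DMDA with 100 points */
  PetscCall(DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 100, 1, 1, NULL, &dmstate));
  PetscCall(DMSetUp(dmstate));
  /* pack both into one composite DM */
  PetscCall(DMCompositeCreate(PETSC_COMM_WORLD, &pack));
  PetscCall(DMCompositeAddDM(pack, dmdesign));
  PetscCall(DMCompositeAddDM(pack, dmstate));
  /* DMCreateGlobalVector(pack, &X) now yields a single vector
     holding the design part followed by the state part */
  PetscCall(DMDestroy(&dmstate));
  PetscCall(DMDestroy(&dmdesign));
  PetscCall(DMDestroy(&pack));
  PetscCall(PetscFinalize());
  return 0;
}
```

DMCompositeGetAccess/DMCompositeScatter can then be used inside the objective and gradient routines to pull the two sub-vectors apart; see the cited tutorials for complete working versions.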
Note that when computing redundantly, it is critical that the computation be
deterministic (i.e., no atomics, and no randomness without matching seeds) so
that the redundant copies stay identical on every rank.
This is a problem with MPI programming and optimization; I am unaware of a
perfect solution.
Put the design variables into the solution vector on MPI rank 0, and when
evaluating your objective/gradient, send those values to every MPI process
that uses them. You can use a VecScatter to perform this communication.
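A minimal sketch of that pattern, assuming the design variables are the entries of a parallel Vec whose storage all sits on rank 0 (the size `Nd` is a hypothetical placeholder): `VecScatterCreateToAll` builds a scatter that copies the whole vector into a sequential Vec on every rank.

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec         xdesign, xlocal;
  VecScatter  scat;
  PetscMPIInt rank;
  PetscInt    Nd = 10, nlocal; /* Nd: hypothetical design dimension */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  /* rank 0 owns all Nd design variables; the other ranks own none */
  nlocal = (rank == 0) ? Nd : 0;
  PetscCall(VecCreateMPI(PETSC_COMM_WORLD, nlocal, Nd, &xdesign));
  PetscCall(VecSet(xdesign, 1.0));
  /* scatter the full vector to a sequential copy on every rank */
  PetscCall(VecScatterCreateToAll(xdesign, &scat, &xlocal));
  PetscCall(VecScatterBegin(scat, xdesign, xlocal, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecScatterEnd(scat, xdesign, xlocal, INSERT_VALUES, SCATTER_FORWARD));
  /* every rank can now read all design values from xlocal
     inside its objective/gradient evaluation */
  PetscCall(VecScatterDestroy(&scat));
  PetscCall(VecDestroy(&xlocal));
  PetscCall(VecDestroy(&xdesign));
  PetscCall(PetscFinalize());
  return 0;
}
```

The scatter can be created once and reused at every objective/gradient call; only Begin/End need to run each time.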
Hi PETSc team,
I have a question regarding the parallel layout of a PETSc vector to be used in
TAO optimizers for cases where the optimization variables split into 'design'
and 'state' variables (e.g., PDE-constrained optimization, as in tao_lcl).
In our case, the state variable naturally