> What happens if you call TSCreateAdjointsTS() on a TS obtained with
> TSCreateAdjointsTS()? Is the resulting TS useful?
>

I don't think so. Theoretically, it should be a tangent linear model with a
different forcing function.
> Barry
>
> On Oct 17, 2017, at 12:37 AM, Stefano Zampini <[email protected]> wrote:
>
> > >> In case of multiple objectives, there may be a performance reason to
> > >> amortize evaluation of several at once, though the list interface is
> > >> convenient. Consider common objectives being quantities like lift and
> > >> drag on different surfaces of a fluids simulation or stress/strain at
> > >> certain critical joints in a structure. Although these have some
> > >> locality, it's reasonable to assume that state dependence will have
> > >> quickly become global, thus make no attempt to handle sparse
> > >> representations of the adjoint vectors lambda.
> > >>
> > > I don't get this comment. Is it related to multi-objective optimization
> > > (e.g. Pareto)?
> >
> > Adjoints are usually preferred any time you are differentiating a small
> > number of output variables with respect to a large number of inputs. It
> > could be for multi-objective optimization, but it's every bit as
> > relevant for physical sensitivities.
>
> If we are not talking about Pareto optimization, and thus we don't need
> a separate output from each function, then users can pass a single function
> that computes all the quantities they need at the same time. Anyway, I
> don't mind having a single callback for multiple functions.
>
> > >> How are parameters accessed in TSComputeRHSFunction? It looks like
> > >> they're coming out of the context. Why should this be different? (If
> > >> parameters need to go into a Vec, we could do that, but it comes at a
> > >> readability and possibly parallel cost if the global Vec needs to be
> > >> communicated to local vectors.)
> > >>
> > > Design parameters are fixed throughout an adjoint/TLM run. They can be
> > > communicated locally once at the beginning of the run.
> > > This is what TSSetUpFromDesign and TSSetSetUpFromDesign are supposed to
> > > handle, if I get your comment.
> >
> > My point is that users currently get design parameters out of the
> > context when evaluating their RHSFunction and friends. If that is the
> > endorsed way to access design variables, then your new function doesn't
> > need to pass the vector. If you need to pass the parameter vector in
> > that one function, instead of obtaining them from the context, then
> > you'd need to pass the parameter vector everywhere and discourage using
> > the context for active design variables. I think there are merits to
> > both approaches, but it absolutely needs to be consistent.
>
> So, to be consistent, we have to force users to perform an operation in
> a single way?
>
> Yes, TSSetUpFromDesign is among the last things I have added, and it allows
> updating the application context (among other things, as it is very
> general). I can remove the parameter vector from TSEvalGradientDAE and
> TSEvalGradientIC; however, having these vectors there makes it clear that
> we allow non-linear dependency on the parameters too. I can add a comment
> in the man pages that the vectors are guaranteed to be the same ones passed
> in my TSSetUpFromDesign, or remove them. Your call.
>
> Users can do anything they want with the forward model context, but they
> are not free to change the application context of the adjoint TS. Maybe
> this should be improved by adding an extra slot to AdjointTSCtx to carry
> over the user context (for the adjoint, I mean)?
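
A minimal sketch of the pattern being discussed, assuming a hypothetical
AppCtx and callback names (MySetUpFromDesign and MyRHSFunction are
illustrations, not the signature the branch's TSSetSetUpFromDesign actually
expects): the design vector is scattered to every process once before the
run, stored in the application context, and the RHSFunction then reads the
parameters out of the context.

  #include <petscts.h>

  /* Hypothetical application context: the design parameters are stored here
     as a redundant sequential Vec so callbacks can read them without
     further communication. */
  typedef struct {
    Vec params; /* local copy of the design vector */
  } AppCtx;

  /* Sketch of a "set up from design" style callback: scatter the (possibly
     distributed) design vector to all processes once, before the run. */
  static PetscErrorCode MySetUpFromDesign(TS ts, Vec x0, Vec design, void *ctx)
  {
    AppCtx         *user = (AppCtx*)ctx;
    VecScatter      scat;
    PetscErrorCode  ierr;

    PetscFunctionBeginUser;
    ierr = VecDestroy(&user->params);CHKERRQ(ierr); /* drop any stale copy */
    ierr = VecScatterCreateToAll(design,&scat,&user->params);CHKERRQ(ierr);
    ierr = VecScatterBegin(scat,design,user->params,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecScatterEnd(scat,design,user->params,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecScatterDestroy(&scat);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  /* The RHSFunction then gets the parameters out of the context, as in the
     current TSComputeRHSFunction() usage. Toy model: du/dt = -p[0]*u. */
  static PetscErrorCode MyRHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
  {
    AppCtx            *user = (AppCtx*)ctx;
    const PetscScalar *p,*u;
    PetscScalar       *f;
    PetscErrorCode     ierr;

    PetscFunctionBeginUser;
    ierr = VecGetArrayRead(user->params,&p);CHKERRQ(ierr);
    ierr = VecGetArrayRead(U,&u);CHKERRQ(ierr);
    ierr = VecGetArray(F,&f);CHKERRQ(ierr);
    f[0] = -p[0]*u[0];
    ierr = VecRestoreArray(F,&f);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(U,&u);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(user->params,&p);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

Communicating the design vector once per run is what makes it safe for the
RHSFunction (and the other callbacks that need the parameters) to keep
pulling them out of the context instead of receiving a Vec argument
everywhere.
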
> > > https://bitbucket.org/petsc/petsc/src/c2e9112e7fdfd89985f9ffc4d68b0d46cf7cad52/src/ts/interface/tspdeconstrainedutils.c?at=stefano_zampini%2Ffeature-continuousadjoint&fileviewer=file-view-default#tspdeconstrainedutils.c-579
> > >
> > > Here is how ex23.c uses it
> > >
> > > https://bitbucket.org/petsc/petsc/src/c2e9112e7fdfd89985f9ffc4d68b0d46cf7cad52/src/ts/examples/tutorials/ex23.c?at=stefano_zampini%2Ffeature-continuousadjoint&fileviewer=file-view-default#ex23.c-677
> >
> > And yet you redo the scatter here instead of using what you stuffed into
> > the context. If you needed to redo it for correctness, you'd also need
> > to in every other function that accesses design parameters.
> >
> > https://bitbucket.org/petsc/petsc/src/c2e9112e7fdfd89985f9ffc4d68b0d46cf7cad52/src/ts/examples/tutorials/ex23.c?at=stefano_zampini%2Ffeature-continuousadjoint&fileviewer=file-view-default#ex23.c-274
>
> This is a leftover from a previous version of the code (there's also a
> comment) that was not using TSSetSetUpFromDesign, and it's definitely not
> needed.
>
> > >> > Both methods need the Jacobian of the DAE wrt the parameters:
> > >> > H TSAdjointSetRHSJacobian(), S TSSetGradientDAE()
> > >> >
> > >> > Initial condition dependence on the parameters is implicitly computed
> > >> > in Hong's code (limited to linear dependence on all the variables);
> > >>
> > >> How so? Once the user gets \lambda(time=0), they can apply the chain
> > >> rule to produce any dependency on the parameter vector?
> > >
> > > Yes, the chain rule is implemented here
> > >
> > > https://bitbucket.org/petsc/petsc/src/c2e9112e7fdfd89985f9ffc4d68b0d46cf7cad52/src/ts/interface/tspdeconstrainedutils.c?at=stefano_zampini%2Ffeature-continuousadjoint&fileviewer=file-view-default#tspdeconstrainedutils.c-254
> >
> > I know you have a callback for it, but Hong's interface is plenty
> > functional for such systems; they just call that derivative instead of
> > writing and registering a function to do that.
>
> I'm not saying Hong's interface is not functional. The initial condition
> gradients allow automating the process in TSComputeObjectiveAndGradient(),
> TSComputeHessian(), and MatMult_Propagator(). Anyway, the lambda variables
> are not modified by AdjointTSComputeFinalGradient() (which adds the IC
> dependency) and users can do whatever they want with them.
>
> I know you don't like TSComputeObjectiveAndGradient(), but it's code that
> users have to write anyway to compute the gradient of a DAE not arising
> from a PDAE. How can this be automated for PDAEs? Should we store the
> AdjointTS inside the model TS and use TSGetAdjointTS(ts,&ts->adjts) with
>
>   TSGetAdjointTS(TS f,TS* a) {
>     if (!f->adjts) TSCreateAdjointTS(f,&f->adjts);
>     *a = f->adjts;
>   }
>
> and the corresponding setter to allow users to do
>
>   TSCreateAdjointTS(ts,&ats);
>   TSSetRHSJacobian(ats,...);
>   TSSetAdjointTS(ts,ats);
>
> or
>
>   TSGetAdjointTS(ts,&ats);
>   TSSetRHSJacobian(ats,...);
>
> --
> Stefano

--
Stefano
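
A short sketch of the chain-rule step mentioned above, under the assumption
that the initial state is given by a map x0 = G(p) of the design parameters;
the helper name AddInitialConditionGradient and the Jacobian matrix G_p are
illustrative, not functions from the branch:

  #include <petscts.h>

  /* Hypothetical helper sketching the chain rule discussed above:
     accumulate the initial-condition dependence into the gradient,
       grad += G_p^T * lambda0,
     where G_p = dG/dp is the Jacobian of the map x0 = G(p) from the design
     parameters to the initial state, and lambda0 is the adjoint at time 0. */
  static PetscErrorCode AddInitialConditionGradient(Mat G_p, Vec lambda0, Vec grad)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatMultTransposeAdd(G_p,lambda0,grad,grad);CHKERRQ(ierr); /* grad = G_p^T lambda0 + grad */
    PetscFunctionReturn(0);
  }

Given \lambda(time=0), any dependence of the initial condition on the
parameters, linear or not, reduces to applying the transpose of its Jacobian,
which is the chain rule Jed refers to above.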
