Hi,
Let me include the petsc-dev team in the discussion. TAO uses PETSc matrices, vectors, and linear solvers for its operations, and some of those operations already have GPU implementations. For the unconstrained solvers, the Newton methods are built on the linear conjugate gradient method, so the key operation is the matrix-vector product. The quasi-Newton methods end up using a lot of vector operations to apply the inverse of a rank-k matrix; there may be an opportunity to implement this operation more efficiently on a GPU using a matrix representation. The nonlinear conjugate gradient methods use only a "small" number of vector operations, so I am not sure how much benefit a GPU implementation of those would provide.

What I would probably do is profile your problem, find out which operations you need to run faster, and then see whether it makes sense to do them on the GPU.

Note: I suspect a GPU-only version of the Newton, quasi-Newton, or nonlinear conjugate gradient methods would be difficult; the line search and the function evaluations would seem to be the biggest obstacles to putting on a GPU.

Todd.

> On Dec 3, 2018, at 3:27 PM, Weston, Brian Thomas <[email protected]> wrote:
> 
> Hello,
> 
> We are currently developing a new code at LLNL and we plan to leverage
> PETSc/TAO for the Newton-Krylov solvers and minimization methods. We plan on
> developing the code to be compatible with the GPU and we were wondering if
> many of the methods in TAO are compatible and performant on the GPU. In
> particular, the unconstrained minimization routines. Thanks.
> 
> Best,
> Brian
> 
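P.S. To make the quasi-Newton point above concrete: applying the inverse of the rank-k (limited-memory) approximation is typically done with the L-BFGS two-loop recursion, which is built entirely from dot products and axpy-style vector updates. Below is a minimal NumPy sketch of that recursion for illustration only; the function name and argument layout are hypothetical and this is not TAO's actual implementation.

```python
import numpy as np

def lbfgs_apply_inverse(g, s_list, y_list):
    """Apply the L-BFGS inverse-Hessian approximation to a gradient g
    via the standard two-loop recursion (dot products and axpy only).

    s_list : list of step vectors   s_i = x_{i+1} - x_i   (oldest first)
    y_list : list of gradient diffs y_i = g_{i+1} - g_i   (oldest first)
    """
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest, peeling off curvature corrections.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q -= alpha * y
    # Scale by an initial inverse-Hessian guess gamma = (s.y)/(y.y),
    # taken from the most recent pair.
    s, y = s_list[-1], y_list[-1]
    r = (np.dot(s, y) / np.dot(y, y)) * q
    # Second loop: oldest pair to newest, adding the corrections back.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r  # approximates H^{-1} g
```

Every line in the loops is a VecDot or VecAXPY-style kernel, which is why the cost of this solver is dominated by vector operations rather than matrix-vector products.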
