I was playing around with my example last night and now I'm curious.
When I switched to using TSSetIFunction/TSSetIJacobian, the problem
seemed to go away completely. The integration finishes, and SNES says
the Jacobian is correct. I haven't yet plotted the results to make sure
the answer is right, but things seem promising. Does anyone know if this
is also a workaround for the problem? I could see how doing the scaling
and shifting manually could be helpful in this instance.
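
For reference, the implicit formulation for my toy problem dy/dt = -y looks
roughly like this (a sketch from memory, not my exact code; error checking
omitted). The IJacobian callback receives the shift "a" explicitly and builds
a*I + dF/dy itself, so nothing is left in a scaled/shifted state between calls:

/* Implicit form: F(t, y, ydot) = ydot + y = 0, equivalent to dy/dt = -y */
PetscErrorCode ifunc(TS ts, PetscReal t, Vec y, Vec ydot, Vec F, void *ctx)
{
  VecWAXPY(F, 1.0, ydot, y);   /* F = ydot + y */
  return 0;
}

/* Newton matrix: a*dF/dydot + dF/dy = a*I + I */
PetscErrorCode ijac(TS ts, PetscReal t, Vec y, Vec ydot, PetscReal a,
                    Mat J, Mat B, void *ctx)
{
  MatZeroEntries(B);
  MatShift(B, a + 1.0);        /* B = (a + 1)*I */
  MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
  return 0;
}

/* registered via TSSetIFunction(ts, NULL, ifunc, NULL) and
   TSSetIJacobian(ts, J, J, ijac, NULL) */
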
Ellen
On 10/18/19 2:33 AM, Smith, Barry F. wrote:
>
> PetscErrorCode jac(TS ts, PetscReal t, Vec y, Mat J, Mat B, void *ptr)
> {
>   /* reassembling resets the scale/shift that TS left in the matrix */
>   MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
>   /* stash the current state for the shell MatMult */
>   VecCopy(y, (Vec) ptr);
>   return 0;
> }
>
> There is a "feature" of TS: it shifts and scales the Jacobian you provide
> in order to define the nonlinear problem for each particular ODE algorithm.
> But when it finishes the nonlinear solve it does not "fix" the shift and
> scale; thus you need to call MatAssemblyBegin()/MatAssemblyEnd() to put the
> Jacobian back in its original state. This has been an issue for years
> (https://gitlab.com/petsc/petsc/issues/293), but no one seems to care, so it
> continues to haunt users. Sorry about this.
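>
> In outline (a sketch, not the exact source), what TS does with the Jacobian
> you return when you provide a RHSJacobian for ydot = G(y) is roughly:
>
>    jac(ts, t, y, J, B, ctx);   /* your callback fills B with dG/dy */
>    MatScale(B, -1.0);          /* B = -dG/dy                       */
>    MatShift(B, shift);         /* B = shift*I - dG/dy              */
>
> and the scale and shift are never undone afterwards. For a MATSHELL,
> MatAssemblyBegin()/MatAssemblyEnd() happen to reset the accumulated scale
> and shift, which is why the jac() above just reassembles at the top.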
>
> Barry
>
>
>
>
>> On Oct 17, 2019, at 5:47 PM, Ellen Price via petsc-users
>> <[email protected]> wrote:
>>
>> I like Barry's idea of going matrix-free here. Unfortunately, I just can't
>> seem to make MatShell and TS work together, even using the technique that I
>> found on the mailing list from a while ago. From what I understand, I should:
>>
>> 1. When the Jacobian function is called, save the state vector into the
>> MatShell context.
>> 2. When the MatTimes function is called, compute the Jacobian at that state
>> vector and output the Jacobian-times-vector.
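>>
>> In code, that recipe looks roughly like this (a sketch of the idea, not the
>> attached file verbatim; error checking omitted):
>>
>> /* Jacobian callback: just stash the current state in the shell context */
>> PetscErrorCode jac(TS ts, PetscReal t, Vec y, Mat J, Mat B, void *ptr)
>> {
>>   Vec ystate;
>>   MatShellGetContext(J, &ystate);
>>   VecCopy(y, ystate);
>>   return 0;
>> }
>>
>> /* shell MatMult: apply the Jacobian at the saved state; for dy/dt = -y
>>    the Jacobian is -I, so J*x = -x and the state is not even needed */
>> PetscErrorCode mymult(Mat J, Vec x, Vec Jx)
>> {
>>   VecCopy(x, Jx);
>>   VecScale(Jx, -1.0);
>>   return 0;
>> }
>>
>> /* setup: MatCreateShell(PETSC_COMM_WORLD, n, n, N, N, (void*)ystate, &J);
>>           MatShellSetOperation(J, MATOP_MULT, (void (*)(void))mymult);
>>           TSSetRHSJacobian(ts, J, J, jac, NULL);                          */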
>>
>> Yet, when I do this for a stupidly simple problem (not my SPH problem, see
>> attached minimal example) with dy/dt = -y, I can't seem to make it work, and
>> I can't figure out why. TS says the nonlinear solve diverged, and SNES says
>> the Jacobian is wrong, but I know what the Jacobian of this function should
>> be. I'm kind of at a loss here. Can anyone tell me what I'm doing wrong?
>>
>> Compile with:
>> mpicc -Wall -Wextra -W -ggdb -isystem /path/to/petsc-3.10.3-dbg-gcc/include
>> mwe.c -L /path/to/petsc-3.10.3-dbg-gcc/lib -lpetsc -o mwe
>>
>> Run with:
>> ./mwe -ts_monitor -snes_test_jacobian
>>
>> Ellen
>>
>> On Wed, Oct 16, 2019 at 10:07 AM Matthew Knepley via petsc-users
>> <[email protected]> wrote:
>> On Wed, Oct 16, 2019 at 9:32 AM Dave May <[email protected]> wrote:
>> What Ellen wants to do seems exactly the same use case as required by
>> dynamic AMR.
>>
>> Some thoughts:
>> * If the target problem is nonlinear, then you will need to evaluate the
>> Jacobian more than once (with the same nonzero pattern) per time step. You
>> would also have to solve a linear problem at each Newton iterate.
>> Collectively I think both tasks will consume much more time than that
>> required to create a new matrix and determine / set the nonzero pattern
>> (which is only required once per time step).
>>
>> * If you are using an incompressible SPH method (e.g. you use a kernel with
>> a constant compact support) then you will have code which allows you to
>> efficiently determine which particles interact, e.g. via a background cell
>> structure, and thus you have a means to infer the nonzero structure. However
>> computing the off-diagonal counts can be a pain...
>>
>> * Going further, provided you have a unique id assigned to each particle,
>> you can use MatPreallocator
>> (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocatorPreallocate.html)
>> to easily define the exact nonzero pattern.
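>>
>> Usage is something like this (an untested sketch; N is the global problem
>> size, and the insertion loop below is a tridiagonal stand-in for whatever
>> neighbor loop you already have):
>>
>> Mat preall, J;
>> PetscInt i, Istart, Iend;
>> MatCreate(PETSC_COMM_WORLD, &preall);
>> MatSetType(preall, MATPREALLOCATOR);
>> MatSetSizes(preall, PETSC_DECIDE, PETSC_DECIDE, N, N);
>> MatSetUp(preall);
>> MatGetOwnershipRange(preall, &Istart, &Iend);
>> for (i = Istart; i < Iend; i++) {   /* values ignored, only the pattern */
>>   MatSetValue(preall, i, i, 0.0, INSERT_VALUES);
>>   if (i > 0)     MatSetValue(preall, i, i - 1, 0.0, INSERT_VALUES);
>>   if (i < N - 1) MatSetValue(preall, i, i + 1, 0.0, INSERT_VALUES);
>> }
>> MatAssemblyBegin(preall, MAT_FINAL_ASSEMBLY);
>> MatAssemblyEnd(preall, MAT_FINAL_ASSEMBLY);
>> /* transfer the recorded pattern to the real Jacobian */
>> MatCreate(PETSC_COMM_WORLD, &J);
>> MatSetType(J, MATAIJ);
>> MatSetSizes(J, PETSC_DECIDE, PETSC_DECIDE, N, N);
>> MatPreallocatorPreallocate(preall, PETSC_TRUE, J);
>> MatDestroy(&preall);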
>>
>> Given all the above, I don’t see why you shouldn’t try your idea of creating
>> a new matrix at each step and assembling the Jacobian.
>> Why not just try using MatPreallocator and profile the time required to
>> define the nonzero structure?
>>
>> I like Barry’s idea of defining the preconditioner for the Jacobian using an
>> operator defined via kernels with smaller compact support. I’d be interested
>> to know how effective that is as a preconditioner.
>>
>> There is potentially a larger issue to consider (if your application runs in
>> parallel). Whilst the number of particles is constant in time, the number of
>> particles per MPI rank will likely change as particles advect (I'm assuming
>> you decomposed the problem using the background search cell grid and do not
>> load balance the particles, which is commonly done with incompressible SPH
>> implementations). As a result, the local size of the Vec object which stores
>> the solution will change between time steps. Vec cannot be re-sized, hence
>> you will not only need to change the nonzero structure of the Jacobian but
>> you will also need to destroy/create all Vec objects and all objects
>> associated with the nonlinear solve. Given this, I'm not even sure you can
>> use TS for your use case (hopefully a TS expert will comment on this).
>>
>> My experience has been that creating new objects (Vec, Mat, KSP, SNES) in
>> PETSc is fast (compared to a linear solve). You might have to give up on
>> using TS, and instead roll your own time integrator. By doing this you will
>> have control of only a SNES object (plus a Mat for J, and Vecs for the
>> residual and solution) which you can create and destroy within each time
>> step. To use FD coloring you would need to pass
>> SNESComputeJacobianDefaultColor to SNESSetJacobian(), along with a
>> preallocated matrix (which you define using MatPreallocator).
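>>
>> As a sketch of the per-time-step setup (names like FormResidual, r, user
>> are placeholders; error checking omitted):
>>
>> SNES snes;
>> SNESCreate(PETSC_COMM_WORLD, &snes);
>> SNESSetFunction(snes, r, FormResidual, &user);
>> /* J preallocated via MatPreallocator as above; a NULL context lets
>>    PETSc build the coloring from J's nonzero pattern automatically */
>> SNESSetJacobian(snes, J, J, SNESComputeJacobianDefaultColor, NULL);
>> SNESSetFromOptions(snes);
>> SNESSolve(snes, NULL, y);
>> SNESDestroy(&snes);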
>>
>> If rolling your own TS would work, then TSStep() will definitely work, and
>> saves a lot of coding for multistep methods.
>>
>> Thanks,
>>
>> Matt
>>
>> Thanks
>> Dave
>>
>>
>>
>> On Wed 16. Oct 2019 at 13:25, Matthew Knepley via petsc-users
>> <[email protected]> wrote:
>> On Tue, Oct 15, 2019 at 4:56 PM Smith, Barry F. via petsc-users
>> <[email protected]> wrote:
>>
>> Because of the difficulty of maintaining the nonzero structure of the
>> matrix for such problems, aren't these problems often done "matrix-free"?
>>
>> So you provide a routine that computes the action of the operator but
>> doesn't form the operator explicitly (and you hope that you don't need a
>> preconditioner). There are simple (but less optimal) ways to do this, as
>> well as more sophisticated ones (such as multipole methods).
>>
>> If the convergence of the linear solver is too slow (due to lack of
>> preconditioner) you might consider continuing with matrix free but at each
>> new Newton solve (or several Newton solves) construct a very sparse matrix
>> that captures just the very local coupling in the problem. Once particles
>> have moved around a bit you would throw away the old matrix and construct a
>> new one. For example, the matrix might capture just the interactions
>> between particles that are within some radius of each other. You could use
>> a direct solver or iterative solver to solve this very sparse system.
>>
>> I tried to do this with Dan Negrut many years ago and had the same problems.
>> That is part of why incompressible SPH never works,
>> since you need global modes.
>>
>> Thanks,
>>
>> Matt
>>
>> Barry
>>
>> This is why KSPSetOperators, SNESSetJacobian, and
>> TSSetRHSJacobian/TSSetIJacobian take two Jacobian matrices: the first holds
>> the matrix-free Jacobian and the second holds the approximation used inside
>> the preconditioner.
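>>
>> For example (a sketch; Jshell, Bsparse, FormJacobian, and user are
>> placeholder names):
>>
>> /* Jshell applies the full Jacobian matrix-free; Bsparse holds only the
>>    short-range couplings and is what the preconditioner is built from */
>> TSSetRHSJacobian(ts, Jshell, Bsparse, FormJacobian, &user);
>> /* or equivalently at the SNES / KSP level: */
>> SNESSetJacobian(snes, Jshell, Bsparse, FormJacobian, &user);
>> KSPSetOperators(ksp, Jshell, Bsparse);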
>>
>>
>>
>>> On Oct 15, 2019, at 3:29 PM, Ellen M. Price <[email protected]>
>>> wrote:
>>>
>>> Thanks for the reply, Barry! Unfortunately, this is a particle code
>>> (SPH, specifically), so the particle neighbors, which influence the
>>> properties, change over time; the number of degrees of freedom is
>>> constant, as is the particle number. Any thoughts, given the new info?
>>> Or should I
>>> stick with explicit integration? I've seen explicit used most commonly,
>>> but, as I mentioned before, the optimal timestep that gives the error
>>> bounds I need is just too small for my application, and the only other
>>> thing I can think to try is to throw a lot more cores at the problem and
>>> wait.
>>>
>>> Ellen
>>>
>>>
>>> On 10/15/19 4:14 PM, Smith, Barry F. wrote:
>>>>
>>>> So you have a fixed "mesh" and fixed number of degrees of freedom for
>>>> the entire time, but the dependency of the function value at each
>>>> particular point on the neighboring points changes over time?
>>>>
>>>> For example, if you are doing upwinding and the direction changes:
>>>> where you used to use values from the right, you now use values from
>>>> the left?
>>>>
>>>> Independent of the coloring, just changing the locations in the matrix
>>>> where the entries are nonzero is expensive and painful. So what I would do
>>>> is build the initial Jacobian nonzero structure to contain all possible
>>>> connections, color that and then use that for the entire computation. At
>>>> each time step you will be working with some zero entries in the Jacobian
>>>> but that is ok, it is simpler and ultimately probably faster than trying
>>>> to keep changing the nonzero structure to "optimize" and only treat truly
>>>> nonzero values.
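>>>>
>>>> In outline (a sketch; error checking omitted, and FormFunction and user
>>>> are placeholders for your residual callback and context), the coloring
>>>> is computed once from the superset structure and then reused for every
>>>> finite difference Jacobian evaluation:
>>>>
>>>> MatColoring   mc;
>>>> ISColoring    iscoloring;
>>>> MatFDColoring fdcoloring;
>>>> /* J preallocated with ALL possible particle-pair connections */
>>>> MatColoringCreate(J, &mc);
>>>> MatColoringSetFromOptions(mc);
>>>> MatColoringApply(mc, &iscoloring);
>>>> MatColoringDestroy(&mc);
>>>> MatFDColoringCreate(J, iscoloring, &fdcoloring);
>>>> MatFDColoringSetFunction(fdcoloring,
>>>>                          (PetscErrorCode (*)(void))FormFunction, &user);
>>>> MatFDColoringSetFromOptions(fdcoloring);
>>>> MatFDColoringSetUp(J, iscoloring, fdcoloring);
>>>> ISColoringDestroy(&iscoloring);
>>>> /* reuse fdcoloring for all time steps */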
>>>>
>>>> For "stencil" type discretizations (finite difference, finite value)
>>>> what I describe should be fine. But if you problem is completely different
>>>> (I can't imagine how) and the Jacobian changes truly dramatically in
>>>> structure then what I suggest may not be appropriate but you'll need to
>>>> tell us your problem in that case so we can make suggestions.
>>>>
>>>> Barry
>>>>
>>>>
>>>>
>>>>> On Oct 15, 2019, at 2:56 PM, Ellen M. Price via petsc-users
>>>>> <[email protected]> wrote:
>>>>>
>>>>> Hi PETSc users!
>>>>>
>>>>> I have a problem that I am integrating with TS, and I think an implicit
>>>>> method would let me take larger timesteps. The Jacobian is difficult to
>>>>> compute, but, more importantly, the nonzero structure is changing with
>>>>> time, so even if I use coloring and finite differences to compute the
>>>>> actual values, I will have to update the coloring every time the
>>>>> Jacobian is recomputed.
>>>>>
>>>>> Has anyone done this before, and/or is there a correct way to tell TS to
>>>>> re-compute the coloring of the matrix before SNES actually computes the
>>>>> entries? Is this even a good idea, or is the coloring so expensive that
>>>>> this is foolish (in general -- I know the answer depends on the problem,
>>>>> but there may be a rule of thumb)?
>>>>>
>>>>> Thanks,
>>>>> Ellen Price
>>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their experiments
>> is infinitely more interesting than any results to which their experiments
>> lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>>
>> <mwe.c>
>