Dario,
Your discussion below (SOR, ILU(200)) seems to imply that you are
providing some actual explicit representation of the Jacobian, not just doing
something completely matrix free. Is this correct? But the PETSc MatMFFD() is
completely matrix free; it provides only a matrix-vector product and no access
to the matrix entries, hence I am slightly confused.
    If you wish to use for the Jacobian something like D + J and do it
completely matrix free then, rather than monkeying with “changing the
function”, I would use the “correct” function to compute J x with a
matrix-free multiply and then apply the D as an additional operation. Hence
you would do something like
typedef struct { /* data structure to store the usual matrix free matrix
and the additional diagonal matrix */
Mat mf;
Vec D;
Vec work;
} MyJContext;
MyJContext myJctx;
Mat myJ;   /* shell matrix that applies y = J x + D x */
MatCreateSNESMF(snes,&myJctx.mf); /* create the usual MFFD matrix using
the real nonlinear function */
MatMFFDSetFunction(myJctx.mf,yournonlinearfunction,nonlinearfunctionctx);
VecCreate(comm,&myJctx.D);
/* set the correct sizes for D and fill up with your diagonal matrix
entries */
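/* for example, roughly like this; nlocal, rstart, rend, i and the diag[]
   values here are placeholders for whatever your discretization provides: */
VecSetSizes(myJctx.D,nlocal,PETSC_DECIDE);
VecSetFromOptions(myJctx.D);
VecGetOwnershipRange(myJctx.D,&rstart,&rend);
for (i=rstart; i<rend; i++) VecSetValue(myJctx.D,i,diag[i-rstart],INSERT_VALUES);
VecAssemblyBegin(myJctx.D);
VecAssemblyEnd(myJctx.D);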
VecDuplicate(myJctx.D,&myJctx.work);
MatCreateShell(comm,…,&myJ);
MatShellSetOperation(myJ,MATOP_MULT,(void (*)(void))mymultiply);
MatShellSetContext(myJ,&myJctx);
SNESSetJacobian(snes,myJ,myJ, myJFunction,NULL);
where
PetscErrorCode mymultiply(Mat A,Vec x,Vec y) /* computes y = J x + D x */
{
MyJContext *myJctx;
MatShellGetContext(A,&myJctx);
MatMult(myJctx->mf,x,y);                     /* y    = J x (by differencing) */
VecPointwiseMult(myJctx->work,myJctx->D,x);  /* work = D x */
VecAXPY(y,1.0,myJctx->work);                 /* y    = y + work */
return 0;
}
and
PetscErrorCode myJFunction(SNES snes,Vec x,Mat *A,Mat *B,MatStructure
*str,void* ctx)
/* this is called for each new “Jacobian” to set the point at which it is
computed */
{
MyJContext *myJctx;
Vec f;
MatShellGetContext(*A,&myJctx);
SNESGetFunction(snes,&f,NULL,NULL);
MatMFFDSetBase(myJctx->mf,x,f);
/* change the D entries if they depend on the current solution etc */
return 0;
}
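With all of that in place you solve as usual, for example (x being your
solution vector)

   SNESSetFromOptions(snes);
   SNESSolve(snes,NULL,x);

and you can watch the linear and nonlinear convergence with -snes_monitor
and -ksp_monitor.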
    Sorry, now that I have typed it out it looks a bit more complicated than
it really is. It does what you want but without any trickery or confusing code.
But, of course, since it is completely matrix free you cannot use SOR
with it. Of course by making D suitably large you can make it as well
conditioned as you want and thus get rapid linear convergence (though that may
slow down or ruin the nonlinear convergence).
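Note that the preconditioner then has to be one that needs only the action
of the operator; for example you could explicitly turn preconditioning off
with something like

   SNESGetKSP(snes,&ksp);
   KSPGetPC(ksp,&pc);
   PCSetType(pc,PCNONE);

(ksp and pc being a KSP and PC you declare) or just run with -pc_type none.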
Hope this helps,
Barry
On Jan 3, 2014, at 12:18 PM, Dario Isola <[email protected]> wrote:
> Dear, Barry and Jed,
>
> Thanks for your replies.
>
> We understand your doubts, so let me put our question into context. In CFD
> it is standard practice to solve the non-linear conservation equations for
> steady flows by means of an inexact Newton method. The original Jacobian
> matrix is modified by adding terms on the diagonal which are proportional to
> the Courant number and to the lumped mass matrix. This allows us to obtain
> two things: "relax" the solution update and increase the diagonal dominance
> of the matrix itself.
>
> The latter is key when simple preconditioners are adopted, in our case point
> Jacobi or SOR. Indeed, if the original matrix were to be used, the GMRES
> method would converge only on very regular meshes and only when adopting ILU
> preconditioners with a very high level of fill-in. As a result, a higher
> number of non-linear iterations is traded for a simpler linear system to be
> solved.
>
> While exploring the SNES+MF capabilities we found out that we could
> successfully solve the linear system only with ILU(200) or so. Of course we
> do not want to touch the function used to evaluate the residual, which
> determines the final solution. However, we think that a suitable modification
> of the function that PETSc differences to compute the matrix-vector product
> would allow us to obtain a behavior similar to the inexact Newton method.
>
> Best regards,
> Dario
>
>
> On 01/03/2014 12:32 PM, Song Gao wrote:
>>
>>
>> ---------- Forwarded message ----------
>> From: Jed Brown <[email protected]>
>> Date: Thu, Jan 2, 2014 at 10:20 AM
>> Subject: Re: [petsc-users] SNESSetFunction and MatMFFDSetFunction
>> To: Song Gao <[email protected]>, Barry Smith <[email protected]>
>> Cc: petsc-users <[email protected]>
>>
>>
>> Song Gao <[email protected]> writes:
>>
>> > Thanks, Barry.
>> >
>> > I mean 2) providing a function that I want PETSc to difference to evaluate
>> > the matrix vector product.
>> >
>> > I want to make a slight modification of the matrix after PETSc evaluate the
>> > matrix vector product.
>>
>> Performing a matrix-vector product is not supposed to modify the matrix.
>> It's unlikely that you really want this.
>>
>>
>> On Wed, Jan 1, 2014 at 3:01 PM, Barry Smith <[email protected]> wrote:
>>
>> On Jan 1, 2014, at 11:09 AM, Song Gao <[email protected]> wrote:
>>
>> > Dear all,
>> >
>> > Happy new year!
>> >
>> > I'm using the matrix-free method to solve NS equations. I call the
>> > SNESSetFunction to set the RHS function. I think SNES also uses that RHS
>> > function to evaluate the matrix vector product.
>>
>> Yes, PETSc differences this function to evaluate the matrix vector
>> product.
>>
>> >
>> > But I want to set one function to evaluate the residual, and another
>> > different function to evaluate the matrix vector product.
>>
>> Are you providing a function that
>>
>> 1) actually evaluates the matrix vector product or are you
>>
>> 2) providing a function that you want PETSc to difference to evaluate
>> the matrix vector product?
>>
>> > How can I do that? Does MatMFFDSetFunction do this job?
>>
>>    For 2) yes, but if the function you provide is different from the
>> function provided with SNESSetFunction() then the matrix-vector product
>> obtained from differencing it will not be the “correct” Jacobian for the
>> SNESSetFunction() you are providing, so I don’t see why you would do it.
>>
>> For 1) you should use MatCreateShell() and
>> MatShellSetOperation(mat,MATOP_MULT, ….) and then pass that matrix to
>> SNESSetJacobian(), then PETSc will use that “matrix” to do its matrix-vector
>> products.
>>
>> Barry
>>
>>
>>
>> >
>> > Any suggestion is appreciated. Thank you.
>> >
>> >
>> > Song
>>
>