On Mon, 29 May 2017 04:38:39 -0600, Jed Brown wrote:
> Pierre Jolivet <[email protected]> writes:
>> On 29 May 2017, at 06:49, Jed Brown <[email protected]> wrote:
>>> You can't get a callback from this "external Krylov solver"?
>> You mean, a callback to let the external solver use the memory
>> allocated by PETSc for the Mats? Unfortunately, I need the underlying
>> memory to be contiguous across all Mats, and I don't think that is
>> possible with the current PETSc routines for matrix creation.
> You can allocate the full block, then pass portions of the
> (column-aligned) array in MatCreateDense().
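
A minimal sketch of that suggestion (the sizes m, M, n1, n2, the two Mats, and the buffer name are placeholders, and error checking is omitted): allocate one contiguous block, then wrap column-aligned slices of it in separate dense Mats. MPIDense stores the m local rows of all global columns in column-major order, so a slice of full columns starts m * (number of preceding columns) scalars into the block.

PetscScalar *block;
Mat         X1, X2;
// one contiguous allocation backing both Mats
PetscMalloc1((size_t)m * (n1 + n2), &block);
// X1 wraps the first n1 columns, X2 the next n2, with no copies
MatCreateDense(PETSC_COMM_WORLD, m, PETSC_DECIDE, M, n1, block, &X1);
MatCreateDense(PETSC_COMM_WORLD, m, PETSC_DECIDE, M, n2, block + (size_t)m * n1, &X2);
// ... use X1 and X2 ...
MatDestroy(&X1);
MatDestroy(&X2);
PetscFree(block); // the Mats never take ownership of a user-provided array
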
So I was doing:
Mat C; // the product matrix; sizes m, n, M, N below are placeholders
MatCreate(PETSC_COMM_WORLD, &C);
MatSetSizes(C, m, n, M, N);
MatSetType(C, MATMPIDENSE);
MatMPIDenseSetPreallocation(C, NULL); // PETSc allocates the array
MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY);
MatMatMult(A, B, MAT_REUSE_MATRIX, PETSC_DEFAULT, &C);
and what you are suggesting is:
MatCreateDense(PETSC_COMM_WORLD, m, PETSC_DECIDE, M, N, data, &C); // data != nullptr, provided by the external solver
so that there is no need for MatAssemblyBegin/MatAssemblyEnd
http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/dense/mpi/mpidense.c.html#line1473
?
MatMatMult(A, B, MAT_REUSE_MATRIX, PETSC_DEFAULT, &C);
That makes perfect sense (assuming I'm not wrong about the
MatAssemblyBegin/MatAssemblyEnd). MatSetUpMultiply_MPIDense would still
be called at each iteration but I doubt this is too costly.
Thanks for the help.
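
For the record, here is what the revised per-iteration flow could look like under the assumptions above. x_data, y_data, m, M, and N are placeholders for the solver-owned buffers and sizes, A is taken to be square, and error checking is omitted. MatCreateDense() with a non-null array needs no assembly calls (per the mpidense.c link above), and MatMatMult() accepts MAT_REUSE_MATRIX with a user-created C in the special case where C is dense.

Mat X, Y;
// wrap the external solver's contiguous buffers; no copies are made
MatCreateDense(PETSC_COMM_WORLD, m, PETSC_DECIDE, M, N, x_data, &X);
MatCreateDense(PETSC_COMM_WORLD, m, PETSC_DECIDE, M, N, y_data, &Y);
MatMatMult(A, X, MAT_REUSE_MATRIX, PETSC_DEFAULT, &Y); // Y = A * X, written into y_data
MatDestroy(&X); // destroys the wrappers only, not the solver's buffers
MatDestroy(&Y);
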