Also, is there something like MatDensePlaceArray in a branch somewhere, or is it possible to wrap a MatDense around a Vec? I'm calling MatMatMult from an external Krylov solver (which does its own memory allocation), so at each iteration I either need to create new Mats with MatMPIDenseSetPreallocation, or use buffer Mats and go through MatDenseGetArray/MatDenseRestoreArray. Both approaches have drawbacks, I believe: the former needs MatAssemblyBegin/MatAssemblyEnd, which does a global reduction, and the latter wastes memory and needs copying from/to the memory allocated by PETSc. Both drawbacks would be avoided with something like MatDensePlaceArray.
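
To make the request concrete, here is a minimal sketch of the call sequence I have in mind. MatDensePlaceArray()/MatDenseResetArray() are the hypothetical routines I am asking about (mirroring VecPlaceArray()/VecResetArray() for vectors); everything else is existing PETSc API.

#include <petscmat.h>

/* C = A * W, where W wraps memory owned by the external Krylov solver.     */
/* MatDensePlaceArray()/MatDenseResetArray() are hypothetical; the rest is  */
/* existing PETSc API.                                                       */
static PetscErrorCode ApplyOnBuffer(Mat A, PetscInt m, PetscInt n, PetscInt M, PetscInt N,
                                    PetscScalar *krylov_buffer, Mat *C)
{
  Mat            W;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreateDense(PetscObjectComm((PetscObject)A), m, n, M, N, NULL, &W);CHKERRQ(ierr);
  ierr = MatDensePlaceArray(W, krylov_buffer);CHKERRQ(ierr);  /* hypothetical */
  ierr = MatMatMult(A, W, MAT_INITIAL_MATRIX, PETSC_DEFAULT, C);CHKERRQ(ierr);
  ierr = MatDenseResetArray(W);CHKERRQ(ierr);                 /* hypothetical */
  ierr = MatDestroy(&W);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

I know MatCreateDense() accepts a user-provided array at creation time, but as far as I can tell that array cannot be swapped afterwards, which is what I would need when the solver hands me a different buffer at each iteration.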

Thanks.

On Mon, 22 May 2017 08:11:30 +0200, Pierre Jolivet wrote:
On Sun, 21 May 2017 17:10:30 -0500, Barry Smith wrote:
On May 21, 2017, at 3:17 PM, Pierre Jolivet <[email protected]> wrote:

Hello,
I’m wondering if any branch has one or more of the following features (or if any of this is somewhere in the pipeline):
1) MKL PARDISO support for MPIAIJ
2) MKL PARDISO support for MPIBAIJ using iparm(37) https://software.intel.com/en-us/mkl-developer-reference-fortran-pardiso-iparm-parameter#IPARM37

  From: https://software.intel.com/en-us/mkl-developer-reference-fortran-sparse-blas-bsr-matrix-storage-format#GUID-9BD4DADE-6059-4042-BC2A-F41048A47953

values
A real array that contains the elements of the non-zero blocks of a
sparse matrix. The elements are stored block by block in row-major
order. A non-zero block is a block that contains at least one non-zero
element. All elements of non-zero blocks are stored, even if some of
them are equal to zero. Within each non-zero block the elements are
stored in column-major order in the case of one-based indexing, and in
row-major order in the case of zero-based indexing.

columns
Element i of the integer array columns is the number of the column in
the block matrix that contains the i-th non-zero block.

rowIndex
Element j of this integer array gives the index of the element in the
columns array that is first non-zero block in a row j of the block
matrix.
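
To make the two conventions concrete, here is a small example (mine, not taken from the MKL page): a 4x4 matrix with block size 2 and non-zero blocks at block positions (0,0), (0,1) and (1,1),

/*  | 1  2 | 5  6 |
    | 3  4 | 7  8 |
    |------+------|
    | 0  0 | 9 10 |
    | 0  0 |11 12 |  */

/* zero-based indexing: blocks stored row-major internally */
double values0[]   = {1, 2, 3, 4,   5, 6, 7, 8,   9, 10, 11, 12};
int    columns0[]  = {0, 1, 1};
int    rowIndex0[] = {0, 2, 3};

/* one-based indexing: blocks stored column-major internally */
double values1[]   = {1, 3, 2, 4,   5, 7, 6, 8,   9, 11, 10, 12};
int    columns1[]  = {1, 2, 2};
int    rowIndex1[] = {1, 3, 4};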

Unfortunately PETSc always stores the elements within each non-zero
block in column-major order, even though it uses zero-based indexing.
This means the matrix storage would need to be monkeyed with before
using MKL PARDISO. In order to reduce memory usage and excess copying,
I would make copies of the columns and rowIndex integer arrays, shift
them to one-based indices, and then pass these to MKL PARDISO using
the BSR format; this is straightforward for SeqBAIJ matrices. Like
Matt said, I don't know if MKL PARDISO is appropriate for MPI
matrices.
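
Roughly, the shift I am suggesting looks like this; just a sketch, where nb, ia and ja are placeholders for the number of block rows and the block-compressed row pointers / block column indices of the SeqBAIJ matrix (however you obtain them), not an actual PETSc interface:

#include <petscmat.h>

static PetscErrorCode ShiftToOneBased(PetscInt nb, const PetscInt *ia, const PetscInt *ja,
                                      PetscInt **ia1, PetscInt **ja1)
{
  PetscInt       i, nnzb = ia[nb];  /* number of non-zero blocks */
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMalloc1(nb + 1, ia1);CHKERRQ(ierr);
  ierr = PetscMalloc1(nnzb, ja1);CHKERRQ(ierr);
  for (i = 0; i <= nb; i++) (*ia1)[i] = ia[i] + 1;  /* rowIndex, one-based */
  for (i = 0; i < nnzb; i++) (*ja1)[i] = ja[i] + 1; /* columns,  one-based */
  PetscFunctionReturn(0);
}

The values array can then be passed to MKL PARDISO as is, since with one-based indexing MKL expects column-major blocks, which is exactly what PETSc stores.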

OK, I see. It is true that the MKL storage layout is somewhat
confusing, as it depends on the numbering used.
Concerning PARDISO, I got confused because Google only redirects to
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_PARDISO.html#MATSOLVERMKL_PARDISO
There is no such page as
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_CPARDISO.html#MATSOLVERMKL_CPARDISO
but MKL_CPARDISO is indeed listed at
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSolverPackage.html#MatSolverPackage,
thanks for the pointer, Stefano.


3) MatMatMult_MPIBAIJ_MPIDense

   We don't have any of the MatMatMult_XXXBAIJ_MPIDense() multiply
routines, but they should be pretty easy to add: just copy the
corresponding AIJ routines and make the appropriate adjustments to
take into account the blocks.

Yes, I've started copying most of the MatMatMult_XXXAIJ_XXXDense()
routines. I was just wondering whether someone had already done that
or had tried to use specialized libraries like libxsmm, but it looks
like I'll have to do it myself.
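
For reference, here is roughly the sequential kernel I have in mind (only a sketch with placeholder names, not the actual PETSc routine; it assumes PETSc's column-major block storage with zero-based ia/ja, B and C dense in column-major order with leading dimensions ldb/ldc, and C zeroed beforehand):

#include <petscsys.h>

static void BAIJTimesDenseKernel(PetscInt mb, PetscInt bs, PetscInt N,
                                 const PetscInt *ia, const PetscInt *ja, const PetscScalar *a,
                                 const PetscScalar *B, PetscInt ldb,
                                 PetscScalar *C, PetscInt ldc)
{
  PetscInt i, k, j, r, c;

  for (i = 0; i < mb; i++) {                            /* block rows                  */
    for (k = ia[i]; k < ia[i + 1]; k++) {               /* non-zero blocks of row i    */
      const PetscScalar *blk = a + k * bs * bs;         /* column-major bs x bs block  */
      const PetscInt     col = ja[k] * bs;              /* first point column of block */
      for (j = 0; j < N; j++) {                         /* columns of B and C          */
        for (c = 0; c < bs; c++) {
          const PetscScalar bval = B[col + c + j * ldb];
          for (r = 0; r < bs; r++)
            C[i * bs + r + j * ldc] += blk[c * bs + r] * bval;
        }
      }
    }
  }
}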

Thanks for your input!


Thanks in advance,
Pierre
