Marius:
> Thanks a lot. I did not test it thoroughly, but it seems to work well and is
> really helpful. I have one question: is it necessary to do the MatTranspose
> step as in ex214.c? I thought this is handled internally by PETSc.
>
MUMPS requires sparse compressed COLUMN format on the host for the sparse
right-hand side, while PETSc stores AIJ matrices in compressed row format,
so the MatTranspose step in ex214.c is what provides MUMPS with the columns
of the right-hand side.
Marius:
I added support for parallel sparse RHS (in host) for MatMatSolve_MUMPS()
https://bitbucket.org/petsc/petsc/commits/2b691707dd0cf456c808def006e14b6f56b364b6
It is in the branch hzhang/mumps-spRHS.
You may test it.
I'll further clean up the routine, test it, then merge it into PETSc.
Hong
On Mon, Jun 4, 2018 at 1:03 PM, Jean-Yves L'Excellent <
jean-yves.l.excell...@ens-lyon.fr> wrote:
>
> Thanks for the details of your needs.
>
> For the first application, the sparse RHS feature with distributed
> solution should effectively be fine.
>
> For the second one, a future distributed RHS feature (not currently
> available in MUMPS) might help if the centralized sparse RHS is too
> memory-consuming.
>
I'll add parallel support of this feature in PETSc.
The second application I have in mind is solving a system of the form AX=B,
where A and B are sparse and B is given by a block matrix of the form
B = [B1 0; 0 0], where B1 is dense but its dimension is (much) smaller than
that of the whole matrix B.
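Written out, my reading of that block right-hand side (with the dense block
B_1 much smaller than the whole of B) is:

  B = \begin{pmatrix} B_1 & 0 \\ 0 & 0 \end{pmatrix}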
I want to invert a rather large sparse matrix. For this, using a sparse RHS
with centralized input would be OK as long as the solution is distributed.
Marius:
Current PETSc interface supports sequential sparse multiple right-hand
side, but not distributed.
It turns out that MUMPS does not support distributed sparse multiple
right-hand sides at the moment (see attached email).
Jean-Yves invites you to communicate with him directly.
Let me know
Thanks a lot guys, very helpful.
I see MUMPS http://mumps.enseeiht.fr/ lists:
- *Sparse multiple right-hand side, distributed solution*; Exploitation
of sparsity in the right-hand sides
PETSc interface computes the MUMPS *distributed solution* as default (this is
not new) (ICNTL(21) = 1).
I will add support for *Sparse multiple right-hand side*.
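For reference, MUMPS control values can be set from code through the factor
matrix; a rough sketch following the MatMumpsSetIcntl() pattern (the matrix A
is assumed assembled elsewhere, error checking is omitted, and the interface
already defaults ICNTL(21) as noted above, so this only shows where such
controls live):

  Mat A;        /* the assembled system matrix (created elsewhere) */
  KSP ksp;
  PC  pc;
  Mat F;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPPREONLY);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);
  PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);   /* *MatSolverPackage* variants in pre-3.9 releases */
  PCFactorSetUpMatSolverType(pc);                 /* creates the factor matrix */
  PCFactorGetMatrix(pc, &F);
  MatMumpsSetIcntl(F, 21, 1);                     /* ICNTL(21) = 1: distributed solution */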
Hong,
Can you see about adding support for distributed right hand side?
Thanks
Barry
> On May 31, 2018, at 2:37 AM, Marius Buerkle wrote:
>
> The fix for MAT_NEW_NONZERO_LOCATIONS, thanks again.
>
> I have yet another question, sorry. The recent version of MUMPS supports
> distributed and sparse RHS; is there any chance that this will be supported
> in PETSc in the near future?
The current version of the MUMPS code in PETSc supports sparse right-hand
sides for sequential solvers. You can call MatMatSolve(A,B,X) with the
right-hand side B of type MATTRANSPOSE, with the inner matrix being a
MATSEQAIJ.
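A minimal sketch of that usage, loosely following ex214.c (names, sizes, and
preallocation are placeholders; error checking omitted). The inner SEQAIJ
stores B^T, whose rows are exactly the columns of B that MUMPS expects in
compressed-column form:

  Mat           A, F, spRHST, spRHS, X;
  IS            perm, iperm;
  MatFactorInfo info;
  PetscInt      n = 100, nrhs = 2;     /* placeholder sizes */

  /* ... create and assemble the n x n SEQAIJ matrix A ... */

  /* LU factorization with MUMPS */
  MatGetFactor(A, MATSOLVERMUMPS, MAT_FACTOR_LU, &F);
  MatGetOrdering(A, MATORDERINGNATURAL, &perm, &iperm);
  MatFactorInfoInitialize(&info);
  MatLUFactorSymbolic(F, A, perm, iperm, &info);
  MatLUFactorNumeric(F, A, &info);

  /* Sparse right-hand side: store B^T (nrhs x n) as a SEQAIJ and wrap it in
     a MATTRANSPOSE; the wrapper logically represents B, and the rows of B^T
     are the columns of B */
  MatCreateSeqAIJ(PETSC_COMM_SELF, nrhs, n, 2, NULL, &spRHST);
  /* ... MatSetValues() on spRHST, then MatAssemblyBegin/End ... */
  MatCreateTranspose(spRHST, &spRHS);

  /* Dense solution matrix, one column per right-hand side */
  MatCreateSeqDense(PETSC_COMM_SELF, n, nrhs, NULL, &X);
  MatMatSolve(F, spRHS, X);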
The fix for MAT_NEW_NONZERO_LOCATIONS, thanks again.
I have yet another question, sorry. The recent version of MUMPS supports
distributed and sparse RHS; is there any chance that this will be supported
in PETSc in the near future?
Thanks for the quick fix, I will test it and report back.
I have another, maybe related, question: if MAT_NEW_NONZERO_LOCATIONS is true
and, let's say, 1 new nonzero position is created, it does not allocate 1 but
several new nonzeros and only uses 1. I think that is normal, right? But, at least as
Fixed in the branch barry/fix-mat-new-nonzero-locations/maint.
Once this passes testing it will go into the maint branch and then the next
patch release, but you can use it now in the branch
barry/fix-mat-new-nonzero-locations/maint.
Thanks for the report and reproducible example.
Please send the complete error message, the type of matrix used, etc.
Ideally, code that demonstrates the problem.
Barry
> On May 29, 2018, at 3:31 AM, Marius Buerkle wrote:
>
>
> Hi,
>
> I tried to set MAT_NEW_NONZERO_LOCATIONS to false, as far as I understood
> MatSetValues should simply
Hi,
I tried to set MAT_NEW_NONZERO_LOCATIONS to false; as far as I understood,
MatSetValues should then simply ignore entries which would give rise to new
nonzero values, not creating a new entry and not causing an error, but I get
"[1]PETSC ERROR: Inserting a new nonzero at global row/column". Is
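A stripped-down sketch of the situation being described (placeholder matrix
and indices; the actual report appears to involve a parallel matrix, given
the [1] rank in the error):

  Mat         A;
  PetscScalar v = 1.0;
  PetscInt    i = 0, j = 5;   /* assume (0,5) is not in the existing nonzero pattern */

  /* ... create, preallocate, and assemble A ... */

  /* Freeze the pattern: new nonzero locations should be dropped silently */
  MatSetOption(A, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE);

  /* Insert outside the pattern; the expectation is that the entry is ignored,
     while the reported behaviour is the "Inserting a new nonzero ..." error */
  MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);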