Reordering a matrix can result in fewer iterations for an iterative solver.
Matt
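Matt's point can be illustrated with a stationary method such as Gauss-Seidel, whose sweep follows the unknown ordering directly (for Krylov methods the ordering usually matters through the preconditioner instead). A minimal pure-Python sketch, not PETSc code; the matrix and permutation are made up for illustration:

```python
# Illustration (not PETSc code): Gauss-Seidel sweeps the unknowns in
# order, so a symmetric permutation of the system can change the
# iteration count, even though the solution is the same.

def gauss_seidel(A, b, tol=1e-10, max_it=1000):
    """Return (solution, iterations) for Ax = b, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_it + 1):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        r = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(n)))
                for i in range(n))
        if r < tol:
            return x, it
    return x, max_it

def permute(A, b, p):
    """Symmetrically permute the system: rows/cols of A, entries of b."""
    n = len(p)
    Ap = [[A[p[i]][p[j]] for j in range(n)] for i in range(n)]
    bp = [b[p[i]] for i in range(n)]
    return Ap, bp

# Small SPD, strictly diagonally dominant system (so GS converges
# for any ordering).
A = [[10.0, 2.0, 3.0],
     [ 2.0, 8.0, 1.0],
     [ 3.0, 1.0, 6.0]]
b = [1.0, 2.0, 3.0]
p = [1, 2, 0]  # an arbitrary reordering of the unknowns

x1, it1 = gauss_seidel(A, b)
Ap, bp = permute(A, b, p)
x2, it2 = gauss_seidel(Ap, bp)

# Undo the permutation: x2[i] approximates x[p[i]].
x2_unpermuted = [0.0] * len(p)
for i, pi in enumerate(p):
    x2_unpermuted[pi] = x2[i]

print(it1, it2)  # iteration counts under the two orderings
```

Both runs recover the same solution; only the sweep order, and hence possibly the iteration count, differs.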
On 1/11/07, Dimitri Lecas dimitri.lecas at free.fr wrote:
In parallel matrix-vector products (used by all the KSP methods),
the amount of communication is proportional to the number of cut
edges of the graph of the matrix. Repartitioning with metis
reduces the number of cut edges.
Note: we don't actually advocate doing it this way. One should
partition the
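The cut-edge argument above can be sketched in a few lines (illustration only, not PETSc code; the graph and partitions are invented). Each nonzero A[i][j] whose row and column are owned by different processes forces a vector entry to be communicated during the mat-vec:

```python
# Sketch (not PETSc code): count the cut edges of a matrix graph under
# a partition.  In a parallel mat-vec, a nonzero A[i][j] with row i and
# column j owned by different processes requires x[j] to be sent over,
# so fewer cut edges means less communication.

def cut_edges(pattern, owner):
    """pattern: set of (i, j) nonzero positions;
    owner[i]: rank of the process owning row/entry i."""
    return sum(1 for (i, j) in pattern if i != j and owner[i] != owner[j])

# Tridiagonal pattern: the graph of a 6-point 1-D chain.
n = 6
pattern = {(i, i) for i in range(n)}
pattern |= {(i, i + 1) for i in range(n - 1)}
pattern |= {(i + 1, i) for i in range(n - 1)}

contiguous = [0, 0, 0, 1, 1, 1]   # cuts the chain once: 2 directed cut edges
interleaved = [0, 1, 0, 1, 0, 1]  # cuts every chain edge: 10 directed cut edges

print(cut_edges(pattern, contiguous), cut_edges(pattern, interleaved))
```

A contiguous partition of the chain cuts a single mesh edge, while an interleaved one cuts all of them, which is exactly the difference a graph partitioner is minimizing.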
Dimitri,
No, I think this is not the correct way to look at things. Load
balancing the original matrix is not necessarily a good thing for
doing an LU factorization (in fact it is likely just to make the LU
factorization have much more fill and require many more floating-point
operations).
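Barry's point that the ordering drives fill can be seen with the classic arrowhead example, via the symbolic "elimination game": eliminating the dense hub row first fills the factor completely, while eliminating it last produces no fill at all. A pure-Python sketch, not PETSc code:

```python
# Sketch: symbolic elimination ("elimination game") to count the fill
# produced by factoring a symmetric pattern under a given ordering.
# Eliminating a vertex connects all of its remaining neighbours
# pairwise; any edge added that way is fill in the factor.

def fill_in(adj, order):
    """Count fill edges created by eliminating vertices in `order`.
    adj: dict vertex -> set of neighbours (symmetric, no self-loops)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    eliminated = set()
    fill = 0
    for v in order:
        nbrs = [u for u in adj[v] if u not in eliminated]
        for a in nbrs:                 # connect remaining neighbours
            for c in nbrs:
                if a < c and c not in adj[a]:
                    adj[a].add(c)
                    adj[c].add(a)
                    fill += 1
        eliminated.add(v)
    return fill

# Arrowhead pattern: vertex 0 (the "hub") is connected to all others.
n = 6
arrow = {0: set(range(1, n))}
for v in range(1, n):
    arrow[v] = {0}

hub_first = fill_in(arrow, [0, 1, 2, 3, 4, 5])  # dense row eliminated first
hub_last = fill_in(arrow, [1, 2, 3, 4, 5, 0])   # dense row eliminated last

print(hub_first, hub_last)
```

Eliminating the hub first connects its five neighbours pairwise (10 fill edges, a fully dense factor); eliminating it last adds none. This is why fill-reducing orderings, not load balance, govern factorization cost.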
Hello,
I have to test the ParMetis ordering for factorization, and I would
like to know if it is possible to use a user-supplied ordering.
If I understand the manual correctly, I have to use
MatOrderingRegisterDynamic and PCFactorSetMatOrdering, but the sentence
Currently we support orderings only for
1) The PETSc LU and Cholesky solvers only run sequentially.
2) The parallel LU and Cholesky solvers PETSc interfaces to (SuperLU_dist,
MUMPS, Spooles, DSCPACK) do NOT accept an externally provided ordering.
Hence we do not have any setup for doing parallel matrix orderings.