Using parMetis in petsc for ordering
Barry Smith wrote:

1) The PETSc LU and Cholesky solvers only run sequentially.

2) The parallel LU and Cholesky solvers PETSc interfaces to (SuperLU_DIST, MUMPS, Spooles, DSCPACK) do NOT accept an externally provided ordering. Hence we have no setup for doing parallel matrix orderings for factorizations, since we could not use them. We could allow calling a parallel ordering, but I'm not sure what it would be useful for.

Barry

OK, I see that I was looking in the wrong direction. But in ksp/examples/tutorials/ex10.c, MatPartitioning is applied to the linear system matrix, and I don't understand why. As I understand it, MatPartitioning partitions the graph built from the matrix (the vertices are the rows/columns, with an edge between i and j if a_ij or a_ji is nonzero). But to my mind, a good partitioning for solving a linear system with an iterative method should load-balance the nonzero values across the processes, so we would need vertex weights (the number of nonzeros in each row) to get a good partitioning. Do I have that right?

--
Dimitri Lecas
Using parMetis in petsc for ordering
Barry Smith wrote:

Dimitri,

No, I think this is not the correct way to look at things. Load balancing the original matrix is not necessarily a good thing for an LU factorization (in fact it is likely just to make the factorization have much more fill and require many more floating-point operations). Packages like SuperLU_DIST and MUMPS have their own internal ordering routines that are specifically designed to produce a good ordering for the parallel LU factorization; you should just let these solvers use them (which they do automatically).

Barry

I am no longer talking about doing an LU factorization, but about using an iterative method to solve the linear system, like BiCG, as in ex10. In this example I don't understand why MatPartitioning is used.

--
Dimitri Lecas
Using parMetis in petsc for ordering
On Thu, 11 Jan 2007, Dimitri Lecas wrote:

I am no longer talking about doing an LU factorization, but about using an iterative method to solve the linear system, like BiCG, as in ex10. In this example I don't understand why MatPartitioning is used.

Please rephrase the question. Are you asking why one should do the partitioning, or why one should not? And are you asking about the case where the matrix is read from disk, or generated in a parallel program?
Using parMetis in petsc for ordering
Barry Smith wrote:

Please rephrase the question. Are you asking why one should do the partitioning, or why one should not? And are you asking about the case where the matrix is read from disk, or generated in a parallel program?

I am trying to understand the point of calling MatPartitioning on a matrix before solving the linear system with that same matrix (as in ksp/examples/tutorials/ex10.c).

--
Dimitri Lecas
Using parMetis in petsc for ordering
Reordering a matrix can result in fewer iterations for an iterative solver.

Matt

On 1/11/07, Dimitri Lecas (dimitri.lecas at free.fr) wrote:

I am trying to understand the point of calling MatPartitioning on a matrix before solving the linear system with that same matrix (as in ksp/examples/tutorials/ex10.c).
Using parMetis in petsc for ordering
In parallel matrix-vector products (used by all the KSP methods) the amount of communication is proportional to the number of cut edges of the graph of the matrix. Repartitioning with Metis reduces the number of cut edges.

Note: we don't actually advocate doing it this way. One should partition the underlying grid (finite element, etc.) and then generate the matrix; if one does that, there is no need to repartition the matrix.

Barry

On Thu, 11 Jan 2007, Dimitri Lecas wrote:

I am trying to understand the point of calling MatPartitioning on a matrix before solving the linear system with that same matrix (as in ksp/examples/tutorials/ex10.c).
Using parMetis in petsc for ordering
Dimitri,

No, I think this is not the correct way to look at things. Load balancing the original matrix is not necessarily a good thing for an LU factorization (in fact it is likely just to make the factorization have much more fill and require many more floating-point operations). Packages like SuperLU_DIST and MUMPS have their own internal ordering routines that are specifically designed to produce a good ordering for the parallel LU factorization; you should just let these solvers use them (which they do automatically).

Barry

On Thu, 11 Jan 2007, Dimitri Lecas wrote:

To my mind, a good partitioning for solving a linear system with an iterative method should load-balance the nonzero values across the processes, so we would need vertex weights (the number of nonzeros in each row) to get a good partitioning. Do I have that right?
Using parMetis in petsc for ordering
Hello,

I have to test the ParMetis ordering for factorization, and I would like to know whether it is possible to use a user-supplied ordering. If I understand the manual correctly, I have to use MatOrderingRegisterDynamic and PCFactorSetMatOrdering, but the sentence "Currently we support orderings only for sequential matrices" in section 16.2 of the manual puzzles me. What does it mean?

Best regards

--
Dimitri Lecas
Using parMetis in petsc for ordering
1) The PETSc LU and Cholesky solvers only run sequentially.

2) The parallel LU and Cholesky solvers PETSc interfaces to (SuperLU_DIST, MUMPS, Spooles, DSCPACK) do NOT accept an externally provided ordering. Hence we have no setup for doing parallel matrix orderings for factorizations, since we could not use them. We could allow calling a parallel ordering, but I'm not sure what it would be useful for.

Barry

On Mon, 8 Jan 2007, Dimitri Lecas wrote:

I have to test the ParMetis ordering for factorization, and I would like to know whether it is possible to use a user-supplied ordering.
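For the sequential factorizations that PETSc itself provides, the ordering can simply be chosen at run time, which is usually easier than registering a new one with MatOrderingRegisterDynamic. A sketch of the relevant options, assuming a driver like ksp/examples/tutorials/ex10.c and a placeholder matrix file name (the option -pc_factor_mat_ordering_type and the ordering names come from the PETSc manual; matrix.petsc is hypothetical):

```shell
# sequential LU with PETSc's nested-dissection ordering
./ex10 -f0 matrix.petsc -ksp_type preonly -pc_type lu \
       -pc_factor_mat_ordering_type nd

# the same solve with RCM instead, to compare fill and timings
./ex10 -f0 matrix.petsc -ksp_type preonly -pc_type lu \
       -pc_factor_mat_ordering_type rcm
```

These are run-time configuration options, so switching orderings needs no recompilation; the parallel external packages, as noted above, ignore them and use their own internal orderings.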