Using ParMETIS in PETSc for ordering

2007-01-11 Thread Dimitri Lecas
Barry Smith wrote:
   1) The PETSc LU and Cholesky solvers only run sequentially.
   2) The parallel LU and Cholesky solvers PETSc interfaces to (SuperLU_dist,
  MUMPS, Spooles, DSCPACK) do NOT accept an external ordering provided for
  them.

Hence we do not have any setup for doing parallel matrix orderings for
 factorizations, since we cannot use them. We could allow calling a parallel
 ordering, but I'm not sure what it would be useful for.

Barry

   
OK, I see that I was looking in the wrong direction.

But in ksp/examples/tutorials/ex10.c, partitioning is used on the
linear system matrix, and I don't understand why.

What I understand is this: with MatPartitioning we try to partition the
graph built from the matrix (the vertices are the rows/columns, and there
is an edge between i and j if a_ij or a_ji is nonzero). But to my mind, a
good partitioning for solving a linear system with an iterative algorithm
should load balance the nonzero values across the processors, so we would
have to use weights (the number of nonzero values in each row) to get a
good partitioning.
Do I have it right?

-- 
Dimitri Lecas
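
For illustration, the weighting Dimitri describes might look like the
following C sketch. This is not taken from ex10; the variable names are
invented, error checking is omitted, and the exact calls should be checked
against the PETSc version in use.

  /* Sketch: weight each vertex (row) by its nonzero count before
     partitioning.  A is an assembled parallel AIJ matrix created
     elsewhere. */
  Mat             A;
  MatPartitioning part;
  IS              is;        /* target process for each local row */
  PetscInt        rstart, rend, i, ncols, *wgts;

  MatGetOwnershipRange(A, &rstart, &rend);
  PetscMalloc((rend - rstart) * sizeof(PetscInt), &wgts);
  for (i = rstart; i < rend; i++) {
    MatGetRow(A, i, &ncols, PETSC_NULL, PETSC_NULL); /* nonzeros in row i */
    wgts[i - rstart] = ncols;
    MatRestoreRow(A, i, &ncols, PETSC_NULL, PETSC_NULL);
  }
  MatPartitioningCreate(PETSC_COMM_WORLD, &part);
  MatPartitioningSetAdjacency(part, A);
  MatPartitioningSetVertexWeights(part, wgts); /* takes ownership of wgts */
  MatPartitioningApply(part, &is);

Whether such weighting actually helps depends on the solver: it balances
the local work in MatMult, but the communication volume (the edge cut)
matters as well, as Barry notes later in this thread.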




undefined reference to ....

2007-01-11 Thread Ben Tay
Yes, it ran successfully. I've attached the output.

Thank you very much.


On 1/11/07, Satish Balay balay at mcs.anl.gov wrote:

 Do PETSc examples work?

 Send us the output from

 make test

 Satish

 On Wed, 10 Jan 2007, Ben Tay wrote:

  Hi,
 
  I have a very simple Fortran code. It compiles on a 32-bit system with
 MKL with no errors, but on em64t it gives an "undefined reference to ..."
 error.

  It works when I compile with the supplied BLAS/LAPACK. However, if I use
  Intel MKL, it gives the error stated above.
 
  My code is
 
  global.F
 
module global_data

  implicit none

  save

#include "include/finclude/petsc.h"
#include "include/finclude/petscvec.h"
#include "include/finclude/petscmat.h"
#include "include/finclude/petscksp.h"
#include "include/finclude/petscpc.h"
#include "include/finclude/petscmat.h90"

  integer :: i,j,k

  Vec xx,b_rhs,xx_uv,b_rhs_uv   ! solution vector, right hand side vector and work vector

  Mat A_mat,A_mat_uv            ! sparse matrix

end module global_data
 
  main.f90

program ns2d_c

  use global_data

  implicit none

  integer :: ierr

  i=1

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

  call MatCreateSeqAIJ(PETSC_COMM_SELF,9,9,9,PETSC_NULL_INTEGER,A_mat,ierr)

end program ns2d_c
 
 
  The error message is
 
  /tmp/ifort0JBYUf.o(.text+0x46): In function `ns2d_c':
  /nfs/home/enduser/g0306332/test/main.F:11: undefined reference to
  `petscinitialize_'
 
 /tmp/ifort0JBYUf.o(.text+0xaf):/nfs/home/enduser/g0306332/test/main.F:13:
  undefined reference to `matcreateseqaij_'
 
  The compile commands, which I adapted from "make ex1f", are
 
  ifort -132 -fPIC -g -c
  -I/nfs/lsftmp/g0306332/petsc-2.3.2-p8
  -I/nfs/lsftmp/g0306332/petsc-2.3.2-p8/bmake/l64-nompi-noshared
  -I/nfs/lsftmp/g0306332/petsc-2.3.2-p8/include
  -I/nfs/lsftmp/g0306332/petsc-2.3.2-p8/include/mpiuni global.F
 
  ifort  -fPIC -g
  -Wl,-rpath,/nfs/lsftmp/g0306332/petsc-2.3.2-p8/lib/l64-nompi-noshared
  -L/nfs/lsftmp/g0306332/petsc-2.3.2-p8/lib/l64-nompi-noshared -lpetscksp
  -lpetscdm -lpetscmat -lpetscvec -lpetsc
  -Wl,-rpath,/nfs/lsftmp/g0306332/petsc-2.3.2-p8/lib/l64-nompi-noshared
  -L/nfs/lsftmp/g0306332/petsc-2.3.2-p8/lib/l64-nompi-noshared -lmpiuni
  -Wl,-rpath,/lsftmp/g0306332/inter/mkl/lib/em64t
  -L/lsftmp/g0306332/inter/mkl/lib/em64t -lmkl_lapack -lmkl_em64t -lguide
  -lpthread -ldl -Wl,-rpath,/usr/local/intel/cce9.0/lib
  -L/usr/local/intel/cce9.0/lib
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -lsvml -limf
  -lirc -lgcc_s -lirc_s -Wl,-rpath,/usr/local/intel/cce9.0/lib
  -Wl,-rpath,/usr/local/intel/cce9.0/lib -L/usr/local/intel/cce9.0/lib
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -Wl,-rpath,/usr/local/intel/fc9.0/lib -L/usr/local/intel/fc9.0/lib
 -lifport
  -lifcore -lm -Wl,-rpath,/usr/local/intel/cce9.0/lib
  -Wl,-rpath,/usr/local/intel/cce9.0/lib -L/usr/local/intel/cce9.0/lib
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -lm  -ldl
  -Wl,-rpath,/usr/local/intel/cce9.0/lib -L/usr/local/intel/cce9.0/lib
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/
  -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
  -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -lsvml -limf
  -lirc -lgcc_s -lirc_s -ldl  -o a.out global.o  main.f90
 
  I have tried both shared and static libraries. I wonder if it is a problem
  with MKL em64t or whether there's something wrong with my code/compilation.
 
 
 
  Thank you.
 


[Attachments scrubbed by the list archive: an HTML version of this message,
ex5f_1.out, and ex5f_1.testout]




Using ParMETIS in PETSc for ordering

2007-01-11 Thread Dimitri Lecas
Barry Smith wrote:
   Dimitri,

No, I think this is not the correct way to look at things. Load
 balancing the original matrix is not necessarily a good thing for
 doing an LU factorization (in fact it is likely just to make the LU
 factorization have much more fill and require many more floating-point
 operations).

   Packages like SuperLU_dist and MUMPS have their own internal ordering
 routines that are specifically for getting a good ordering for doing
 the parallel LU factorization; you should just have these solvers
 use them (which they do automatically).

Barry


 On Thu, 11 Jan 2007, Dimitri Lecas wrote:

   
I'm no longer talking about doing LU factorization, but about using an
iterative method, like BiCG, to solve a linear system, as in ex10. In this
example I don't understand why MatPartitioning is used.

-- 
Dimitri Lecas
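
A hedged sketch (in C) of what Barry describes above, i.e. letting
SuperLU_dist or MUMPS apply its own internal ordering. Option and type
names changed across PETSc versions, so these spellings are indicative
only; A, b, and x are assumed to be created and assembled elsewhere.

  /* Sketch: solve with a parallel direct solver that supplies its own
     ordering.  In the 2.3.x era the package was typically selected via
     the matrix type when A is created or loaded, e.g. on the command
     line:
       -ksp_type preonly -pc_type lu -mat_type superlu_dist */
  Mat A;
  Vec b, x;
  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);      /* the external package does the ordering */
  KSPSetFromOptions(ksp);   /* picks up -ksp_type / -pc_type options */
  KSPSolve(ksp, b, x);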




Using ParMETIS in PETSc for ordering

2007-01-11 Thread Barry Smith

  

On Thu, 11 Jan 2007, Dimitri Lecas wrote:

 [...]

 I'm no longer talking about doing LU factorization, but about using an
 iterative method, like BiCG, to solve a linear system, as in ex10. In this
 example I don't understand why MatPartitioning is used.

  Please rephrase the question. Are you asking why one should do the
partitioning, or why one should not? Are you asking about the case where
the matrix is read from disk, or generated in a parallel program?

 
 


Using ParMETIS in PETSc for ordering

2007-01-11 Thread Dimitri Lecas
Barry Smith wrote:
   

 On Thu, 11 Jan 2007, Dimitri Lecas wrote:

 [...]

   Please rephrase the question. Are you asking why one should do the
 partitioning, or why one should not? Are you asking about the case where
 the matrix is read from disk, or generated in a parallel program?

   
I am trying to understand the point of calling MatPartitioning before
solving the linear system with the same matrix (as in
ksp/examples/tutorials/ex10.c).

-- 
Dimitri Lecas




Visual Studio compiler and PETSc

2007-01-11 Thread Satish Balay
We don't have prebuilt binaries.

Suggest configuring with:

config/configure.py --with-cc='win32fe cl' --with-cxx='win32fe cl' --with-fc=0 
--with-clanguage=cxx --download-c-blas-lapack=1

If you encounter problems - send us configure.log at
petsc-maint at mcs.anl.gov

Satish
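
Once configure and make complete, a minimal C smoke test (illustrative
only, not one of the shipped examples) can confirm that compiling and
linking against the build work:

  /* smoke.c: initialize and finalize PETSc; if this links and runs,
     the installation is usable */
  #include "petsc.h"

  int main(int argc, char **argv)
  {
    PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
    PetscPrintf(PETSC_COMM_WORLD, "PETSc initialized successfully\n");
    PetscFinalize();
    return 0;
  }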


On Fri, 12 Jan 2007, Tan Meng YUE wrote:

 Hi,
 
   I have tried out PETSc on OS X and Linux in the past.
 
   My current place of work is a Windows-only environment with a small
 (fewer than 1000) cluster of PCs/blades.
 
   I'd like to demonstrate PETSc with MPICH2 on Windows to the developers
 here working on some fluid simulation code for digital visual effects.
 
   Are there any deployable binaries for PETSc (we only use C++, no
 Fortran)?
 
   I have tried compiling PETSc with Cygwin but ran into various
 difficulties, like BLAS/LAPACK, and when that was fixed, other problems
 cropped up.
 
   I have already installed MPICH2.
 
 Regards
 
 Cheers
 --
 http://www.proceduralinsight.com/
 
 




Using ParMETIS in PETSc for ordering

2007-01-11 Thread Matthew Knepley
Reordering a matrix can result in fewer iterations for an iterative solver.

  Matt
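
As a hedged C sketch of what that can look like; the ordering-type
constant is spelled as in the 2.3.x headers and may differ in other
versions, and A is assumed to be assembled elsewhere:

  /* Sketch: reduce bandwidth with a Reverse Cuthill-McKee ordering and
     then work with the permuted system.  Error checking omitted. */
  Mat A, Aperm;
  IS  rperm, cperm;

  MatGetOrdering(A, MATORDERING_RCM, &rperm, &cperm);
  MatPermute(A, rperm, cperm, &Aperm);
  /* permute the right-hand side to match (e.g. with VecPermute) before
     solving with Aperm, and undo the permutation on the solution */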

On 1/11/07, Dimitri Lecas dimitri.lecas at free.fr wrote:
 [...]
 I am trying to understand the point of calling MatPartitioning before
 solving the linear system with the same matrix (as in
 ksp/examples/tutorials/ex10.c).

 --
 Dimitri Lecas




-- 
One trouble is that despite this system, anyone who reads journals widely
and critically is forced to realize that there are scarcely any bars to eventual
publication. There seems to be no study too fragmented, no hypothesis too
trivial, no literature citation too biased or too egotistical, no design too
warped, no methodology too bungled, no presentation of results too
inaccurate, too obscure, and too contradictory, no analysis too self-serving,
no argument too circular, no conclusions too trifling or too unjustified, and
no grammar and syntax too offensive for a paper to end up in print. --
Drummond Rennie




Using ParMETIS in PETSc for ordering

2007-01-11 Thread Barry Smith

  In parallel matrix-vector products (used by all the KSP methods)
the amount of communication is proportional to the number of cut edges of
the graph of the matrix. Repartitioning with ParMETIS reduces the number
of cut edges.

  Note: we don't actually advocate doing it this way. One should
partition the underlying grid (finite element, etc.) and then generate
the matrix; if one does that, there is no need to repartition the matrix.

  Barry
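
A hedged C sketch of the flow Barry describes, loosely following
ksp/examples/tutorials/ex10.c; the actual redistribution step is elided,
error checking is omitted, and A is assumed to be a loaded, assembled
matrix:

  /* Sketch: repartition a loaded matrix A to reduce cut edges before
     iterating. */
  MatPartitioning part;
  IS              is, isn;

  MatPartitioningCreate(PETSC_COMM_WORLD, &part);
  MatPartitioningSetAdjacency(part, A);
  MatPartitioningSetFromOptions(part); /* e.g. -mat_partitioning_type parmetis */
  MatPartitioningApply(part, &is);     /* target process for each local row */
  ISPartitioningToNumbering(is, &isn); /* new contiguous global numbering */
  /* ex10 then moves the rows of A (and the entries of the right-hand
     side) into the new numbering before calling KSPSolve */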


On Thu, 11 Jan 2007, Dimitri Lecas wrote:

 [...]

 I am trying to understand the point of calling MatPartitioning before
 solving the linear system with the same matrix (as in
 ksp/examples/tutorials/ex10.c).