PaStiX crash

2009-10-22 Thread Andreas Grassl
Hi Barry,

here again with line numbers:

http://pastebin.com/m630324e

I noticed that the PaStiX version compiled with only '-g' gives no errors.

Hope this output now helps for debugging.

Cheers,

ando

Barry Smith wrote:
 
I think if you compile all the code (including Scotch) with the -g
 option, as Satish suggested, then it should show the exact line numbers in the
 source code where the corruption occurs, and you could report it to the
 Scotch developers. As it is, without the line numbers it may be difficult
 for the Scotch developers to determine the problem.
 
 
Barry
 
 On Oct 21, 2009, at 10:49 AM, Andreas Grassl wrote:
 
 Satish Balay wrote:
 Perhaps you can try running in valgrind to see where the problem is.

 You can also try --with-debugging=0 COPTFLAGS='-g -O' - and see if it
 crashes.  If so - run in a debugger to determine the problem.

 here you find the output of valgrind:

 http://pastebin.com/m16478dcf

 It seems the problem is around the Scotch library. Trying to substitute the
 library with the working version from the debugging branch did not work, and I
 found no option to change the ordering algorithm to, e.g., the (Par)METIS
 installed for MUMPS.

 Any ideas?

 Cheers,

 ando


 Satish

 On Tue, 20 Oct 2009, Andreas Grassl wrote:

 Hello,

 I wanted to use PaStiX and have the problem that the debugging version
 works, while PETSc compiled with the option --with-debugging=0 gives the
 following error:

 What could be wrong?


 ++
 +  PaStiX : Parallel Sparse matriX package   +
 ++
 Matrix size  7166 x 7166
 Number of nonzeros   177831
 ++
 +  Options   +
 ++
   Version :  exported
   SMP_SOPALIN :  Defined
   VERSION MPI :  Defined
   PASTIX_BUBBLE   :  Not defined
   STATS_SOPALIN   :  Not defined
   NAPA_SOPALIN:  Defined
   TEST_IRECV  :  Not defined
   TEST_ISEND  :  Defined
   THREAD_COMM :  Not defined
   THREAD_FUNNELED :  Not defined
   TAG :  Exact Thread
   FORCE_CONSO :  Not defined
   RECV_FANIN_OR_BLOCK :  Not defined
   OUT_OF_CORE :  Not defined
   DISTRIBUTED :  Not defined
   FLUIDBOX:  Not defined
   METIS   :  Not defined
   INTEGER TYPE:  int32_t
   FLOAT TYPE  :  double
 ++
  Check : ordering        OK
  Check : Sort CSC        OK
 [0]PETSC ERROR: ------------------------------------------------------------------------
 [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
 [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
 [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
 [0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors
 [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
 [0]PETSC ERROR: to get more information on the crash.
 [0]PETSC ERROR: --------------------- Error Message ------------------------------------
 [0]PETSC ERROR: Signal received!
 [0]PETSC ERROR: ------------------------------------------------------------------------
 [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 8, Fri Aug 21 14:02:12 CDT 2009
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR: ------------------------------------------------------------------------
 [0]PETSC ERROR: ./standalonesolver on a linux32-i named login.leo1 by c702174 Tue Oct 20 11:55:24 2009
 [0]PETSC ERROR: Libraries linked from /mnt/x4540/hpc-scratch/c702174/leo1/petsc/petsc-3.0.0-p8/linux32-intel-c-leo1/lib
 [0]PETSC ERROR: Configure run at Tue Oct 20 00:39:27 2009
 [0]PETSC ERROR: Configure options --with-scalar-type=real --with-debugging=0 --with-precision=double --with-shared=0 --with-mpi=1 --with-mpi-dir=/usr/site/hpc/x86_64/glibc-2.5/italy/openmpi/1.3.3/intel-11.0 --with-scalapack=1 --download-scalapack=ifneeded

GMRES performance

2009-10-22 Thread jaru...@ascomp.ch


Hello,

I followed the suggestion in previous PETSc threads by adding the options

-pc_type asm -sub_pc_type lu

Now the solver is really fast!

Thank you
Jarunan
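
For reference, a minimal sketch of a solve that picks up these options at
runtime via KSPSetFromOptions() (this is not Jarunan's actual code; the
matrix A and vectors b, x are assumed to be already assembled, and the
KSPSetOperators/KSPDestroy calls follow the PETSc 3.0-era signatures):

    KSP            ksp;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    /* picks up -ksp_type, -pc_type asm, -sub_pc_type lu, -log_summary, ... */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPDestroy(ksp);CHKERRQ(ierr);

One could then compare solvers from the command line, e.g.
-ksp_type bcgs -pc_type asm -sub_pc_type lu -log_summary versus
-ksp_type gmres with the same preconditioner options.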




Quoting Barry Smith bsmith at mcs.anl.gov:


 On Oct 20, 2009, at 9:52 AM, jarunan at ascomp.ch wrote:

 Hello,

 I would like to know some information about GMRES performance in
 PETSc, if you have any experience.

 I am running a parallel test case (9300 cells), comparing the CPU time
 used by the solvers in PETSc. While BiCGSTAB took 0.9 sec, GMRES took
 15 sec with the same preconditioner (additive Schwarz). I did not
 expect GMRES to be that much slower. Everything is at the defaults.

You need much more information than runtimes of the two cases to
 understand why one is faster than the other.

 Please share your experience with how the solvers perform in your test
 cases. Which options should I set to improve GMRES performance? Is
 there a best combination of preconditioner and solver?

Run both (with debugging turned off, hence optimized) using the
 option -log_summary and look at where each code is spending its
 time. You can also see how many iterations (MatMult calls) each solver
 requires. Feel free to send the -log_summary output to
 petsc-maint at mcs.anl.gov if you do not understand it.

Barry


 Thank you very much
 Jarunan




-- 
Jarunan Panyasantisuk
Development Engineer
ASCOMP GmbH, Technoparkstr. 1
CH-8005 Zurich, Switzerland
Phone : +41 44 445 4072
Fax   : +41 44 445 4075
E-mail: jarunan at ascomp.ch
www.ascomp.ch


KSP different results with the default and direct LU

2009-10-22 Thread Umut Tabak
Dear all,

I am trying to get myself more acquainted with PETSc.

I am trying to solve a linear equation system which is badly
conditioned. If I use the approach given in ex1.c of the KSP section
(by default, as I read in the manual, it uses restarted GMRES
preconditioned with Jacobi in ex1.c; I am not very familiar with these
numerical procedures in depth), I get some wrong results for the
solution.

I checked my results with Matlab. If I use the command line options as
suggested on page 68 of the manual to solve it directly, -ksp_type
preonly -pc_type lu, I get the right solution, and it is also faster
than the iterative solution which is used by default. As far as I can
follow from the mailing list, iterative methods and preconditioners are
problem dependent, so my question would be: should I find the right
approach/solver combination by trial and error, using the different
combinations that are outlined in the manual? (This probably also
depends on the problem types that I would like to solve; for the
moment, they are quite badly conditioned.)

Any pointers are appreciated.

Best regards,
Umut




KSP different results with the default and direct LU

2009-10-22 Thread Barry Smith

On Oct 22, 2009, at 6:08 AM, Umut Tabak wrote:

 Dear all,

 I am trying to get myself more acquainted with PETSc.

 I am trying to solve a linear equation system which is badly
 conditioned. If I use the approach given in ex1.c of the KSP section
 (by default, as I read in the manual, it uses restarted GMRES
 preconditioned with Jacobi in ex1.c; I am not very familiar with these
 numerical procedures in depth), I get some wrong results for the
 solution.

When testing, always run with -ksp_converged_reason or call
KSPGetConvergedReason() after KSPSolve() to determine whether PETSc thinks
it has actually solved the system. Also, since iterative solvers only
solve to some tolerance, the answer may not be wrong; it may just be
accurate up to that tolerance, and with ill-conditioned matrices a tight
tolerance on the residual norm may still be a loose tolerance on the
norm of the error.
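
A minimal sketch of that check (assuming ksp is the KSP object used for the
solve and b, x are the right-hand side and solution vectors; this is not
taken from ex1.c):

    KSPConvergedReason reason;
    PetscInt           its;
    PetscErrorCode     ierr;

    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
    ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);
    if (reason < 0) {
      /* negative reason values mean the iteration diverged or stalled */
      ierr = PetscPrintf(PETSC_COMM_WORLD,
                         "Solve failed: reason %d after %d iterations\n",
                         (int)reason, (int)its);CHKERRQ(ierr);
    }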


 I checked my results with Matlab. If I use the command line options as
 suggested on page 68 of the manual to solve it directly, -ksp_type
 preonly -pc_type lu, I get the right solution, and it is also faster
 than the iterative solution which is used by default. As far as I can
 follow from the mailing list, iterative methods and preconditioners are
 problem dependent, so my question would be: should I find the right
 approach/solver combination by trial and error, using the different
 combinations that are outlined in the manual? (This probably also
 depends on the problem types that I would like to solve; for the
 moment, they are quite badly conditioned.)

  General purpose iterative solvers, like those in PETSc, used
willy-nilly for very ill-conditioned linear systems, are basically
worthless. You need to either stick to direct solvers or understand
the types of iterative solvers that are used in the field of expertise
for your class of problems. For example, if your problems come from
semiconductor simulations then you need to understand the literature
on iterative solvers for semiconductors before proceeding. For
reasonably well-conditioned problems, where a variety of iterative
methods just work (that is, converge OK), you can try them all to see
what is fastest on your machine, but for nasty matrices this trial
and error is a waste of time, because almost none of the iterative
solvers will even converge, those that do converge will not always
converge (they are not reliable), and they will be slower than direct
solvers. We've had good success with the MUMPS parallel direct solver
and recommend trying the other ones as well. If your goal is to run
some simulation (and not do research on iterative solvers for nasty
matrices) I would just determine the best direct solver and use it
(and live with the memory usage and time it requires).

Barry
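
For completeness, a minimal sketch of forcing a direct LU solve in code,
equivalent to the -ksp_type preonly -pc_type lu options discussed above
(ksp, b, x are assumed to exist already; an external direct solver such as
MUMPS can then be selected at runtime, e.g. via the factorization solver
package option of this PETSc 3.0 series, whose exact name may differ
between versions):

    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);  /* no Krylov iterations, just apply the PC */
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);          /* full LU factorization as the "preconditioner" */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);       /* still allow runtime overrides */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);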


 Any pointers are appreciated.

 Best regards,
 Umut





KSP different results with the default and direct LU

2009-10-22 Thread Umut Tabak
Barry Smith wrote:

When testing, always run with -ksp_converged_reason or call
 KSPGetConvergedReason() after KSPSolve() to determine whether PETSc thinks
 it has actually solved the system. Also, since iterative solvers only
 solve to some tolerance, the answer may not be wrong; it may just be
 accurate up to that tolerance, and with ill-conditioned matrices a tight
 tolerance on the residual norm may still be a loose tolerance on the
 norm of the error.
Dear Barry,

Thanks for the advice and the reminder about tolerances; indeed, I missed that.
 General purpose iterative solvers, like those in PETSc, used willy-nilly
 for very ill-conditioned linear systems, are basically worthless. You
 need to either stick to direct solvers or understand the types of
 iterative solvers that are used in the field of expertise for your
 class of problems. For example, if your problems come from
 semiconductor simulations then you need to understand the literature
 on iterative solvers for semiconductors before proceeding. For
 reasonably well-conditioned problems, where a variety of iterative
 methods just work (that is, converge OK), you can try them all to see
 what is fastest on your machine, but for nasty matrices this trial
 and error is a waste of time, because almost none of the iterative
 solvers will even converge, those that do converge will not always
 converge (they are not reliable), and they will be slower than direct
 solvers. We've had good success with the MUMPS parallel direct solver
 and recommend trying the other ones as well. If your goal is to run
 some simulation (and not do research on iterative solvers for nasty
 matrices) I would just determine the best direct solver and use it
 (and live with the memory usage and time it requires).
Thanks for this explanation, it is really helpful. My matrices seem
quite nasty (and unsymmetric from the point of view of
eigenvalues/vectors), so my goal is not to conduct research in this
field but rather, as an engineer, to solve my linear systems and nasty
eigenvalue problems and get the results reliably.

So, in conclusion, I should go 'direct' as long as I am not quite sure
about the reliability of my iterative methods.

Thanks again.

Umut



Manually setting DA min and max indexes

2009-10-22 Thread Milad Fatenejad
Hello:

Is it possible to set the min/max m,n,p indexes when creating a 3D
distributed array? I'm writing a 3D code on a structured mesh, and the
mesh is partitioned among the processes elsewhere. Assuming that each
process already knows what chunk of the mesh it owns, is it possible
to give PETSc this information when creating a DA so that the
partitioning is consistent between PETSc and the rest of the code?

Thank You
Milad


Manually setting DA min and max indexes

2009-10-22 Thread Jed Brown
Milad Fatenejad wrote:
 Hello:
 
 Is it possible to set the min/max m,n,p indexes when creating a 3D
 distributed array? I'm writing a 3D code on a structured mesh, and the
 mesh is partitioned among the processes elsewhere. Assuming that each
 process already knows what chunk of the mesh it owns, is it possible
 to give PETSc this information when creating a DA so that the
 partitioning is consistent between PETSc and the rest of the code?

For a simple decomposition, you just set the local sizes (m,n,p instead
of M,N,P in DACreate3d).  For irregular cases, use the lx,ly,lz inputs
(or DASetVertexDivision).  If you want to number the subdomains
differently than PETSc, you should be able to make a communicator with
your preferred numbering (see MPI_Group_incl) and create the DA on this
communicator.

Jed
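
As an illustration of the irregular case, a minimal sketch using the PETSc
3.0-era DACreate3d signature (the global sizes, process grid, and the
lx/ly/lz divisions below are made-up example values; the entries of lx, ly,
lz must sum to M, N, P respectively):

    DA             da;
    PetscErrorCode ierr;
    /* 2 x 2 x 1 process grid; lx, ly, lz give the number of mesh points
       owned in each direction by each slab of processes */
    PetscInt lx[2] = {6, 4};    /* sums to M = 10 */
    PetscInt ly[2] = {5, 3};    /* sums to N = 8  */
    PetscInt lz[1] = {4};       /* sums to P = 4  */

    ierr = DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                      10, 8, 4,        /* global mesh size M, N, P    */
                      2, 2, 1,         /* process grid m, n, p        */
                      1, 1,            /* dof per node, stencil width */
                      lx, ly, lz, &da);CHKERRQ(ierr);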



One simple question about complex number

2009-10-22 Thread Yujie
Hi, PETSc Developers,

What is the difference between the PETSc complex number and stl::complex?
If I define a variable var using stl::complex, is it OK to do
var = 2.0 + PETSC_i*3.0?  Thanks a lot.

Regards,
Yujie


One simple question about complex number

2009-10-22 Thread Satish Balay
We use 'std::complex' with --with-clanguage=cxx [and with C we use
C99 complex support]. I'm not sure what stl::complex is.

Satish
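
For illustration, a minimal sketch of how this looks in a PETSc build
configured with complex scalars (--with-scalar-type=complex); the variable
names are made up:

    #include "petsc.h"

    /* With complex scalars, PetscScalar is the complex type
       (std::complex<double> in C++ builds, C99 double complex in C builds),
       so PETSC_i can be used directly: */
    PetscScalar var = 2.0 + PETSC_i*3.0;
    PetscReal   re  = PetscRealPart(var);       /* 2.0 */
    PetscReal   im  = PetscImaginaryPart(var);  /* 3.0 */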

On Thu, 22 Oct 2009, Yujie wrote:

 Hi, PETSc Developers,
 
 What is the difference between the PETSc complex number and stl::complex?
 If I define a variable var using stl::complex, is it OK to do
 var = 2.0 + PETSC_i*3.0?  Thanks a lot.
 
 Regards,
 Yujie
 



One simple question about complex number

2009-10-22 Thread Yujie
Thank you for your reply, Satish. STL is the Standard Template Library. I
think stl::complex should be the same as std::complex.

Regards,
Yujie

On Thu, Oct 22, 2009 at 3:05 PM, Satish Balay balay at mcs.anl.gov wrote:

 We use 'std::complex' with --with-clanguage=cxx [and with C we use
 C99 complex support]. I'm not sure what stl::complex is.

 Satish

 On Thu, 22 Oct 2009, Yujie wrote:

  Hi, PETSc Developers,
 
  What is the difference between the PETSc complex number and stl::complex?
  If I define a variable var using stl::complex, is it OK to do
  var = 2.0 + PETSC_i*3.0?  Thanks a lot.
 
  Regards,
  Yujie
 




One simple question about complex number

2009-10-22 Thread Jed Brown
Yujie wrote:
 Thank you for your reply, Satish. STL is the Standard Template Library. I
 think stl::complex should be the same as std::complex.

The STL complex type is std::complex (the STL uses the namespace std::);
are you sure that you have an stl::complex?

Jed



MatZeroRows ignore negative indices?

2009-10-22 Thread Pedro Juan Torres Lopez
Hello,

In MatZeroRows(), if I put a negative value in the third argument
(rows[ ]), will it be ignored? I'm sorry if I'm being repetitive with
this issue.

Regards

Pedro


MatZeroRows ignore negative indices?

2009-10-22 Thread Barry Smith

   No, there is no support for this. You should pass in an array that  
only contains the rows you want zeroed. You cannot pass in negative  
entries for row numbers.

Barry
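
A minimal sketch of compacting the index array before the call (assuming
the PETSc 3.0-era signature MatZeroRows(Mat, PetscInt, const PetscInt[],
PetscScalar); the variables A, rows, and nrows are made-up names for the
caller's matrix and candidate row list):

    PetscInt       i, nvalid = 0;
    PetscErrorCode ierr;

    /* keep only the nonnegative row indices; negative entries are not ignored */
    for (i = 0; i < nrows; i++) {
      if (rows[i] >= 0) rows[nvalid++] = rows[i];
    }
    ierr = MatZeroRows(A, nvalid, rows, 1.0);CHKERRQ(ierr);  /* 1.0 placed on the zeroed diagonals */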

On Oct 22, 2009, at 5:38 PM, Pedro Juan Torres Lopez wrote:

 Hello,

 In MatZeroRows(), if I put a negative value in the third argument
 (rows[ ]), will it be ignored? I'm sorry if I'm being repetitive with
 this issue.

 Regards

 Pedro