[petsc-users] -ksp_diagonal_scale question

2012-03-19 Thread Max Rudolph
I was looking at the documentation for KSPSetDiagonalScale and it
contains this comment:

This routine is only used if the matrix and preconditioner matrix are
the same thing.

However, I have found that even if the matrix and preconditioner are
different, using the -ksp_diagonal_scale command line argument changes
the convergence behavior of the solvers. Is the man page correct?
Also, can you point me towards documentation of the type of scaling
that is applied?
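
For reference, the command-line option corresponds to the KSP call below (a
minimal sketch against the C API; ksp is assumed to be an already-created
solver for the system in question):

#include <petscksp.h>

/* Sketch: programmatic equivalent of -ksp_diagonal_scale (and the related
   -ksp_diagonal_scale_fix). The KSP is assumed to exist already. */
PetscErrorCode enable_diagonal_scaling(KSP ksp)
{
  PetscErrorCode ierr;
  ierr = KSPSetDiagonalScale(ksp, PETSC_TRUE);CHKERRQ(ierr);    /* scale the system by its diagonal */
  ierr = KSPSetDiagonalScaleFix(ksp, PETSC_TRUE);CHKERRQ(ierr); /* undo the scaling after the solve */
  return 0;
}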

Thanks for your help,
Max Rudolph


[petsc-users] Binary VTK viewer

2012-03-08 Thread Max Rudolph
I was looking into the various PetscViewerFormats available, and it appears
that only the ASCII VTK format is supported. I found an old thread on this
mailing list in which Blaise Bourdin and Matt Knepley discussed adding
support for output of binary VTK files. Was this ever done?

Thanks for your help,
Max


[petsc-users] Matrix format mpiaij does not have a built-in PETSc XXX!

2012-02-26 Thread Max Rudolph
MPIAIJ and SEQAIJ matrices are subtypes of the AIJ matrix type. Looking at
that table, you should be able to use any of the PCs that support AIJ and
have an X under 'parallel'.
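
For example, selecting one of the parallel-capable PCs programmatically looks
roughly like this (a sketch; PCASM and PCBJACOBI are just two illustrative
choices from the linear solver table, not a recommendation for any particular
problem):

#include <petscksp.h>

/* Sketch: pick a preconditioner that supports parallel AIJ (MPIAIJ) matrices.
   Equivalent to -pc_type asm (or -pc_type bjacobi) on the command line. */
PetscErrorCode set_parallel_pc(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCASM);CHKERRQ(ierr); /* or PCBJACOBI, etc. */
  return 0;
}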

Max


On Sun, Feb 26, 2012 at 8:16 AM, Aron Roland aaronroland at gmx.de wrote:

 Dear All,

 I hope somebody can help us on this or give at least some clearance.

 We have just included PETSc as a solver for our sparse matrix, which arises
 from an unstructured-mesh advection scheme.

 The problem is that we are using the mpiaij matrix type, since our matrix
 is naturally sparse. However, it seems that PETSc has no PC for this except
 PCSOR, which turned out to be not very effective for our problem.

 All of the others give the error message in the mail subject, where XXX stands
 for the different PCs we tried.

 The manual is a bit diffuse on this, e.g.

 http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html

 There it is claimed that certain PCs work on aij matrices ... but these must
 be defined either as sequential or parallel (mpiaij) matrices. Moreover, the
 above-mentioned table has two columns, parallel/serial; what is the point of
 parallel capability if it is not applicable to matrices stored in the
 parallel mpiaij format?

 I guess we are just not understanding the concept, or have some other
 difficulty in understanding all of this.

 Any comments or help are welcome.

 Aron



[petsc-users] Starting point for Stokes fieldsplit

2012-02-26 Thread Max Rudolph
I did eventually make my test case work using the split preconditioning,
first using additive Schwarz as the preconditioner for the upper-left
(0,0) block and then using ML with ASM and GMRES within each multigrid
level. I am posting my runtime options in case they might be helpful for
someone else out there who wants to try this. The key to getting the solver
to converge for me was starting with a good initial guess (the solution
diverged with a zero initial guess), especially for the pressure field,
using gcr as the outer KSP, and using small ksp_rtol values for both of the
inner KSPs.

Max

  -stokes_pc_fieldsplit_0_fields 0,1 -stokes_pc_fieldsplit_1_fields 2 \
-stokes_pc_type fieldsplit -stokes_pc_fieldsplit_type multiplicative \
-stokes_ksp_initial_guess_nonzero \
-stokes_fieldsplit_0_pc_type asm \
-stokes_fieldsplit_0_ksp_type gmres \
-stokes_fieldsplit_0_ksp_initial_guess_nonzero \
-stokes_fieldsplit_0_ksp_max_it 10 \
-stokes_fieldsplit_0_ksp_rtol 1.0e-9 \
-stokes_fieldsplit_1_pc_type jacobi \
-stokes_fieldsplit_1_ksp_type gmres \
-stokes_fieldsplit_1_ksp_max_it 10 \
-stokes_fieldsplit_1_ksp_rtol 1.0e-9 \
-stokes_ksp_type gcr \
-stokes_ksp_monitor_blocks \
-stokes_ksp_monitor \
-stokes_ksp_view \
-stokes_ksp_atol 1e-2 \
-stokes_ksp_rtol 0.0 \
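
As an aside, the nonzero initial guess mentioned above can also be set up in
code rather than on the command line (a minimal sketch; ksp is the prefixed
Stokes solver, and x_prev / b / x are illustrative names for the previous
solution, the right-hand side, and the current solution vector):

#include <petscksp.h>

/* Sketch: seed the solve with the previous timestep's solution.
   Equivalent to -stokes_ksp_initial_guess_nonzero plus copying x_prev into x. */
PetscErrorCode solve_with_initial_guess(KSP ksp, Vec b, Vec x_prev, Vec x)
{
  PetscErrorCode ierr;
  ierr = VecCopy(x_prev, x);CHKERRQ(ierr);                        /* initial guess */
  ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  return 0;
}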

On Wed, Feb 22, 2012 at 1:29 PM, Max Rudolph maxwellr at gmail.com wrote:



 On Tue, Feb 21, 2012 at 3:36 AM, Dave May dave.mayhem23 at gmail.com wrote:

 Max,

 
  The test case that I am working with is isoviscous convection, benchmark
  case 1a from Blankenbach 1989.
 

 Okay, I know this problem.
 An isoviscous problem, solved on a uniform grid with dx=dy=dz, discretised
 via FV, should be super easy to precondition.

 
 
  I think that this is the problem. The (2,2) slot in the LHS matrix is all
  zero (pressure does not appear in the continuity equation), so I think that
  the preconditioner is meaningless. I am still confused as to why this choice
  of preconditioner was suggested in the tutorial, and what is a better choice
  of preconditioner for this block? Should I be using one of the Schur
  complement methods instead of the additive or multiplicative field split?
 

 No, you need to define an appropriate Stokes preconditioner.
 You should assemble the matrix
  B = ( K, B ; B^T, -1/eta* I )
 as the preconditioner for Stokes, unless you specify to use the real diagonal
 (see below). Here eta* is a measure of the local viscosity within each
 pressure control volume.

 Pass this into the third argument in KSPSetOperators() (i.e. the Pmat
 variable)

 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetOperators.html

 Not sure how you represent A and B, but if you really want to run just
 additive with fieldsplit, you don't need the off diagonal blocks, so
  B = ( K,0 ; 0, -1/eta* I )
 would yield the same result. Depending on your matrix representation,
 this may save you some memory.

 PCFieldsplit will use the B(1,1) and B(2,2) to build the stokes
 preconditioner unless you ask for it to use the real diagonal - but
 for the stokes operator A, this makes no sense.

 This is the right thing to do (as Matt states).
 Try it out, and let us know how it goes.


 Cheers,
  Dave


 Dave and Matt,
 Thanks for your help. I had some time to work on this a little more. I now
 have a stokes operator A that looks like this:
 A=(K B; B^T 0) and a matrix from which the preconditioner is generated
 P=(K B; B^T -1/eta*I)

 I verified that I can solve this system using the default ksp and pc
 settings in 77 iterations for the first timestep (initial guess zero) and
 in 31 iterations for the second timestep (nonzero initial guess).

 I adopted your suggestion to use the multiplicative field split as a
 starting point. My reading of the PETSc manual suggests to me that the
 preconditioner formed should then look like:

  B = ( ksp(K,K), 0 ; -B^T*ksp(K,K)*ksp(0,-1/eta*I), ksp(0,-1/eta*I) )

 My interpretation of the output suggests that the solvers within each
 fieldsplit are converging nicely, but the global residual is not decreasing
 after the first few iterations. Given the disparity in residual sizes, I
 think that there might be a problem with the scaling of the pressure
 variable (I scaled the continuity equation by eta/dx where dx is my grid
 spacing). I also scaled the (1,1) block in the preconditioner by this scale
 factor. Thanks again for all of your help.

 Max


 Options used:
 -stokes_pc_fieldsplit_0_fields 0,1 -stokes_pc_fieldsplit_1_fields 2 \
 -stokes_pc_type fieldsplit -stokes_pc_fieldsplit_type multiplicative \
 -stokes_fieldsplit_0_pc_type ml \
 -stokes_fieldsplit_0_ksp_type gmres \
 -stokes_fieldsplit_0_ksp_monitor_true_residual \
 -stokes_fieldsplit_0_ksp_norm_type UNPRECONDITIONED \
 -stokes_fieldsplit_0_ksp_max_it 3 \
 -stokes_fieldsplit_0_ksp_type gmres \
 -stokes_fieldsplit_0_ksp_rtol 1.0e-4 \
 -stokes_fieldsplit_0_mg_levels_ksp_type

[petsc-users] Accessing Vector's ghost values

2012-02-23 Thread Max Rudolph
Did you try running with -on_error_attach_debugger and using your
debugger to figure out where your code is segfaulting?

Max

On Thu, Feb 23, 2012 at 10:07 AM, Bojan Niceno bojan.niceno at psi.ch wrote:

  Thanks Ju.  I studied this case carefully, and it seems clear to me.
 When I apply the same techniques in my code, I get the error messages I sent
 in my reply to Matthew.


 Cheers,


 Bojan



 On 2/23/2012 6:46 PM, Ju LIU wrote:



 2012/2/23 Bojan Niceno bojan.niceno at psi.ch

 Hi all,

 I've never used a mailing list before, so I hope this message will reach
 PETSc users and experts and someone might be willing to help me.  I am also
 novice in PETSc.

 I have developed an unstructured finite volume solver on top of the PETSc
 libraries.  In sequential runs it works like a charm.  For the parallel
 version, I do the domain decomposition externally with Metis and work out
 local and global numberings, as well as communication patterns between
 processors.  (The latter don't seem to be needed for PETSc, though.)  When I
 run my program in parallel, it also works, but the values at the vectors'
 ghost points are missing.

 I create vectors with the command: VecCreate(PETSC_COMM_WORLD, &x);

 Is it possible to get the ghost values if a vector is created like this?

 I have tried to use VecCreateGhost, but for some reason which is beyond
 my comprehension, PETSc goes berserk when it reaches the command:
 VecCreateGhost(PETSC_COMM_WORLD, n, PETSC_DECIDE, nghost, ifrom, &x)

 Can anyone help me?  Either how to reach ghost values for vector created
 by VecCreate, or how to use VecCreateGhost properly?
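
 For reference, a minimal sketch of the usual VecCreateGhost workflow (n,
 nghost and ifrom stand for the locally owned size, the number of ghost points
 and their global indices, i.e. the quantities computed from the Metis
 decomposition; the calls shown use the current pointer-based destroy API):

#include <petscvec.h>

/* Sketch: create a ghosted vector, fill the owned part, then pull the
   ghost values over from the owning processes. */
PetscErrorCode ghosted_vector_demo(PetscInt n, PetscInt nghost, const PetscInt ifrom[])
{
  Vec            x, xlocal;
  PetscErrorCode ierr;

  ierr = VecCreateGhost(PETSC_COMM_WORLD, n, PETSC_DECIDE, nghost, ifrom, &x);CHKERRQ(ierr);

  /* ... set the locally owned entries (VecSetValues / VecGetArray) ... */
  ierr = VecAssemblyBegin(x);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(x);CHKERRQ(ierr);

  /* bring the ghost entries up to date */
  ierr = VecGhostUpdateBegin(x, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(x, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  /* the local form holds the owned entries followed by the ghost values */
  ierr = VecGhostGetLocalForm(x, &xlocal);CHKERRQ(ierr);
  /* ... read owned and ghost values through xlocal ... */
  ierr = VecGhostRestoreLocalForm(x, &xlocal);CHKERRQ(ierr);

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  return 0;
}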


 http://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/examples/tutorials/ex9.c.html
 could be helpful.


   Bojan




 --



[petsc-users] Starting point for Stokes fieldsplit

2012-02-22 Thread Max Rudolph
On Tue, Feb 21, 2012 at 3:36 AM, Dave May dave.mayhem23 at gmail.com wrote:

 Max,

 
  The test case that I am working with is isoviscous convection, benchmark
  case 1a from Blankenbach 1989.
 

 Okay, I know this problem.
 An isoviscous problem, solved on a uniform grid with dx=dy=dz, discretised
 via FV, should be super easy to precondition.

 
 
  I think that this is the problem. The (2,2) slot in the LHS matrix is all
  zero (pressure does not appear in the continuity equation), so I think that
  the preconditioner is meaningless. I am still confused as to why this choice
  of preconditioner was suggested in the tutorial, and what is a better choice
  of preconditioner for this block? Should I be using one of the Schur
  complement methods instead of the additive or multiplicative field split?
 

 No, you need to define an appropriate Stokes preconditioner.
 You should assemble the matrix
  B = ( K, B ; B^T, -1/eta* I )
 as the preconditioner for Stokes, unless you specify to use the real diagonal
 (see below). Here eta* is a measure of the local viscosity within each
 pressure control volume.

 Pass this into the third argument in KSPSetOperators() (i.e. the Pmat
 variable)

 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetOperators.html

 Not sure how you represent A and B, but if you really want to run just
 additive with fieldsplit, you don't need the off diagonal blocks, so
  B = ( K,0 ; 0, -1/eta* I )
 would yield the same result. Depending on your matrix representation,
 this may save you some memory.

 PCFieldsplit will use the B(1,1) and B(2,2) to build the stokes
 preconditioner unless you ask for it to use the real diagonal - but
 for the stokes operator A, this makes no sense.

 This is the right thing to do (as Matt states).
 Try it out, and let us know how it goes.


 Cheers,
  Dave


Dave and Matt,
Thanks for your help. I had some time to work on this a little more. I now
have a stokes operator A that looks like this:
A=(K B; B^T 0) and a matrix from which the preconditioner is generated P=(K
B; B^T -1/eta*I)
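
In code, that pairing is what goes into KSPSetOperators (a sketch using the
petsc-3.2-era signature, which still takes a MatStructure flag; A and P are the
two matrices just described, and the flag should be adjusted to how they change
between solves):

#include <petscksp.h>

/* Sketch: A = (K B; B^T 0) is the operator, P = (K B; B^T -1/eta* I) is the
   matrix the preconditioner is built from. */
PetscErrorCode attach_stokes_operators(KSP ksp, Mat A, Mat P)
{
  PetscErrorCode ierr;
  ierr = KSPSetOperators(ksp, A, P, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  return 0;
}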

I verified that I can solve this system using the default ksp and pc
settings in 77 iterations for the first timestep (initial guess zero) and
in 31 iterations for the second timestep (nonzero initial guess).

I adopted your suggestion to use the multiplicative field split as a
starting point. My reading of the PETSc manual suggests to me that the
preconditioner formed should then look like:

B = ( ksp(K,K), 0 ; -B^T*ksp(K,K)*ksp(0,-1/eta*I), ksp(0,-1/eta*I) )

My interpretation of the output suggests that the solvers within each
fieldsplit are converging nicely, but the global residual is not decreasing
after the first few iterations. Given the disparity in residual sizes, I
think that there might be a problem with the scaling of the pressure
variable (I scaled the continuity equation by eta/dx where dx is my grid
spacing). I also scaled the (1,1) block in the preconditioner by this scale
factor. Thanks again for all of your help.

Max


Options used:
-stokes_pc_fieldsplit_0_fields 0,1 -stokes_pc_fieldsplit_1_fields 2 \
-stokes_pc_type fieldsplit -stokes_pc_fieldsplit_type multiplicative \
-stokes_fieldsplit_0_pc_type ml \
-stokes_fieldsplit_0_ksp_type gmres \
-stokes_fieldsplit_0_ksp_monitor_true_residual \
-stokes_fieldsplit_0_ksp_norm_type UNPRECONDITIONED \
-stokes_fieldsplit_0_ksp_max_it 3 \
-stokes_fieldsplit_0_ksp_type gmres \
-stokes_fieldsplit_0_ksp_rtol 1.0e-4 \
-stokes_fieldsplit_0_mg_levels_ksp_type gmres \
-stokes_fieldsplit_0_mg_levels_pc_type bjacobi \
-stokes_fieldsplit_0_mg_levels_ksp_max_it 4 \
-stokes_fieldsplit_1_pc_type jacobi \
-stokes_fieldsplit_1_ksp_type preonly \
-stokes_fieldsplit_1_ksp_max_it 3 \
-stokes_fieldsplit_1_ksp_monitor_true_residual \
-stokes_ksp_type gcr \
-stokes_ksp_monitor_blocks \
-stokes_ksp_monitor_draw \
-stokes_ksp_view \
-stokes_ksp_atol 1e-6 \
-stokes_ksp_rtol 1e-6 \
-stokes_ksp_max_it 100 \
-stokes_ksp_norm_type UNPRECONDITIONED \
-stokes_ksp_monitor_true_residual \

Output:

  0 KSP Component U,V,P residual norm [ 0.e+00,
1.165111661413e+06, 0.e+00 ]
  Residual norms for stokes_ solve.
  0 KSP unpreconditioned resid norm 1.165111661413e+06 true resid norm
1.165111661413e+06 ||r(i)||/||b|| 1.e+00
Residual norms for stokes_fieldsplit_0_ solve.
0 KSP unpreconditioned resid norm 1.165111661413e+06 true resid norm
1.165111661413e+06 ||r(i)||/||b|| 1.e+00
1 KSP unpreconditioned resid norm 3.173622513625e+05 true resid norm
3.173622513625e+05 ||r(i)||/||b|| 2.723878421898e-01
2 KSP unpreconditioned resid norm 5.634119635158e+04 true resid norm
1.725996376799e+05 ||r(i)||/||b|| 1.481399967026e-01
3 KSP unpreconditioned resid norm 1.218418968344e+03 true resid norm
1.559727441168e+05 ||r(i)||/||b|| 1.338693528546e-01
  1 KSP Component U,V,P residual norm [ 

[petsc-users] Specifying different ksp_types for multiple linear systems

2012-02-19 Thread Max Rudolph
In my thermomechanical convection code, I set up and solve two linear systems:
the first for the Stokes system and the second for an energy equation.
Currently these are separate matrices, and during each timestep I first create
a KSP object for each system, then solve, then destroy the KSP contexts. I
would like to try out -pc_type fieldsplit for the Stokes system, but is it
possible to use -pc_type fieldsplit for the Stokes system and a different
pc_type and ksp_type for the energy equation? Is there perhaps a way to name
the KSP associated with the Stokes system and then refer to this label from the
command line (e.g. -pc_type_0 fieldsplit -pc_type_1 lu)? Thanks very much for
your help.
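
For what it is worth, the usual mechanism for this is a separate options prefix
per KSP (a minimal sketch; the prefix strings "stokes_" and "energy_" are
illustrative):

#include <petscksp.h>

/* Sketch: two independently configurable solvers distinguished by prefix. */
PetscErrorCode create_named_solvers(KSP *ksp_stokes, KSP *ksp_energy)
{
  PetscErrorCode ierr;
  ierr = KSPCreate(PETSC_COMM_WORLD, ksp_stokes);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(*ksp_stokes, "stokes_");CHKERRQ(ierr);
  ierr = KSPSetFromOptions(*ksp_stokes);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, ksp_energy);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(*ksp_energy, "energy_");CHKERRQ(ierr);
  ierr = KSPSetFromOptions(*ksp_energy);CHKERRQ(ierr);
  return 0;
}

With that in place, options such as -stokes_pc_type fieldsplit -energy_pc_type
lu -energy_ksp_type preonly affect only the corresponding solver.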

Max Rudolph


[petsc-users] Starting point for Stokes fieldsplit

2012-02-19 Thread Max Rudolph
I am solving a 2D Stokes flow problem using a finite volume discretization, 
velocity (dof 0,1)-pressure (dof 2) formulation. Until now I have mostly used 
MUMPS. I am trying to use the PCFieldSplit interface now, with the goal of 
trying out the multigrid functionality provided through ML. I looked at these 
(http://www.mcs.anl.gov/petsc/documentation/tutorials/Speedup10.pdf) talk 
slides for some guidance on command line options and tried them. When assembling the 
linear system, I obtain the global degree-of-freedom indices using 
DMDAGetGlobalIndices. The solver does not appear to be converging and I was 
wondering if someone could explain to me why this might be happening. Thanks 
for your help.

Max

petscmpiexec -n $1 $2 $3 \
-stokes_pc_fieldsplit_0_fields 0,1 -stokes_pc_fieldsplit_1_fields 2 \
-stokes_pc_type fieldsplit -stokes_pc_fieldsplit_type additive \
-stokes_fieldsplit_0_pc_type ml -stokes_fieldsplit_0_ksp_type preonly \
-stokes_fieldsplit_1_pc_type jacobi -stokes_fieldsplit_1_ksp_type preonly \
-stokes_ksp_view \
-stokes_ksp_monitor_true_residual \

  Residual norms for stokes_ solve.
  0 KSP preconditioned resid norm 2.156200561011e-07 true resid norm 
1.165111661413e+06 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 2.156200561011e-07 true resid norm 
1.165111661413e+06 ||r(i)||/||b|| 1.e+00
  2 KSP preconditioned resid norm 5.176186579848e-08 true resid norm 
5.747484453195e+06 ||r(i)||/||b|| 4.932990239085e+00
  3 KSP preconditioned resid norm 5.054067588022e-08 true resid norm 
5.763708451168e+06 ||r(i)||/||b|| 4.946915082953e+00
  4 KSP preconditioned resid norm 3.556841873413e-08 true resid norm 
4.649778784249e+06 ||r(i)||/||b|| 3.990843915003e+00
  5 KSP preconditioned resid norm 3.177972840516e-08 true resid norm 
4.677248322326e+06 ||r(i)||/||b|| 4.014420657890e+00
  6 KSP preconditioned resid norm 3.100188857346e-08 true resid norm 
4.857618195959e+06 ||r(i)||/||b|| 4.169229745814e+00
  7 KSP preconditioned resid norm 3.045495907075e-08 true resid norm 
5.030091724356e+06 ||r(i)||/||b|| 4.317261504580e+00
  8 KSP preconditioned resid norm 2.993896937859e-08 true resid norm 
5.213745290794e+06 ||r(i)||/||b|| 4.474888942808e+00
  9 KSP preconditioned resid norm 2.944838631679e-08 true resid norm 
5.403745734510e+06 ||r(i)||/||b|| 4.637963822245e+00
 10 KSP preconditioned resid norm 2.898115557667e-08 true resid norm 
5.596205405394e+06 ||r(i)||/||b|| 4.803149423985e+00
 11 KSP preconditioned resid norm 2.853548106431e-08 true resid norm 
5.788329048801e+06 ||r(i)||/||b|| 4.968046617765e+00
 12 KSP preconditioned resid norm 2.810975464206e-08 true resid norm 
5.978285004145e+06 ||r(i)||/||b|| 5.131083313418e+00
 13 KSP preconditioned resid norm 2.770253127208e-08 true resid norm 
6.164551408716e+06 ||r(i)||/||b|| 5.290953316217e+00
 14 KSP preconditioned resid norm 2.731250834421e-08 true resid norm 
6.346469882864e+06 ||r(i)||/||b|| 5.447091547575e+00
 15 KSP preconditioned resid norm 2.693850812065e-08 true resid norm 
6.523343466838e+06 ||r(i)||/||b|| 5.598899816114e+00



[petsc-users] Valgrind and uninitialized values

2012-02-02 Thread Max Rudolph
I am trying to track down a memory corruption bug using valgrind, but I am 
having to wade through lots and lots of error messages similar to this one, 
which I believe are either spurious or related to some problem in Petsc and not 
in my code (please correct me if I'm wrong!)

==18162== Conditional jump or move depends on uninitialised value(s)
==18162==at 0x7258D73: MPIDI_CH3U_Handle_recv_req 
(ch3u_handle_recv_req.c:99)
==18162==by 0x724F2E1: MPIDI_CH3I_SMP_read_progress (ch3_smp_progress.c:656)
==18162==by 0x72462D4: MPIDI_CH3I_Progress (ch3_progress.c:185)
==18162==by 0x72AC52B: MPIC_Wait (helper_fns.c:518)
==18162==by 0x72AC314: MPIC_Sendrecv (helper_fns.c:163)
==18162==by 0x7218E18: MPIR_Allgather_OSU (allgather_osu.c:524)
==18162==by 0x7217099: PMPI_Allgather (allgather.c:840)
==18162==by 0x649FFD: PetscLayoutSetUp (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x631C32: VecCreate_MPI_Private (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x63253E: VecCreate_MPI (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x5E3279: VecSetType (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x6329CC: VecCreate_Standard (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x5E3279: VecSetType (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x93BFE7: DMCreateGlobalVector_DA (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x8C7F68: DMCreateGlobalVector (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x93B6DA: VecDuplicate_MPI_DA (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x5CF419: VecDuplicate (in 
/work/01038/max/gk_0.1mm/gk_conv_50_vg/iso-convect-p)
==18162==by 0x438F04: initializeNodalFields (nodalFields.c:35)
==18162==by 0x434730: main (main_isotropic_convection.c:123)

The relevant lines of code are:
33:  ierr = DMCreateGlobalVector(grid->da, &nodalFields->lastT); CHKERRQ(ierr);
34:  ierr = PetscObjectSetName((PetscObject) nodalFields->lastT, "lastT");CHKERRQ(ierr);
35:  ierr = VecDuplicate( nodalFields->lastT, &nodalFields->thisT);CHKERRQ(ierr);

I am using icc with petsc-3.2 on the Intel Westmere cluster at TACC. Petsc was 
compiled with debugging enabled. Thanks for your help.

Max


[petsc-users] Valgrind and uninitialized values

2012-02-02 Thread Max Rudolph
No, I am using the version of Petsc supplied on the lonestar machine, which was 
compiled with their MPI build. I will look into making a suppressions file. 
Thanks for your help.

Max

 
  I am trying to track down a memory corruption bug using valgrind, but I am
  having to wade through lots and lots of error messages similar to this one,
  which I believe are either spurious or related to some problem in Petsc and
  not in my code (please correct me if I'm wrong!)

 Are you using --download-mpich? It should be valgrind clean. These are a
 pain, but you can make a suppressions file as well.

Matt
 



[petsc-users] -log_summary problem

2012-01-03 Thread Max Rudolph
 On Tue, Dec 20, 2011 at 19:35, Max Rudolph rudolph at berkeley.edu wrote:
 When I run my code with the -log_summary option, it hangs indefinitely after 
 displaying:
 
 
 Average time to get PetscTime(): 9.53674e-08
 Average time for MPI_Barrier(): 0.00164938
 
 Is this a common problem, and if so, how do I fix it? This does not happen 
 when I run the example programs - only my own code, so I must be at fault but 
 without an error message I am not sure where to start. I am using 
 petsc-3.1-p7. Thanks for your help.
 
 Are all processes calling PetscFinalize()?
 
 How did you set -log_summary? It should be provided at the time you invoke 
 PetscInitialize() on all processes.
 
 Try running in a debugger, then break when it hangs and print the stack trace.

I found the problem, or at least a workaround. I have a PetscRandom and freed 
it in the second to last line of my main subroutine:
...
  ierr = PetscRandomCreate(PETSC_COMM_WORLD, &r);CHKERRQ(ierr);
  ierr = PetscRandomSetType(r,PETSCRAND48);CHKERRQ(ierr);
...
  ierr = PetscRandomDestroy( r );CHKERRQ(ierr);
  ierr = PetscFinalize();
}

If I comment out the line with PetscRandomDestroy, -log_summary seems to work.

Max


[petsc-users] Mumps Error

2011-08-31 Thread Max Rudolph
I'm not sure if you figured out a solution yet, but I think that you might want 
to run with 

-mat_mumps_icntl_14 100

since INFO(1)=-9 indicates that the MUMPS working array was too small, and
ICNTL(14) controls the percentage increase in the estimated working space.
Max


 Dear Users,
 
 I'm having trouble figuring out why the MUMPS solver is failing on a 
 specific range of one of my parameters. When using the PETSc direct solver 
 on a single processor I have no issues. The error is:
 
 [0]PETSC ERROR: - Error Message 
 
 [0]PETSC ERROR: Error in external library!
 [0]PETSC ERROR: Error reported by MUMPS in numerical factorization phase: 
 INFO(1)=-9, INFO(2)=13
 !
 [0]PETSC ERROR: 
 
 [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 
 CDT 2011
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR: 
 
 [0]PETSC ERROR: ./cntor on a complex-c named hpc-1-14.local by abyrd Wed Aug 
 31 10:53:42 2011
 [0]PETSC ERROR: Libraries linked from 
 /panfs/storage.local/scs/home/abyrd/petsc-3.1-p8/complex-cpp-mumps/lib
 [0]PETSC ERROR: Configure run at Mon Jul 11 15:28:42 2011
 [0]PETSC ERROR: Configure options PETSC_ARCH=complex-cpp-mumps 
 --with-cc=mpicc --with-fc=mpif90 --with-blas-lapack-dir=/usr/lib64 
 --with-shared --with-clanguage=c++ --with-scalar-type=complex 
 --download-mumps=1 --download-blacs=1 --download-scalapack=1 
 --download-parmetis=1 --with-cxx=mpicxx
 [0]PETSC ERROR: 
 
 [0]PETSC ERROR: MatFactorNumeric_MUMPS() line 517 in 
 src/mat/impls/aij/mpi/mumps/mumps.c
 [0]PETSC ERROR: MatLUFactorNumeric() line 2587 in src/mat/interface/matrix.c
 [0]PETSC ERROR: PCSetUp_LU() line 158 in src/ksp/pc/impls/factor/lu/lu.c
 [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c
 [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c
 [0]PETSC ERROR: InvertHamiltonian() line 102 in WDinvert.h
 
 
 I suspect it has something to do with the preconditioning or setup of the 
 matrix I am trying to invert. The matrix becomes singular at energy = 0 eV, 
 and is nearly singular for values close to that, but the code is failing on 
 energies relatively far from that point. The affected energy interval is 
 [-0.03095, 0.03095].
 
 Is anyone able to point me in the right direction to figure out what I'm not 
 setting up properly?
 
 Respectfully,
 Adam Byrd
 PETScCntor.zip
 
 
 
 



[petsc-users] VecPointwiseDivide for Vecs with multiple DOFs?

2011-07-14 Thread Max Rudolph
Is there a straightforward way to perform a pointwise divide of every DOF in a 
global Vec with multiple degrees of freedom by another vector that has only one 
degree of freedom? Thanks for your help.
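
One way to do this, for reference, is to loop over the components with the
stride operations (a sketch; it assumes the multi-DOF Vec v has its block size
set to dof, d is the single-DOF Vec, and work is a scratch Vec with the same
layout as d):

#include <petscvec.h>

/* Sketch: divide each DOF component of v (block size dof) entrywise by d. */
PetscErrorCode stride_pointwise_divide(Vec v, Vec d, Vec work, PetscInt dof)
{
  PetscErrorCode ierr;
  PetscInt       i;
  for (i = 0; i < dof; i++) {
    ierr = VecStrideGather(v, i, work, INSERT_VALUES);CHKERRQ(ierr);  /* pull out component i */
    ierr = VecPointwiseDivide(work, work, d);CHKERRQ(ierr);           /* work = work ./ d      */
    ierr = VecStrideScatter(work, i, v, INSERT_VALUES);CHKERRQ(ierr); /* put it back into v    */
  }
  return 0;
}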

Max