Re: [petsc-users] (no subject)

2018-10-31 Thread Smith, Barry F. via petsc-users
https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind


> On Oct 31, 2018, at 9:18 PM, Wenjin Xing  wrote:
> 
> Hi Barry
>  
> As you said, I have set the matrix type to AIJ. (MATAIJ = "aij" - a matrix 
> type to be used for sparse matrices. This matrix type is identical to 
> MATSEQAIJ when constructed with a single-process communicator.) However, a 
> new error pops up.  By the way, I am using a single processor, not running 
> in parallel.
>  
>  
>  
> 
>  
> Kind regards
> Wenjin
>  
>  
>  
>  
> -Original Message-
> From: Smith, Barry F. [mailto:bsm...@mcs.anl.gov] 
> Sent: Thursday, 1 November 2018 9:28 AM
> To: Wenjin Xing 
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] (no subject)
>  
>  
>    This option only works with AIJ matrices; are you using BAIJ or SBAIJ 
> matrices (or a shell matrix)?
>  
>Barry
>  
>  
> > On Oct 31, 2018, at 5:45 AM, Wenjin Xing via petsc-users 
> >  wrote:
> > 
> > My issue is summarized in the picture and posted in the link 
> > https://scicomp.stackexchange.com/questions/30458/what-does-the-error-this-matrix-type-does-not-have-a-find-zero-diagonals-define?noredirect=1#comment56074_30458
> >  
> > 
> >  
> > Kind regards
> > Wenjin



Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Smith, Barry F. via petsc-users


> On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users 
>  wrote:
> 
> Well yes, naturally, for the residual, but adding -ksp_monitor_true_residual just gives
> 
>   0 KSP unpreconditioned resid norm 3.583290589961e+00 true resid norm 
> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>   1 KSP unpreconditioned resid norm 0.e+00 true resid norm 
> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
> Linear solve converged due to CONVERGED_ATOL iterations 1

   Very bad stuff is happening in the preconditioner. The preconditioner must 
have a null space (which a useful preconditioner should not have).
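
A quick way to confirm this, sketched here on the assumption that the application code has access to the assembled KSP (call it ksp) and the right-hand side b (this is not code from the thread), is to apply the preconditioner to b and look at the norm of the result; a (near-)zero norm means the preconditioner annihilates b, which is exactly what a zero preconditioned residual at iteration 1 suggests:

#include <petscksp.h>

/* Diagnostic sketch: print || P^{-1} b || for the preconditioner held by ksp. */
static PetscErrorCode CheckPCOnRhs(KSP ksp, Vec b)
{
  PC             pc;
  Vec            y;
  PetscReal      nrm;
  PetscErrorCode ierr;

  ierr = KSPSetUp(ksp);CHKERRQ(ierr);            /* make sure the PC is actually built */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &y);CHKERRQ(ierr);
  ierr = PCApply(pc, b, y);CHKERRQ(ierr);        /* y = P^{-1} b */
  ierr = VecNorm(y, NORM_2, &nrm);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)ksp),
                     "|| P^{-1} b || = %g\n", (double)nrm);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  return 0;
}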

> 
> Mark - if that helps - a Poisson equation is used for the pressure, so the 
> Helmholtz operator is the same as for the velocity in the interior.
> 
> Thibaut
> 
>> On 31 Oct 2018, at 21:05, Mark Adams wrote:
>> 
>> These are indefinite (bad) Helmholtz problems. Right?
>> 
>> On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley  wrote:
>> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel  
>> wrote:
>> Hi Mark, Matthew,
>> 
>> Thanks for taking the time.
>> 
>> 1) You're not suggesting having -fieldsplit_X_ksp_type fgmres for each 
>> field, are you?
>> 
>> 2) No, the matrix has pressure in one of the fields. Here it's a 2D problem 
>> (but we're also doing 3D); the unknowns are (p,u,v) and those are my 3 
>> fields. We are dealing with subsonic/transonic flows, so it is indeed 
>> convection dominated.
>> 
>> 3) We are in frequency domain with respect to time, i.e. 
>> \partial{phi}/\partial{t} = -i*omega*phi.
>> 
>> 4) Hypre is unfortunately not an option since we are in complex arithmetic.
>> 
>> 
>> 
>>> I'm not sure about "-fieldsplit_pc_type gamg". GAMG should work on one 
>>> block, and hence be a subpc. I'm not up on fieldsplit syntax.
>> According to the online manual page this syntax applies the suffix to all 
>> the defined fields?
>> 
>> 
>> 
>>> Mark is correct. I wanted you to change the smoother. He shows how to 
>>> change it to Richardson (make sure you add the self-scale option), which is 
>>> probably the best choice.
>>> 
>>>   Thanks,
>>> 
>>>  Matt
>> 
>> You did tell me to set it to GMRES if I'm not mistaken; that's why I tried 
>> "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email). Also, it 
>> wasn't clear whether these should be applied to each block or the whole 
>> system, as the online manual pages and the PDF manual barely mention smoothers 
>> and how to manipulate MG objects with KSP/PC, especially with 
>> PCFIELDSPLIT, where examples are scarce.
>> 
>> From what I can gather from your suggestions I tried (lines with X are 
>> repeated for X={0,1,2}) 
>> 
>> This looks good. How can an identically zero vector produce a 0 residual? 
>> You should always monitor with
>> 
>>   -ksp_monitor_true_residual.
>> 
>>Thanks,
>>  
>> Matt 
>> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
>> -ksp_type fgmres -ksp_rtol 1.0e-8 \
>> -pc_type fieldsplit \
>> -pc_fieldsplit_type multiplicative \
>> -pc_fieldsplit_block_size 3 \
>> -pc_fieldsplit_0_fields 0 \
>> -pc_fieldsplit_1_fields 1 \
>> -pc_fieldsplit_2_fields 2 \
>> -fieldsplit_X_pc_type gamg \
>> -fieldsplit_X_ksp_type gmres \
>> -fieldsplit_X_ksp_rtol 1e-10 \
>> -fieldsplit_X_mg_levels_ksp_type richardson \
>> -fieldsplit_X_mg_levels_pc_type sor \
>> -fieldsplit_X_pc_gamg_agg_nsmooths 0 \
>> -fieldsplit_X_mg_levels_ksp_richardson_self_scale \
>> -log_view
>> 
>> which yields 
>> 
>> KSP Object: 1 MPI processes
>>   type: fgmres
>> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization 
>> with no iterative refinement
>> happy breakdown tolerance 1e-30
>>   maximum iterations=1, initial guess is zero
>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
>>   left preconditioning
>>   using DEFAULT norm type for convergence test
>> PC Object: 1 MPI processes
>>   type: fieldsplit
>>   PC has not been set up so information may be incomplete
>> FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize 
>> = 3
>> Solver info for each split is in the following KSP objects:
>>   Split number 0 Fields  0
>>   KSP Object: (fieldsplit_0_) 1 MPI processes
>> type: preonly
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using DEFAULT norm type for convergence test
>>   PC Object: (fieldsplit_0_) 1 MPI processes
>> type not yet set
>> PC has not been set up so information may be incomplete
>>   Split number 1 Fields  1
>>   KSP Object: (fieldsplit_1_) 1 MPI processes
>> type: preonly
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using DEFAULT norm type for convergence test
>>   PC Object: (fieldsplit_1_) 1 MPI processes
>> type not yet set
>> PC has not been set up so information may be incomplete

Re: [petsc-users] (no subject)

2018-10-31 Thread Smith, Barry F. via petsc-users


   This option only works with AIJ matrices; are you using BAIJ or SBAIJ 
matrices (or a shell matrix)?

   Barry


> On Oct 31, 2018, at 5:45 AM, Wenjin Xing via petsc-users 
>  wrote:
> 
> My issue is summarized in the picture and posted in the link 
> https://scicomp.stackexchange.com/questions/30458/what-does-the-error-this-matrix-type-does-not-have-a-find-zero-diagonals-define?noredirect=1#comment56074_30458
>  
> 
>  
> Kind regards
> Wenjin



Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Appel, Thibaut via petsc-users
Well yes, naturally, for the residual, but adding -ksp_monitor_true_residual just gives

  0 KSP unpreconditioned resid norm 3.583290589961e+00 true resid norm 
3.583290589961e+00 ||r(i)||/||b|| 1.e+00
  1 KSP unpreconditioned resid norm 0.e+00 true resid norm 
3.583290589961e+00 ||r(i)||/||b|| 1.e+00
Linear solve converged due to CONVERGED_ATOL iterations 1

Mark - if that helps - a Poisson equation is used for the pressure, so the 
Helmholtz operator is the same as for the velocity in the interior.

Thibaut

On 31 Oct 2018, at 21:05, Mark Adams <mfad...@lbl.gov> wrote:

These are indefinite (bad) Helmholtz problems. Right?

On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley <knep...@gmail.com> wrote:
On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel  wrote:
Hi Mark, Matthew,

Thanks for taking the time.

1) You're not suggesting having -fieldsplit_X_ksp_type fgmres for each field, 
are you?

2) No, the matrix has pressure in one of the fields. Here it's a 2D problem 
(but we're also doing 3D); the unknowns are (p,u,v) and those are my 3 fields. 
We are dealing with subsonic/transonic flows, so it is indeed convection 
dominated.

3) We are in frequency domain with respect to time, i.e. 
\partial{phi}/\partial{t} = -i*omega*phi.

4) Hypre is unfortunately not an option since we are in complex arithmetic.



I'm not sure about "-fieldsplit_pc_type gamg". GAMG should work on one block, 
and hence be a subpc. I'm not up on fieldsplit syntax.
According to the online manual page this syntax applies the suffix to all the 
defined fields?



Mark is correct. I wanted you to change the smoother. He shows how to change it 
to Richardson (make sure you add the self-scale option), which is probably the 
best choice.

  Thanks,

 Matt

You did tell me to set it to GMRES if I'm not mistaken; that's why I tried 
"-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email). Also, it 
wasn't clear whether these should be applied to each block or the whole system, 
as the online manual pages and the PDF manual barely mention smoothers and how to 
manipulate MG objects with KSP/PC, especially with PCFIELDSPLIT, where 
examples are scarce.

From what I can gather from your suggestions I tried (lines with X are repeated 
for X={0,1,2})

This looks good. How can an identically zero vector produce a 0 residual? You 
should always monitor with

  -ksp_monitor_true_residual.

   Thanks,

Matt
-ksp_view_pre -ksp_monitor -ksp_converged_reason \
-ksp_type fgmres -ksp_rtol 1.0e-8 \
-pc_type fieldsplit \
-pc_fieldsplit_type multiplicative \
-pc_fieldsplit_block_size 3 \
-pc_fieldsplit_0_fields 0 \
-pc_fieldsplit_1_fields 1 \
-pc_fieldsplit_2_fields 2 \
-fieldsplit_X_pc_type gamg \
-fieldsplit_X_ksp_type gmres \
-fieldsplit_X_ksp_rtol 1e-10 \
-fieldsplit_X_mg_levels_ksp_type richardson \
-fieldsplit_X_mg_levels_pc_type sor \
-fieldsplit_X_pc_gamg_agg_nsmooths 0 \
-fieldsplit_X_mg_levels_ksp_richardson_self_scale \
-log_view

which yields

KSP Object: 1 MPI processes
  type: fgmres
restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization 
with no iterative refinement
happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 1 MPI processes
  type: fieldsplit
  PC has not been set up so information may be incomplete
FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize = 3
Solver info for each split is in the following KSP objects:
  Split number 0 Fields  0
  KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
left preconditioning
using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_0_) 1 MPI processes
type not yet set
PC has not been set up so information may be incomplete
  Split number 1 Fields  1
  KSP Object: (fieldsplit_1_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
left preconditioning
using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_1_) 1 MPI processes
type not yet set
PC has not been set up so information may be incomplete
  Split number 2 Fields  2
  KSP Object: (fieldsplit_2_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
left preconditioning
using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_2_) 1 MPI processes
type not yet set
PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
type: seqaij
rows=52500, cols=52500
total: nonzeros=1127079, 

Re: [petsc-users] Convergence of AMG

2018-10-31 Thread Mark Adams via petsc-users
On Wed, Oct 31, 2018 at 3:43 PM Manav Bhatia  wrote:

> Here are the updates. I did not find the options to make much difference
> in the results.
>
> I noticed this message in the GAMG output for cases 2, 3:  HARD stop of
> coarsening on level 3.  Grid too small: 1 block nodes
>

Yeah, this is what it looks like, but I can see that there are 5 levels below.


>
> Is this implying that the mesh on level 3 could not be coarsened towards
> levels 4/5?
>
> In one of your earlier emails you had mentioned reducing the rate of
> coarsening. What is the option to do that?
>

The coarsening rate is already pretty slow, but you could try reducing it further
with a finite -pc_gamg_threshold val (you could try val=0.01, 0.02, 0.05).


>
> Common options:
> -pc_mg_levels 5 -mg_levels_ksp_max_it 4 -pc_gamg_square_graph 0
> -pc_gamg_threshold 0. -mg_levels_ksp_type richardson -gamg_est_ksp_type cg
>
> Case# | #levels | #KSP Iters | Extra Options
> ------|---------|------------|------------------------------------------------------
>   1   |    5    |     67     |
>   2   |    5    |     67     | -pc_gamg_agg_nsmooths 2
>   3   |    5    |     67     | -pc_gamg_agg_nsmooths 2  -mg_levels_esteig_ksp_type cg
>

I did plates a long time ago and they worked fine, but something is
going wrong here. Perhaps my plate formulations were different,
but I don't think so (what we called Reissner-Mindlin).

You might try an easier test case if this is not already a super simple test (e.g.,
well-shaped elements).

Otherwise you can limit the number of levels that you use and use a
parallel direct solver for the (large) coarse grid. To do that use:
"-pc_gamg_use_parallel_coarse_grid_solver -mg_coarse_pc_type lu" and
configure your PETSc with a parallel coarse grid solver like MUMPS or
SuperLU.

Then set -pc_gamg_coarse_eq_limit N with N being large. Increase it and find the
value that minimizes the solve time. You will find that parallel LU is faster than MG
for "small" problems (note, "small" could be pretty large here). So you will
want to run LU first (i.e., no AMG) to get a baseline and then try GAMG.
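
Putting those pieces together, here is a small C sketch that inserts the options above programmatically before KSPSetFromOptions() is called; the coarse equation limit of 50000 and the choice of MUMPS for the coarse LU are placeholders/assumptions, not values recommended in this thread, and the same string can just as well be passed on the command line:

#include <petscksp.h>

/* Sketch only: preload GAMG + parallel coarse-grid LU options into the
 * default options database. Call before KSPSetFromOptions(). */
static PetscErrorCode SetGAMGCoarseLUOptions(void)
{
  PetscErrorCode ierr;

  ierr = PetscOptionsInsertString(NULL,
           "-pc_type gamg "
           "-pc_gamg_use_parallel_coarse_grid_solver "
           "-pc_gamg_coarse_eq_limit 50000 "   /* placeholder value to tune */
           "-mg_coarse_pc_type lu "
           "-mg_coarse_pc_factor_mat_solver_type mumps");CHKERRQ(ierr);
  return 0;
}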

Oh, and you could also try hypre (-pc_type hypre -pc_hypre_type boomeramg).
In theory hypre should not be great here, but it is worth a try. Maybe there's
just something going wrong with GAMG.


Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Mark Adams via petsc-users
These are indefinite (bad) Helmholtz problems. Right?

On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley  wrote:

> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel 
> wrote:
>
>> Hi Mark, Matthew,
>>
>> Thanks for taking the time.
>>
>> 1) You're not suggesting having -fieldsplit_X_ksp_type *f*gmres for each
>> field, are you?
>>
>> 2) No, the matrix *has* pressure in one of the fields. Here it's a 2D
>> problem (but we're also doing 3D); the unknowns are (p,u,v) and those are
>> my 3 fields. We are dealing with subsonic/transonic flows, so it is
>> indeed convection dominated.
>>
>> 3) We are in frequency domain with respect to time, i.e.
>> \partial{phi}/\partial{t} = -i*omega*phi.
>>
>> 4) Hypre is unfortunately not an option since we are in complex
>> arithmetic.
>>
>>
>> I'm not sure about "-fieldsplit_pc_type gamg". GAMG should work on one
>> block, and hence be a subpc. I'm not up on fieldsplit syntax.
>>
>> According to the online manual page this syntax applies the suffix to all
>> the defined fields?
>>
>>
>> Mark is correct. I wanted you to change the smoother. He shows how to
>> change it to Richardson (make sure you add the self-scale option), which is
>> probably the best choice.
>>
>>   Thanks,
>>
>>  Matt
>>
>> You did tell me to set it to GMRES if I'm not mistaken; that's why I
>> tried "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email).
>> Also, it wasn't clear whether these should be applied to each block or the
>> whole system, as the online manual pages and the PDF manual barely mention
>> smoothers and how to manipulate MG objects with KSP/PC, especially
>> with PCFIELDSPLIT, where examples are scarce.
>>
>> From what I can gather from your suggestions I tried (lines with X are
>> repeated for X={0,1,2})
>>
>> This looks good. How can an identically zero vector produce a 0 residual?
> You should always monitor with
>
>   -ksp_monitor_true_residual.
>
>Thanks,
>
> Matt
>
>> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
>> -ksp_type fgmres -ksp_rtol 1.0e-8 \
>> -pc_type fieldsplit \
>> -pc_fieldsplit_type multiplicative \
>> -pc_fieldsplit_block_size 3 \
>> -pc_fieldsplit_0_fields 0 \
>> -pc_fieldsplit_1_fields 1 \
>> -pc_fieldsplit_2_fields 2 \
>> -fieldsplit_X_pc_type gamg \
>> -fieldsplit_X_ksp_type gmres \
>> -fieldsplit_X_ksp_rtol 1e-10 \
>> -fieldsplit_X_mg_levels_ksp_type richardson \
>> -fieldsplit_X_mg_levels_pc_type sor \
>> -fieldsplit_X_pc_gamg_agg_nsmooths 0 \
>> -fieldsplit_X_mg_levels_ksp_richardson_self_scale \
>> -log_view
>>
>> which yields
>>
>> KSP Object: 1 MPI processes
>>   type: fgmres
>> restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> happy breakdown tolerance 1e-30
>>   maximum iterations=1, initial guess is zero
>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
>>   left preconditioning
>>   using DEFAULT norm type for convergence test
>> PC Object: 1 MPI processes
>>   type: fieldsplit
>>   PC has not been set up so information may be incomplete
>> FieldSplit with MULTIPLICATIVE composition: total splits = 3,
>> blocksize = 3
>> Solver info for each split is in the following KSP objects:
>>   Split number 0 Fields  0
>>   KSP Object: (fieldsplit_0_) 1 MPI processes
>> type: preonly
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using DEFAULT norm type for convergence test
>>   PC Object: (fieldsplit_0_) 1 MPI processes
>> type not yet set
>> PC has not been set up so information may be incomplete
>>   Split number 1 Fields  1
>>   KSP Object: (fieldsplit_1_) 1 MPI processes
>> type: preonly
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using DEFAULT norm type for convergence test
>>   PC Object: (fieldsplit_1_) 1 MPI processes
>> type not yet set
>> PC has not been set up so information may be incomplete
>>   Split number 2 Fields  2
>>   KSP Object: (fieldsplit_2_) 1 MPI processes
>> type: preonly
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using DEFAULT norm type for convergence test
>>   PC Object: (fieldsplit_2_) 1 MPI processes
>> type not yet set
>> PC has not been set up so information may be incomplete
>>   linear system matrix = precond matrix:
>>   Mat Object: 1 MPI processes
>> type: seqaij
>> rows=52500, cols=52500
>> total: nonzeros=1127079, allocated nonzeros=1128624
>> total number of mallocs used during MatSetValues calls =0
>>   not using I-node routines
>>   0 KSP Residual norm 3.583290589961e+00
>>   1 KSP Residual norm 0.e+00
>> Linear solve converged due to CONVERGED_ATOL iterations 1
>>
>> so something must not be 

Re: [petsc-users] Two applications with PETSc

2018-10-31 Thread Guido Giuntoli via petsc-users
This is what I need! Thank you, Matt!

On Wed, 31 Oct 2018 at 19:53, Matthew Knepley wrote:

> On Wed, Oct 31, 2018 at 1:34 PM Guido Giuntoli via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi, I have two codes that use PETSc. The first one is parallel and uses
>> MPI and the other doesn't use MPI (it uses sequential Mats and Vecs because
>> the problem is smaller). I now need to couple both codes, and my question is
>> how to deal with the PetscInitialize in the sequential code. I know that
>> PetscInitialize calls MPI_Init, so if the parallel code has already called
>> MPI_Init, will I get an error or not? Every process in the parallel
>> code needs to use the functions of the sequential code, so every process
>> will call the PetscInitialize of the sequential code.
>>
>> Constraint: I would like to use the same compiled PETSc library to link
>> both codes.
>>
>
> You should only call PetscInitialize() once (just like MPI_Init()). You can
> check whether it has already been called using PetscInitialized().
>
>   Thanks,
>
> Matt
>
>
>> Thank you, Guido.
>>
>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>
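
A minimal C sketch of the guarded initialization described above (the function and argument names are illustrative, not taken from either code base):

#include <petscsys.h>

/* Initialize PETSc only if the caller (e.g. the parallel code) has not
 * already done so; safe to call from both the standalone and coupled builds. */
PetscErrorCode seq_code_init(int *argc, char ***argv)
{
  PetscBool      initialized;
  PetscErrorCode ierr;

  ierr = PetscInitialized(&initialized);CHKERRQ(ierr);
  if (!initialized) {
    ierr = PetscInitialize(argc, argv, NULL, NULL);CHKERRQ(ierr);
  }
  return 0;
}

A matching guard with PetscFinalized()/PetscFinalize() in the shutdown path keeps the standalone and coupled builds consistent.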


Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Thibaut Appel via petsc-users

Hi Mark, Matthew,

Thanks for taking the time.

1) You're not suggesting having -fieldsplit_X_ksp_type *f*gmres for each 
field, are you?


2) No, the matrix *has* pressure in one of the fields. Here it's a 2D 
problem (but we're also doing 3D); the unknowns are (p,u,v) and those 
are my 3 fields. We are dealing with subsonic/transonic flows, so it is 
indeed convection dominated.


3) We are in frequency domain with respect to time, i.e. 
\partial{phi}/\partial{t} = -i*omega*phi.


4) Hypre is unfortunately not an option since we are in complex arithmetic.


I'm not sure about "-fieldsplit_pc_type gamg". GAMG should work on one 
block, and hence be a subpc. I'm not up on fieldsplit syntax.


According to the online manual page this syntax applies the suffix to 
all the defined fields?



Mark is correct. I wanted you to change the smoother. He shows how to 
change it to Richardson (make sure you add the self-scale option), 
which is probably the best choice.


  Thanks,

     Matt


You did tell me to set it to GMRES if I'm not mistaken; that's why I 
tried "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email). 
Also, it wasn't clear whether these should be applied to each block or 
the whole system, as the online manual pages and the PDF manual barely 
mention smoothers and how to manipulate MG objects with KSP/PC, 
especially with PCFIELDSPLIT, where examples are scarce.


From what I can gather from your suggestions I tried (lines with X are 
repeated for X={0,1,2})


-ksp_view_pre -ksp_monitor -ksp_converged_reason \
-ksp_type fgmres -ksp_rtol 1.0e-8 \
-pc_type fieldsplit \
-pc_fieldsplit_type multiplicative \
-pc_fieldsplit_block_size 3 \
-pc_fieldsplit_0_fields 0 \
-pc_fieldsplit_1_fields 1 \
-pc_fieldsplit_2_fields 2 \
-fieldsplit_X_pc_type gamg \
-fieldsplit_X_ksp_type gmres \
-fieldsplit_X_ksp_rtol 1e-10 \
-fieldsplit_X_mg_levels_ksp_type richardson \
-fieldsplit_X_mg_levels_pc_type sor \
-fieldsplit_X_pc_gamg_agg_nsmooths 0 \
-fieldsplit_X_mg_levels_ksp_richardson_self_scale \
-log_view

which yields

KSP Object: 1 MPI processes
  type: fgmres
    restart=30, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement

    happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 1 MPI processes
  type: fieldsplit
  PC has not been set up so information may be incomplete
    FieldSplit with MULTIPLICATIVE composition: total splits = 3, 
blocksize = 3

    Solver info for each split is in the following KSP objects:
  Split number 0 Fields  0
  KSP Object: (fieldsplit_0_) 1 MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_0_) 1 MPI processes
    type not yet set
    PC has not been set up so information may be incomplete
  Split number 1 Fields  1
  KSP Object: (fieldsplit_1_) 1 MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_1_) 1 MPI processes
    type not yet set
    PC has not been set up so information may be incomplete
  Split number 2 Fields  2
  KSP Object: (fieldsplit_2_) 1 MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_2_) 1 MPI processes
    type not yet set
    PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=52500, cols=52500
    total: nonzeros=1127079, allocated nonzeros=1128624
    total number of mallocs used during MatSetValues calls =0
  not using I-node routines
  0 KSP Residual norm 3.583290589961e+00
  1 KSP Residual norm 0.e+00
Linear solve converged due to CONVERGED_ATOL iterations 1

so something must not be set correctly. The solution is identically zero 
everywhere.


Is that option list what you meant? Could you let me know what should be 
corrected?



Thanks for your support,


Thibaut
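
For reference, the per-split pieces of the option list above can also be set programmatically by pulling the sub-KSPs out of the fieldsplit PC; a rough sketch (not code from this thread) assuming an outer KSP named ksp whose operators and fieldsplit layout have already been defined:

#include <petscksp.h>

/* Sketch: give every split a GMRES inner solver preconditioned by GAMG. */
static PetscErrorCode ConfigureSplits(KSP ksp)
{
  PC             pc;
  KSP           *subksp;
  PetscInt       nsplits, i;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetUp(pc);CHKERRQ(ierr);                        /* the splits exist only after setup */
  ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nsplits; i++) {
    PC subpc;
    ierr = KSPSetType(subksp[i], KSPGMRES);CHKERRQ(ierr);  /* inner solver per field */
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCGAMG);CHKERRQ(ierr);         /* GAMG on each block */
  }
  ierr = PetscFree(subksp);CHKERRQ(ierr);                  /* frees the array, not the KSPs */
  return 0;
}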


On 31/10/2018 16:43, Mark Adams wrote:



On Tue, Oct 30, 2018 at 5:23 PM Appel, Thibaut via petsc-users 
<petsc-users@mcs.anl.gov> wrote:


Dear users,

Following a suggestion from Matthew Knepley I’ve been trying to
apply fieldsplit/gamg for my set of PDEs but I’m still
encountering issues despite various tests. pc_gamg simply won’t start.
Note that direct solvers always yield the correct, physical result.
Removing the fieldsplit 

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Mark Adams via petsc-users
Again, you probably want to avoid Chebyshev: use ‘-mg_levels_ksp_type
richardson -mg_levels_pc_type sor’ with the proper prefix.

I'm not sure about "-fieldsplit_pc_type gamg". GAMG should work on one
block, and hence be a subpc. I'm not up on fieldsplit syntax.

On Wed, Oct 31, 2018 at 9:22 AM Thibaut Appel via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi Matthew,
>
> Which database option are you referring to?
>
> I tried to add -fieldsplit_mg_levels_ksp_type gmres (and
> -fieldsplit_mg_levels_ksp_max_it 4 for another run) to my options (cf.
> below), which starts the iterations, but it takes 1 hour for PETSc to do 13
> of them, so something must be wrong.
>
> Reminder: my baseline database options line reads
>
> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
> -ksp_rtol 1.0e-8 -ksp_gmres_restart 300 \
> -ksp_type fgmres \
> -pc_type fieldsplit \
> -pc_fieldsplit_type multiplicative \
> -pc_fieldsplit_block_size 3 \
> -pc_fieldsplit_0_fields 0   \
> -pc_fieldsplit_1_fields 1   \
> -pc_fieldsplit_2_fields 2   \
> -fieldsplit_pc_type gamg\
> -fieldsplit_ksp_type gmres  \
> -fieldsplit_ksp_rtol 1.0e-8
>
> which gives
>
> KSP Object: 1 MPI processes
>   type: fgmres
> restart=300, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
> happy breakdown tolerance 1e-30
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
>   left preconditioning
>   using DEFAULT norm type for convergence test
> PC Object: 1 MPI processes
>   type: fieldsplit
>   PC has not been set up so information may be incomplete
> FieldSplit with MULTIPLICATIVE composition: total splits = 3,
> blocksize = 3
> Solver info for each split is in the following KSP objects:
>   Split number 0 Fields  0
>   KSP Object: (fieldsplit_0_) 1 MPI processes
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> left preconditioning
> using DEFAULT norm type for convergence test
>   PC Object: (fieldsplit_0_) 1 MPI processes
> type not yet set
> PC has not been set up so information may be incomplete
>   Split number 1 Fields  1
>   KSP Object: (fieldsplit_1_) 1 MPI processes
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> left preconditioning
> using DEFAULT norm type for convergence test
>   PC Object: (fieldsplit_1_) 1 MPI processes
> type not yet set
> PC has not been set up so information may be incomplete
>   Split number 2 Fields  2
>   KSP Object: (fieldsplit_2_) 1 MPI processes
> type: preonly
> maximum iterations=1, initial guess is zero
> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> left preconditioning
> using DEFAULT norm type for convergence test
>   PC Object: (fieldsplit_2_) 1 MPI processes
> type not yet set
> PC has not been set up so information may be incomplete
>   linear system matrix = precond matrix:
>   Mat Object: 1 MPI processes
> type: seqaij
> rows=52500, cols=52500
> total: nonzeros=1127079, allocated nonzeros=1128624
> total number of mallocs used during MatSetValues calls =0
>   not using I-node routines
>   0 KSP Residual norm 3.583290589961e+00
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Eigen estimator failed: DIVERGED_NANORINF at iteration
> 0[0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> [0]PETSC ERROR: Configure options --PETSC_ARCH=msi_cplx_debug
> --with-scalar-type=complex --with-precision=double --with-debugging=1
> --with-valgrind=1 --with-debugger=gdb --with-fortran-kernels=1
> --download-mpich --download-hwloc --download-fblaslapack
> --download-scalapack --download-metis --download-parmetis
> --download-ptscotch --download-mumps --download-slepc
> [0]PETSC ERROR: #1 KSPSolve_Chebyshev() line 381 in
> /home/thibaut/Packages/petsc/src/ksp/ksp/impls/cheby/cheby.c
> [0]PETSC ERROR: #2 KSPSolve() line 780 in
> /home/thibaut/Packages/petsc/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: #3 PCMGMCycle_Private() line 20 in
> /home/thibaut/Packages/petsc/src/ksp/pc/impls/mg/mg.c
> [0]PETSC ERROR: #4 PCApply_MG() line 377 in
> /home/thibaut/Packages/petsc/src/ksp/pc/impls/mg/mg.c
> [0]PETSC ERROR: #5 PCApply() line 462 in
> /home/thibaut/Packages/petsc/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #6 KSP_PCApply() line 281 in
> /home/thibaut/Packages/petsc/include/petsc/private/kspimpl.h
> [0]PETSC ERROR: #7 KSPInitialResidual() line 67 in
> /home/thibaut/Packages/petsc/src/ksp/ksp/interface/itres.c
> [0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in
> /home/thibaut/Packages/petsc/src/ksp/ksp/impls/gmres/gmres.c
> [0]PETSC ERROR: #9 KSPSolve() line 780 in

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Mark Adams via petsc-users
On Tue, Oct 30, 2018 at 5:23 PM Appel, Thibaut via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Dear users,
>
> Following a suggestion from Matthew Knepley, I’ve been trying to apply
> fieldsplit/gamg for my set of PDEs, but I’m still encountering issues
> despite various tests. pc_gamg simply won’t start.
> Note that direct solvers always yield the correct, physical result.
> Removing the fieldsplit to focus on the gamg bit and trying to solve the
> linear system on a modest size problem still gives, with
>
> '-ksp_monitor -ksp_rtol 1.0e-10 -ksp_gmres_restart 300 -ksp_type gmres
> -pc_type gamg'
>
> [3]PETSC ERROR: - Error Message
> --
> [3]PETSC ERROR: Petsc has generated inconsistent data
> [3]PETSC ERROR: Have un-symmetric graph (apparently). Use
> '-(null)pc_gamg_sym_graph true' to symetrize the graph or
> '-(null)pc_gamg_threshold -1' if the matrix is structurally symmetric.
>
> And since then, after adding '-pc_gamg_sym_graph true' I have been getting
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Eigen estimator failed: DIVERGED_NANORINF at iteration
>
> -ksp_chebyshev_esteig_noisy 0/1 does not change anything
>
> Knowing that the Chebyshev eigenvalue estimator needs a positive spectrum, I tried
> ‘-mg_levels_ksp_type gmres’, but iterations would just go on endlessly.
>

This is OK, but you need to use '-ksp_type *f*gmres' (this could be why it
is failing ...).

It looks like your matrix is 1) just the velocity field and 2) very
unsymmetric (eg, convection dominated). I would start with
‘-mg_levels_ksp_type richardson -mg_levels_pc_type sor’.

I would also start with unsmoothed aggregation: '-pc_gamg_agg_nsmooths 0'


>
> It seems that I have indeed eigenvalues of rather high magnitude in the
> spectrum of my operator without being able to determine the reason.
> The eigenvectors look like small artifacts at the wall-inflow or
> wall-outflow corners with zero anywhere else but I do not know how to
> interpret this.
> Equations are time-harmonic linearized Navier-Stokes to which a forcing is
> applied; there’s no time-marching.
>

You mean you are in frequency domain?


>
> The matrix is formed with an MPIAIJ type. The formulation is incompressible, in
> complex arithmetic and the 2D physical domain is mapped to a logically
> rectangular,


This kind of messes up the null space that AMG depends on but AMG theory is
gone for NS anyway.


> regular collocated grid with a high-order finite difference method.
> I determine the ownership of the rows/degrees of freedom of the matrix
> with PetscSplitOwnership and I’m not using DMDA.
>

Our iterative solvers are probably not going to work well on this but you
should test hypre also (-pc_type hypre -pc_hypre_type boomeramg). You need
to configure PETSc to download hypre.

Mark


>
> The Fortran application code is memory-leak free and has undergone a
> strict verification/validation procedure for different variations of the
> PDEs.
>
> If there’s any problem with the matrix, what could help with the diagnosis?
> At this point I’m running out of ideas so I would really appreciate
> additional suggestions and discussions.
>
> Thanks for your continued support,
>
>
> Thibaut


Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Thibaut Appel via petsc-users

Hi Matthew,

Which database option are you referring to?

I tried to add -fieldsplit_mg_levels_ksp_type gmres (and 
-fieldsplit_mg_levels_ksp_max_it 4 for another run) to my options (cf. 
below), which starts the iterations, but it takes 1 hour for PETSc to do 
13 of them, so something must be wrong.


Reminder: my baseline database options line reads

-ksp_view_pre -ksp_monitor -ksp_converged_reason \
-ksp_rtol 1.0e-8 -ksp_gmres_restart 300 \
-ksp_type fgmres \
-pc_type fieldsplit \
-pc_fieldsplit_type multiplicative \
-pc_fieldsplit_block_size 3 \
-pc_fieldsplit_0_fields 0   \
-pc_fieldsplit_1_fields 1   \
-pc_fieldsplit_2_fields 2   \
-fieldsplit_pc_type gamg    \
-fieldsplit_ksp_type gmres  \
-fieldsplit_ksp_rtol 1.0e-8

which gives

KSP Object: 1 MPI processes
  type: fgmres
    restart=300, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement

    happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 1 MPI processes
  type: fieldsplit
  PC has not been set up so information may be incomplete
    FieldSplit with MULTIPLICATIVE composition: total splits = 3, 
blocksize = 3

    Solver info for each split is in the following KSP objects:
  Split number 0 Fields  0
  KSP Object: (fieldsplit_0_) 1 MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_0_) 1 MPI processes
    type not yet set
    PC has not been set up so information may be incomplete
  Split number 1 Fields  1
  KSP Object: (fieldsplit_1_) 1 MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_1_) 1 MPI processes
    type not yet set
    PC has not been set up so information may be incomplete
  Split number 2 Fields  2
  KSP Object: (fieldsplit_2_) 1 MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object: (fieldsplit_2_) 1 MPI processes
    type not yet set
    PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=52500, cols=52500
    total: nonzeros=1127079, allocated nonzeros=1128624
    total number of mallocs used during MatSetValues calls =0
  not using I-node routines
  0 KSP Residual norm 3.583290589961e+00
[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: Eigen estimator failed: DIVERGED_NANORINF at iteration 
0[0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
[0]PETSC ERROR: Configure options --PETSC_ARCH=msi_cplx_debug 
--with-scalar-type=complex --with-precision=double --with-debugging=1 
--with-valgrind=1 --with-debugger=gdb --with-fortran-kernels=1 
--download-mpich --download-hwloc --download-fblaslapack 
--download-scalapack --download-metis --download-parmetis 
--download-ptscotch --download-mumps --download-slepc
[0]PETSC ERROR: #1 KSPSolve_Chebyshev() line 381 in 
/home/thibaut/Packages/petsc/src/ksp/ksp/impls/cheby/cheby.c
[0]PETSC ERROR: #2 KSPSolve() line 780 in 
/home/thibaut/Packages/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #3 PCMGMCycle_Private() line 20 in 
/home/thibaut/Packages/petsc/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #4 PCApply_MG() line 377 in 
/home/thibaut/Packages/petsc/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #5 PCApply() line 462 in 
/home/thibaut/Packages/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #6 KSP_PCApply() line 281 in 
/home/thibaut/Packages/petsc/include/petsc/private/kspimpl.h
[0]PETSC ERROR: #7 KSPInitialResidual() line 67 in 
/home/thibaut/Packages/petsc/src/ksp/ksp/interface/itres.c
[0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in 
/home/thibaut/Packages/petsc/src/ksp/ksp/impls/gmres/gmres.c
[0]PETSC ERROR: #9 KSPSolve() line 780 in 
/home/thibaut/Packages/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #10 PCApply_FieldSplit() line 1107 in 
/home/thibaut/Packages/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c
[0]PETSC ERROR: #11 PCApply() line 462 in 
/home/thibaut/Packages/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #12 KSP_PCApply() line 281 in 
/home/thibaut/Packages/petsc/include/petsc/private/kspimpl.h
[0]PETSC ERROR: #13 KSPFGMRESCycle() line 166 in 
/home/thibaut/Packages/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c
[0]PETSC ERROR: #14 

[petsc-users] (no subject)

2018-10-31 Thread Wenjin Xing via petsc-users
My issue is summarized in the picture and posted in the link 
https://scicomp.stackexchange.com/questions/30458/what-does-the-error-this-matrix-type-does-not-have-a-find-zero-diagonals-define?noredirect=1#comment56074_30458

[inline image attachment]

Kind regards
Wenjin


Re: [petsc-users] Segmentation violation

2018-10-31 Thread Santiago Andres Triana via petsc-users
Hi Hong,

You can find the matrices here:
https://www.dropbox.com/s/ejpa9owkv8tjnwi/A.petsc?dl=0
https://www.dropbox.com/s/urjtxaezl0cv3om/B.petsc?dl=0

Changing the target value leads to the same error. What is strange is that
this works without a problem on two other machines. But on my main
workstation (the one I use for developing and testing) it fails :(

Thanks so much for your help!
Santiago



On Wed, Oct 31, 2018 at 2:48 AM Zhang, Hong  wrote:

> Santiago,
> The shift '-eps_target -2e-3+1.01i' is very close to the eigenvalues. What
> happens if you pick a target a little further away from your eigenvalues?
> I suspect mumps encounters a zero pivot during numerical factorization.
> There are options to handle it, but I need matrices A and B to investigate.
> I am not sure if the problem comes from a memory bug.
> Anyway, I'm cc'ing mumps developers here.
>
> Hong
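
For reference, the kind of MUMPS controls Hong alludes to can be reached from PETSc once the factored matrix exists; a rough sketch follows (the ICNTL(24)/CNTL(3) settings are assumptions to be checked against the MUMPS documentation, and for the ex7 test the equivalent -mat_mumps_icntl_24 1 command-line option may be the simpler route):

#include <petscksp.h>

/* Sketch only: enable MUMPS null-pivot detection on an LU PC that uses MUMPS. */
static PetscErrorCode EnableMumpsNullPivotDetection(PC pc)
{
  Mat            F;
  PetscErrorCode ierr;

  ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
  ierr = PCFactorSetUpMatSolverType(pc);CHKERRQ(ierr);   /* create the MUMPS factor matrix */
  ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);
  ierr = MatMumpsSetIcntl(F, 24, 1);CHKERRQ(ierr);        /* ICNTL(24): detect null pivot rows */
  ierr = MatMumpsSetCntl(F, 3, 1e-12);CHKERRQ(ierr);      /* CNTL(3): null-pivot threshold (assumed value) */
  return 0;
}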
>
> On Tue, Oct 30, 2018 at 8:09 PM Smith, Barry F. via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>>
>>   Yeah, this doesn't look good for MUMPS, but it isn't for sure the problem
>> either.
>>
>>The valgrind output should be sent to the MUMPS developers.
>>
>>Hong,
>>
>>  Can you send this to the MUMPS developers and see what they say?
>>
>> Thanks
>>
>>Barry
>>
>>
>> > On Oct 30, 2018, at 2:04 PM, Santiago Andres Triana 
>> wrote:
>> >
>> > This is the output of
>> > mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20
>> --log-file=valgrind.log.%p ./ex7 -malloc off -f1 A.petsc -f2 B.petsc
>> -eps_nev 4 -eps_target -2e-3+1.01i -st_type sinvert
>> >
>> > Generalized eigenproblem stored in file.
>> >
>> >  Reading COMPLEX matrices from binary files...
>> > [1]PETSC ERROR:
>> 
>> > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>> probably memory access out of range
>> > [1]PETSC ERROR: Try option -start_in_debugger or
>> -on_error_attach_debugger
>> > [1]PETSC ERROR: or see
>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> > [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>> OS X to find memory corruption errors
>> > [1]PETSC ERROR: likely location of problem given in stack below
>> > [1]PETSC ERROR: -  Stack Frames
>> 
>> > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>> available,
>> > [1]PETSC ERROR:   INSTEAD the line number of the start of the
>> function
>> > [1]PETSC ERROR:   is given.
>> > [1]PETSC ERROR: [1] MatFactorNumeric_MUMPS line 1205
>> /home/spin2/petsc-3.10.2/src/mat/impls/aij/mpi/mumps/mumps.c
>> > [1]PETSC ERROR: [1] MatLUFactorNumeric line 3054
>> /home/spin2/petsc-3.10.2/src/mat/interface/matrix.c
>> > [1]PETSC ERROR: [1] PCSetUp_LU line 59
>> /home/spin2/petsc-3.10.2/src/ksp/pc/impls/factor/lu/lu.c
>> > [1]PETSC ERROR: [1] PCSetUp line 894
>> /home/spin2/petsc-3.10.2/src/ksp/pc/interface/precon.c
>> > [1]PETSC ERROR: [1] KSPSetUp line 304
>> /home/spin2/petsc-3.10.2/src/ksp/ksp/interface/itfunc.c
>> > [1]PETSC ERROR: [1] STSetUp_Sinvert line 96
>> /home/spin2/slepc-3.10.1/src/sys/classes/st/impls/sinvert/sinvert.c
>> > [1]PETSC ERROR: [1] STSetUp line 233
>> /home/spin2/slepc-3.10.1/src/sys/classes/st/interface/stsolve.c
>> > [1]PETSC ERROR: [1] EPSSetUp line 104
>> /home/spin2/slepc-3.10.1/src/eps/interface/epssetup.c
>> > [1]PETSC ERROR: [1] EPSSolve line 129
>> /home/spin2/slepc-3.10.1/src/eps/interface/epssolve.c
>> > [1]PETSC ERROR: - Error Message
>> --
>> > [1]PETSC ERROR: Signal received
>> > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> > [1]PETSC ERROR: Petsc Release Version 3.10.2, Oct, 09, 2018
>> > [1]PETSC ERROR: ./ex7 on a arch-linux2-c-opt named wobble-wkst-as by
>> spin2 Tue Oct 30 19:42:18 2018
>> > [1]PETSC ERROR: Configure options --download-mpich
>> -with-scalar-type=complex --download-mumps --download-parmetis
>> --download-metis --download-scalapack --download-fblaslapack
>> --with-debugging=1 --download-superlu_dist --download-ptscotch
>> > [1]PETSC ERROR: #1 User provided function() line 0 in  unknown file
>> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1
>> >
>> >
>> >
>> > and one of the two valgrind logs (the other was empty):
>> >
>> > ==63004== Use of uninitialised value of size 8
>> > ==63004==at 0x694F8FF: zmumps_redistribution_
>> (zfac_distrib_distentry.F:367)
>> > ==63004==by 0x68E1266: zmumps_fac_driver_ (zfac_driver.F:1777)
>> > ==63004==by 0x6869F63: zmumps_ (zmumps_driver.F:1686)
>> > ==63004==by 0x6861B64: zmumps_f77_ (zmumps_f77.F:267)
>> > ==63004==by 0x685FB43: zmumps_c (mumps_c.c:417)
>> > ==63004==by 0x5B741CD: MatFactorNumeric_MUMPS (mumps.c:1227)
>> > ==63004==by 0x53C3DDB: MatLUFactorNumeric (matrix.c:3065)
>> > ==63004==by 0x626E652: PCSetUp_LU (lu.c:131)