Re: [petsc-users] gamg failure with petsc-dev

2014-03-21 Thread Stephan Kramer

On 21/03/14 04:24, Jed Brown wrote:

Stephan Kramer s.kra...@imperial.ac.uk writes:


We have been having some problems with GAMG on petsc-dev (master) for
cases that worked fine on petsc 3.4. We're solving a Stokes equation
(just the velocity block) for a simple convection in a square box
(isoviscous). The problem only occurs if we supply a near null space
(via MatSetNearNullSpace) where we supply the usual (1,0) (0,1) and
(-y,x) (near) null space vectors. If we supply those, the smoother
complains that the diagonal of the A matrix at the first coarsened
level contains a zero. If I dump out the prolongator from the finest
to the first coarsened level it indeed contains a zero column at that
same index. We're pretty confident that the fine level A matrix is
correct (it solves fine with LU). I've briefly spoken to Matt about
this and he suggested trying to run with -pc_gamg_agg_nsmooths 0 (as
the default changed from 3.4 to dev), but that didn't make any
difference, the dumped out prolongator still has zero columns, and it
crashes in the same way. Do you have any further suggestions what to
try and how to further debug this?
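
(For reference, a minimal sketch of attaching such a near null space, assuming
a 2D velocity block A and a coordinate Vec coords with block size 2; this is
illustrative only, not the code from the report above.)

#include <petscmat.h>

/* Sketch: build the (1,0), (0,1) and (-y,x) modes from the node coordinates
   and attach them to the velocity block so GAMG can use them.  A and coords
   are assumed to exist already. */
PetscErrorCode AttachNearNullSpace(Mat A, Vec coords)
{
  MatNullSpace   nullsp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatNullSpaceCreateRigidBody(coords, &nullsp);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nullsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
  /* GAMG also benefits from knowing the dof layout, e.g.
     MatSetBlockSize(A, 2) for interlaced (u,v) unknowns. */
  PetscFunctionReturn(0);
}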


Do you set the block size?  Can you reproduce by modifying
src/ksp/ksp/examples/tutorials/ex49.c (plane strain elasticity)?



I don't set a block size, no. About ex49: Ah great, with master (just updated 
now) I get:

[skramer@stommel]{/data/stephan/git/petsc/src/ksp/ksp/examples/tutorials}$ 
./ex49 -elas_pc_type gamg -mx 100 -my 100 -mat_no_inode
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Arguments are incompatible
[0]PETSC ERROR: Zero diagonal on row 1
[0]PETSC ERROR: See http://http://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.4.4-3671-gbb161d1  GIT Date: 
2014-03-21 01:14:15 +
[0]PETSC ERROR: ./ex49 on a linux-gnu-c-opt named stommel by skramer Fri Mar 21 
11:25:55 2014
[0]PETSC ERROR: Configure options --download-fblaslapack=1 --download-blacs=1 
--download-scalapack=1 --download-ptscotch=1 --download-mumps=1 
--download-hypre=1 --download-suitesparse=1 --download-ml=1
[0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1728 in 
/data/stephan/git/petsc/src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1760 in 
/data/stephan/git/petsc/src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: #3 MatSOR() line 3734 in 
/data/stephan/git/petsc/src/mat/interface/matrix.c
[0]PETSC ERROR: #4 PCApply_SOR() line 35 in 
/data/stephan/git/petsc/src/ksp/pc/impls/sor/sor.c
[0]PETSC ERROR: #5 PCApply() line 440 in 
/data/stephan/git/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #6 KSP_PCApply() line 227 in 
/data/stephan/git/petsc/include/petsc-private/kspimpl.h
[0]PETSC ERROR: #7 KSPSolve_Chebyshev() line 456 in 
/data/stephan/git/petsc/src/ksp/ksp/impls/cheby/cheby.c
[0]PETSC ERROR: #8 KSPSolve() line 458 in 
/data/stephan/git/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #9 PCMGMCycle_Private() line 19 in 
/data/stephan/git/petsc/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #10 PCMGMCycle_Private() line 48 in 
/data/stephan/git/petsc/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #11 PCApply_MG() line 330 in 
/data/stephan/git/petsc/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #12 PCApply() line 440 in 
/data/stephan/git/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #13 KSP_PCApply() line 227 in 
/data/stephan/git/petsc/include/petsc-private/kspimpl.h
[0]PETSC ERROR: #14 KSPInitialResidual() line 63 in 
/data/stephan/git/petsc/src/ksp/ksp/interface/itres.c
[0]PETSC ERROR: #15 KSPSolve_GMRES() line 234 in 
/data/stephan/git/petsc/src/ksp/ksp/impls/gmres/gmres.c
[0]PETSC ERROR: #16 KSPSolve() line 458 in 
/data/stephan/git/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #17 solve_elasticity_2d() line 1053 in 
/data/stephan/git/petsc/src/ksp/ksp/examples/tutorials/ex49.c
[0]PETSC ERROR: #18 main() line 1104 in 
/data/stephan/git/petsc/src/ksp/ksp/examples/tutorials/ex49.c
[0]PETSC ERROR: End of Error Message ---send entire error 
message to petsc-ma...@mcs.anl.gov--

This is the same error we were getting on our problem.
Cheers,
Stephan


[petsc-users] Preallocation Memory of Finite Element Method's Sparse Matrices

2014-03-21 Thread 吕超




Yours faithfully:

 My last e-mail had some typos, sorry~

 The program src/ksp/ksp/examples/tutorials/ex3.c.html is about bilinear
elements on the unit square for the Laplacian.

 After preallocation using   

 ierr  = MatMPIAIJSetPreallocation(A,9,NULL,5,NULL);CHKERRQ(ierr); /* More 
than necessary */,


 Running mpiexec -n 2 ./ex3 and mpiexec -n 3 ./ex3 gives Norm of error
2.22327e-06 Iterations 6 and Norm of error 3.12849e-07 Iterations 8,
respectively. Both results are good!

 However, if I use mpiexec -n 4 ./ex3 (or 5, 6, 7, ... processes), the error
[2]PETSC ERROR: New nonzero at (4,29) caused a malloc! appears (this is for 4
processes; the position differs for other process counts). To me this error is
baffling: first, the preallocation is more than necessary, so how can a new
malloc occur? Second, the vertex with global number 4 has no neighbouring
vertex with global number 29! This error has tortured me for a long time.

 This error may seem like a small thing, but my recent 3D finite element code
cannot be run on more processes owing to the same new-nonzero malloc error,
and this is why I want to run ex3.c on 4 or more processes.

 Thank you for all your previous assistance, and I hope you have a good life!

Yours sincerely,

LV CHAO

2014/3/21





Re: [petsc-users] 2 level schur

2014-03-21 Thread Luc Berger-Vergiat

Is there a way to know what the new numbering is?
I am assuming that in my example, since there are only two fields, the
numbers associated with them are 0 and 1, hence I tried:


   -fieldsplit_0_fieldsplit_Field_2_fields 1
   -fieldsplit_0_fieldsplit_Field_3_fields 0

which did not work. As mentioned earlier, the following does not work 
either:


   -fieldsplit_0_fieldsplit_Field_2_fields 3
   -fieldsplit_0_fieldsplit_Field_3_fields 2

and without too much expectation I also passed the following

   -fieldsplit_0_fieldsplit_Field_2_fields Field_3
   -fieldsplit_0_fieldsplit_Field_3_fields Field_2

to no avail.

By the way, I attached the output from -ksp_view in case I am doing
something wrong.


Best,
Luc

On 03/20/2014 09:01 PM, Matthew Knepley wrote:
On Thu, Mar 20, 2014 at 6:20 PM, Luc Berger-Vergiat 
lb2...@columbia.edu wrote:


Hi all,
I am solving a four field problem using two Schur complements.
Here are the arguments that I usually pass to PETSc to do it:

-ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
-pc_fieldsplit_schur_factorization_type full
-pc_fieldsplit_schur_precondition selfp
-pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
-fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type
fieldsplit -fieldsplit_0_pc_fieldsplit_type schur
-fieldsplit_0_pc_fieldsplit_schur_factorization_type full
-fieldsplit_0_pc_fieldsplit_schur_precondition selfp
-fieldsplit_0_fieldsplit_Field_2_fields 2
-fieldsplit_0_fieldsplit_Field_3_fields 3
-fieldsplit_0_fieldsplit_Field_2_ksp_type preonly
-fieldsplit_0_fieldsplit_Field_2_pc_type ilu
-fieldsplit_0_fieldsplit_Field_3_ksp_type preonly
-fieldsplit_0_fieldsplit_Field_3_pc_type jacobi
-fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu
-malloc_log mlog -log_summary time.log

One issue with this is that when I change
-fieldsplit_0_fieldsplit_Field_2_fields 2 to
-fieldsplit_0_fieldsplit_Field_2_fields 3 it is ineffective, as if
PETSc automatically assigns IS 2 to Field 2 even though it is not
what I want.
Is there a way to pass the arguments correctly so that PETSc goes
about switching the IS set of -fieldsplit_0_fieldsplit_Field_2 and
-fieldsplit_0_fieldsplit_Field_3?
This is crucial to me since I am using the selfp option and the
matrix associated to IS 3 is diagonal. By assigning the fields
correctly I can get an exact Schur preconditioner and hence very
fast convergence. Right now my convergence is not optimal because
of this.


I believe the inner Schur field statements should not be using the 
original numbering, but the inner numbering, after they have been 
reordered.


   Matt

Thanks!

Best,
Luc




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener


KSP Object: 1 MPI processes
  type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-08, absolute=1e-16, divergence=1e+16
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: fieldsplit
FieldSplit with Schur preconditioner, factorization FULL
Preconditioner for the Schur complement formed from Sp, an assembled 
approximation to S, which uses A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
  KSP Object:  (fieldsplit_0_)   1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1
left preconditioning
using NONE norm type for convergence test
  PC Object:  (fieldsplit_0_)   1 MPI processes
type: fieldsplit
  FieldSplit with Schur preconditioner, factorization FULL
  Preconditioner for the Schur complement formed from Sp, an assembled 
approximation to S, which uses A00's diagonal's inverse
  Split info:
  Split number 0 Defined by IS
  Split number 1 Defined by IS
  KSP solver for A00 block
KSP Object:(fieldsplit_0_fieldsplit_Field_2_)   
  1 MPI processes
  type: preonly
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1
  left preconditioning
  using NONE norm type for convergence test
PC Object:(fieldsplit_0_fieldsplit_Field_2_)
 1 MPI processes
  type: jacobi
  linear system 

[petsc-users] A modified ex12.c

2014-03-21 Thread Jones,Martin Alexander
Does anyone know if the DMPlex solver can be run on GPU?

Martin


Re: [petsc-users] Building PETSc with Intel mpi

2014-03-21 Thread Barry Smith

  Did you follow the directions here: 
http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf

  Did it make any difference?


On Mar 21, 2014, at 9:45 AM, Qin Lu lu_qin_2...@yahoo.com wrote:

 Hello,
  
 I was trying to build PETSc-3.4.2 with Intel MPI using Intel-2013 compilers 
 in Linux, but got the error below. The configure.log is attached.
  
 ***
 UNABLE to EXECUTE BINARIES for ./configure 
 ---
 Cannot run executables created with FC. If this machine uses a batch system 
 to submit jobs you will need to configure using ./configure with the 
 additional option  --with-batch.
  Otherwise there is problem with the compilers. Can you compile and run code 
 with your C/C++ (and maybe Fortran) compilers?
 See http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf
 ***
  
 Thanks a lot for any suggestions about the problem,
  
 Regards,
 Qin
  
 configure.log



[petsc-users] MatCreateMPIAdj and mat/examples/tutorials/ex11.c

2014-03-21 Thread Torquil Macdonald Sørensen
Hi!

In the documentation of MatCreateMPIAdj it says that the fifth argument
j should be sorted for each row:

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAdj.html

The same page links to an example:

http://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex11.c.html

but in that example the entries in jj, on lines 40 and 42, do not seem
to be sorted for each row.

On rank 0, the column indices are 0, 1 and 2, which are sorted. But the
second row corresponds to the column indices 1, 3, 2, which are not given
in increasing order. The same goes for the indices given in jj for rank
1 on line 42, corresponding to the second row on rank 1.

Doesn't that conflict with the documentation?
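
For clarity, an illustration of the layout the manual page asks for (the values
here are made up, not taken from ex11.c): the column indices of every row
appear in increasing order within the j array.

  /* Illustration only; these are not the arrays from ex11.c. */
  PetscInt ia[] = {0, 3, 6};            /* row pointers for 2 local rows      */
  PetscInt ja[] = {0, 1, 2,  1, 2, 3};  /* row 0: 0,1,2   row 1: 1,2,3 sorted */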

Best regards
Torquil Sørensen



Re: [petsc-users] 2 level schur

2014-03-21 Thread Matthew Knepley
On Fri, Mar 21, 2014 at 9:37 AM, Luc Berger-Vergiat lb2...@columbia.edu wrote:

  Is there a way to know what the new numbering is?
 I am assuming that in my example, since there are only two fields, the
 numbers associated with them are 0 and 1, hence I tried:

 -fieldsplit_0_fieldsplit_Field_2_fields 1
 -fieldsplit_0_fieldsplit_Field_3_fields 0


If it's an inner fieldsplit, the numbering for options starts over again:

  -fieldsplit_0_fieldsplit_Field_0_fields 1
  -fieldsplit_0_fieldsplit_Field_1_fields 0

  Thanks,

 Matt

  which did not work. As mentioned earlier, the following does not work
 either:

 -fieldsplit_0_fieldsplit_Field_2_fields 3
 -fieldsplit_0_fieldsplit_Field_3_fields 2

 and without too much expectation I also passed the following

 -fieldsplit_0_fieldsplit_Field_2_fields Field_3
 -fieldsplit_0_fieldsplit_Field_3_fields Field_2

 to no avail.

 By the way I attached the output from -ksp_view in case I might be doing
 something wrong?

 Best,
 Luc

 On 03/20/2014 09:01 PM, Matthew Knepley wrote:

  On Thu, Mar 20, 2014 at 6:20 PM, Luc Berger-Vergiat 
 lb2...@columbia.edu wrote:

  Hi all,
 I am solving a four field problem using two Schur complements. Here are
 the arguments that I usually pass to PETSc to do it:

 -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
 -pc_fieldsplit_schur_factorization_type full
 -pc_fieldsplit_schur_precondition selfp -pc_fieldsplit_0_fields 2,3
 -pc_fieldsplit_1_fields 0,1 -fieldsplit_0_ksp_type preonly
 -fieldsplit_0_pc_type fieldsplit -fieldsplit_0_pc_fieldsplit_type schur
 -fieldsplit_0_pc_fieldsplit_schur_factorization_type full
 -fieldsplit_0_pc_fieldsplit_schur_precondition selfp
 -fieldsplit_0_fieldsplit_Field_2_fields 2
 -fieldsplit_0_fieldsplit_Field_3_fields 3
 -fieldsplit_0_fieldsplit_Field_2_ksp_type preonly
 -fieldsplit_0_fieldsplit_Field_2_pc_type ilu
 -fieldsplit_0_fieldsplit_Field_3_ksp_type preonly
 -fieldsplit_0_fieldsplit_Field_3_pc_type jacobi -fieldsplit_1_ksp_type
 preonly -fieldsplit_1_pc_type lu -malloc_log mlog -log_summary time.log

 One issue with this is that when I change
 -fieldsplit_0_fieldsplit_Field_2_fields 2 to
 -fieldsplit_0_fieldsplit_Field_2_fields 3 it is ineffective, as if PETSc
 automatically assign IS 2 to Field 2 even though it is not what I want.
 Is there a way to pass the arguments correctly so that PETSc goes about
 switching the IS set of -fieldsplit_0_fieldsplit_Field_2 and
 -fieldsplit_0_fieldsplit_Field_3?
 This is crucial to me since I am using the selfp option and the matrix
 associated to IS 3 is diagonal. By assigning the fields correctly I can get
 an exact Schur preconditioner and hence very fast convergence. Right now my
 convergence is not optimal because of this.


  I believe the inner Schur field statements should not be using the
 original numbering, but the inner numbering, after they have been reordered.

 Matt


 Thanks!

 Best,
 Luc




  --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] MatCreateMPIAdj and mat/examples/tutorials/ex11.c

2014-03-21 Thread Barry Smith

   Thanks. Now fixed in master.

  Barry

On Mar 21, 2014, at 10:51 AM, Torquil Macdonald Sørensen torq...@gmail.com 
wrote:

 Hi!
 
 In the documentation of MatCreateMPIAdj it says that the fifth argument
 j should be sorted for each row:
 
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAdj.html
 
 The same page links to an example:
 
 http://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex11.c.html
 
 but in that example the entries in jj, on lines 40 and 42, do not seem
 to be sorted for each row.
 
 On rank 0, the column indices are 0, 1 and 2, which are sorted. But the
 second row correspond to the column indices 1, 3, 2, which are not given
 in increasing order. The same goes for the indices given in jj for rank
 1 on line 42, corresponding to the second row on rank 1.
 
 Doesn't that conflict with the documentation?
 
 Best regards
 Torquil Sørensen
 



Re: [petsc-users] Preallocation Memory of Finite Element Method's Sparse Matrices

2014-03-21 Thread Barry Smith

  Thank you for reporting this. It was our error. In fact 4 is not enough under
certain circumstances: consider the case where a process has only a single
degree of freedom (vertex); that vertex is then coupled to 8 other vertices,
ALL on other processes. Thus we really need to use 8 instead of 4 as the
maximum number of off-process couplings.

  I have fixed this in master so it now runs on any number of processes.
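
  In ex3.c terms, a preallocation consistent with that worst case would look
like the sketch below (the exact change in master may differ):

  /* Sketch only: up to 9 nonzeros per row in the diagonal block, up to 8 in
     the off-diagonal block, covering the single-vertex-per-process case
     described above. */
  ierr = MatMPIAIJSetPreallocation(A,9,NULL,8,NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A,9,NULL);CHKERRQ(ierr);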

   Barry

On Mar 21, 2014, at 9:11 AM, 吕超 luc...@mail.iggcas.ac.cn wrote:

 
 
 Your faithfully:
 
  Last e-mail has some literal error, sorry~
 
  program src/ksp/ksp/examples/tutorials/ex3.c.html is about Bilinear 
 elements on the unit square for Laplacian.
 
  After preallocation using   
 
  ierr  = MatMPIAIJSetPreallocation(A,9,NULL,5,NULL);CHKERRQ(ierr); /* 
 More than necessary */,
 
  Results of commands of mpiexec -n 2 ./ex3 and mpiexec -n 3 ./ex3 are 
 Norm of error 2.22327e-06 Iterations 6 and Norm of error 3.12849e-07 
 Iterations 8. Both results are good!
 
  However, if I use mpiexec -n 4 ./ex3 or 5,6,7... processes, error 
 [2]PETSC ERROR: New nonzero at (4,29) (here is for process 4, other 
 positions for different processes) caused a malloc! appear!. For me, this 
 error is unbelievable, because first, the preallocation is more than 
 necessary,how can the new malloc appear? Second, the global number 4 point 
 originally has no neighbor vertices whose global number is 29! This error has 
 tortured me for a long time.
 
  This error seems meaningless, however, my recent 3d finite element 
 method cannot be calculated by more processes owing to the new nonzero malloc 
 error! And this is why I want to use 4 or much more processes to compute 
 ex3.c.
 
  Thank you for all previous assistance and hope you have a good life!
 
 Yours sincerely,
 
 LV CHAO
 
 2014/3/21
 
 
 
 



Re: [petsc-users] 2 level schur

2014-03-21 Thread Luc Berger-Vergiat

I hear you, though that is not what PETSc does.
When I name the fields as you suggest:

   -fieldsplit_0_fieldsplit_Field_0_fields 1
   -fieldsplit_0_fieldsplit_Field_1_fields 0

PETSc ignores it and still calls the fields

   -fieldsplit_0_fieldsplit_Field_2
   -fieldsplit_0_fieldsplit_Field_3

But the automatic naming scheme is not really the issue. It would just 
be nice to be able to switch the two fields.
I will try to change the order in which I pass the ISes to the DM and see
if I can get around the problem that way.
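
(If the splits were registered directly on the inner fieldsplit PC rather than
through a DM, a sketch of that swap might look like the following; this is an
assumption about the setup, not Luc's actual code:)

   /* is2 and is3 are the index sets currently named Field_2 and Field_3.
      Registering them in the opposite order makes is3 the first (0th) inner
      split and is2 the second. */
   ierr = PCFieldSplitSetIS(inner_pc, "Field_3", is3);CHKERRQ(ierr);
   ierr = PCFieldSplitSetIS(inner_pc, "Field_2", is2);CHKERRQ(ierr);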


Best,
Luc

On 03/21/2014 12:13 PM, Matthew Knepley wrote:
On Fri, Mar 21, 2014 at 9:37 AM, Luc Berger-Vergiat 
lb2...@columbia.edu wrote:


Is there a way to know what the new numbering is?
I am assuming that in my example, since there are only two fields,
the numbers associated with them are 0 and 1, hence I tried:

-fieldsplit_0_fieldsplit_Field_2_fields 1
-fieldsplit_0_fieldsplit_Field_3_fields 0


If it's an inner fieldsplit, the numbering for options starts over again:

  -fieldsplit_0_fieldsplit_Field_0_fields 1
  -fieldsplit_0_fieldsplit_Field_1_fields 0

  Thanks,

 Matt

which did not work. As mentioned earlier, the following does not
work either:

-fieldsplit_0_fieldsplit_Field_2_fields 3
-fieldsplit_0_fieldsplit_Field_3_fields 2

and without too much expectation I also passed the following

-fieldsplit_0_fieldsplit_Field_2_fields Field_3
-fieldsplit_0_fieldsplit_Field_3_fields Field_2

to no avail.

By the way I attached the output from -ksp_view in case I might be
doing something wrong?

Best,
Luc

On 03/20/2014 09:01 PM, Matthew Knepley wrote:

On Thu, Mar 20, 2014 at 6:20 PM, Luc Berger-Vergiat
lb2...@columbia.edu wrote:

Hi all,
I am solving a four field problem using two Schur
complements. Here are the arguments that I usually pass to
PETSc to do it:

-ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type
schur -pc_fieldsplit_schur_factorization_type full
-pc_fieldsplit_schur_precondition selfp
-pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
-fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type
fieldsplit -fieldsplit_0_pc_fieldsplit_type schur
-fieldsplit_0_pc_fieldsplit_schur_factorization_type full
-fieldsplit_0_pc_fieldsplit_schur_precondition selfp
-fieldsplit_0_fieldsplit_Field_2_fields 2
-fieldsplit_0_fieldsplit_Field_3_fields 3
-fieldsplit_0_fieldsplit_Field_2_ksp_type preonly
-fieldsplit_0_fieldsplit_Field_2_pc_type ilu
-fieldsplit_0_fieldsplit_Field_3_ksp_type preonly
-fieldsplit_0_fieldsplit_Field_3_pc_type jacobi
-fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu
-malloc_log mlog -log_summary time.log

One issue with this is that when I change
-fieldsplit_0_fieldsplit_Field_2_fields 2 to
-fieldsplit_0_fieldsplit_Field_2_fields 3 it is ineffective,
as if PETSc automatically assign IS 2 to Field 2 even though
it is not what I want.
Is there a way to pass the arguments correctly so that PETSc
goes about switching the IS set of
-fieldsplit_0_fieldsplit_Field_2 and
-fieldsplit_0_fieldsplit_Field_3?
This is crucial to me since I am using the selfp option and
the matrix associated to IS 3 is diagonal. By assigning the
fields correctly I can get an exact Schur preconditioner and
hence very fast convergence. Right now my convergence is not
optimal because of this.


I believe the inner Schur field statements should not be using
the original numbering, but the inner numbering, after they have
been reordered.

   Matt

Thanks!

Best,
Luc




-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener




Re: [petsc-users] Preallocation Memory of Finite Element Method's Sparse Matrices

2014-03-21 Thread Jed Brown
Barry Smith bsm...@mcs.anl.gov writes:

   Thank you for reporting this. It was our error. In fact 4 is not
   enough under certain circumstances: consider the case where a process
   has only a single degree of freedom (vertex); that vertex is then
   coupled to 8 other vertices, ALL on other processes. Thus we really
   need to use 8 instead of 4 as the maximum number of off-process
   couplings.

Note that _your_ code should generally not have this problem because you
should use a non-pathological partition.




[petsc-users] The same code randomly produces different residual norms

2014-03-21 Thread Fande Kong
Hi,

I ran the code src/snes/examples/tutorials/ex5.c with the following options:

mpirun -n 8 ./ex5 -pc_type mg \
  -ksp_monitor \
  -pc_mg_levels 3 \
  -pc_mg_galerkin \
  -da_grid_x 17 \
  -da_grid_y 17 \
  -mg_levels_ksp_norm_type unpreconditioned \
  -snes_monitor \
  -mg_levels_ksp_chebyshev_estimate_eigenvalues 0.5,1.1 \
  -mg_levels_pc_type sor \
  -pc_mg_type


I ran this script several times, but each time the residual norms had
some tiny differences (these should not happen). For example:

Case 1:

  0 SNES Function norm 1.188788066192e+00
    0 KSP Residual norm 1.573384253521e+00
    1 KSP Residual norm 3.616321708396e-02
    2 KSP Residual norm 2.780221563755e-04
    3 KSP Residual norm 2.194662354037e-06
  1 SNES Function norm 5.125240595190e-03
    0 KSP Residual norm 1.217284338266e-01
    1 KSP Residual norm 1.774247017346e-04
    2 KSP Residual norm 2.557118591292e-06
    3 KSP Residual norm 1.622173269367e-08
  2 SNES Function norm 3.922335995111e-05
    0 KSP Residual norm 9.960745924876e-04
    1 KSP Residual norm 1.273916336665e-06
    2 KSP Residual norm 1.571259383270e-08
    3 KSP Residual norm 1.250266145356e-10
  3 SNES Function norm 2.662898279023e-09


Case 2:

  0 SNES Function norm 1.188788066192e+00
    0 KSP Residual norm 1.573384253521e+00
    1 KSP Residual norm 3.616321708396e-02
    2 KSP Residual norm 2.780221563755e-04
    3 KSP Residual norm 2.194662354037e-06
  1 SNES Function norm 5.125240595190e-03
    0 KSP Residual norm 1.217284338266e-01
    1 KSP Residual norm 1.774247017347e-04
    2 KSP Residual norm 2.557118591292e-06
    3 KSP Residual norm 1.622173269367e-08
  2 SNES Function norm 3.922335995108e-05
    0 KSP Residual norm 9.960745924862e-04
    1 KSP Residual norm 1.273916336654e-06
    2 KSP Residual norm 1.571259383257e-08
    3 KSP Residual norm 1.250266145346e-10
  3 SNES Function norm 2.662898285038e-09

These differences (in the last few digits of some of the later norms) were
marked in red. I was wondering whether there is any explanation for why this
happens?

Thanks,

Fande,


Re: [petsc-users] The same code randomly produces different residual norms

2014-03-21 Thread Jed Brown
Fande Kong fd.k...@siat.ac.cn writes:
  mpirun -n 8 ./ex5 -pc_type mg \
    -ksp_monitor -pc_mg_levels 3 -pc_mg_galerkin \
    -da_grid_x 17 -da_grid_y 17 \
    -mg_levels_ksp_norm_type unpreconditioned -snes_monitor \
    -mg_levels_ksp_chebyshev_estimate_eigenvalues 0.5,1.1 \
    -mg_levels_pc_type sor -pc_mg_type


 I ran this script several times, but each time the residual norms had
 some tiny differences (these should not happen). For example:

Some messages are unpacked in the order in which they are received,
rather than doing a moderately expensive sorting.  There are some
options to make VecScatter and other components reproducible, but this
configuration might use a feature for which such an option has not been
implemented (-vecscatter_reproduce helps a little in my test, but is not
sufficient).

The difference you're seeing is harmless, but if you have an application
for which this is required (either to reproduce rare events for
re-analysis of an unstable dynamical system or due to programmatic
requirements that misunderstand error analysis of finite-precision
arithmetic), we can hunt down any such places and provide an option to
make it reproducible.




Re: [petsc-users] Building PETSc with Intel mpi

2014-03-21 Thread Satish Balay
 --with-mpi-dir=/apps/compilers/intel_2013/impi/4.1.0.024/intel64 
 --with-mpi-compilers=0

Does this MPI not come with mpicc/mpif90 wrappers? If it does, it's best to use
them.

If not, it's best to look at the docs for this compiler and specify it with the
appropriate

--with-mpi-include and --with-mpi-lib options [instead of the above].

Satish

On Fri, 21 Mar 2014, Qin Lu wrote:

 Sourcing the .csh files of the compiler fixed the problem. Thanks! However, 
 later it got another error (see the attached configure.log for details):
  
 ***
  UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for 
 details):
 ---
 Fortran error! mpi_init() could not be located!
 ***
 
 It seems the configure did not link the Intel MPI libs. I used --with-mpi-dir 
 to specify the MPI directory, can configure get the correct Intel MPI lib 
 names? If I have to specify the lib names (using --with-mpi-lib?), which libs 
 should I specify? I saw a lot of libs under the directory, such as libmpi.a, 
 libmpi_ipl64.a, libmpi_mt.a, etc. 
  
 Thanks a lot,
 Qin
  
 
 
  From: Barry Smith bsm...@mcs.anl.gov
 To: Qin Lu lu_qin_2...@yahoo.com 
 Cc: petsc-users petsc-users@mcs.anl.gov 
 Sent: Friday, March 21, 2014 10:11 AM
 Subject: Re: [petsc-users] Building PETSc with Intel mpi
   
 
 
   Did you follow the directions here: 
 http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf
 
   Did it make any difference?
 
 
 On Mar 21, 2014, at 9:45 AM, Qin Lu lu_qin_2...@yahoo.com wrote:
 
  Hello,
   
  I was trying to build PETSc-3.4.2 with Intel MPI using Intel-2013 compilers 
  in Linux, but got the error below. The configure.log is attached.
   
  ***
                      UNABLE to EXECUTE BINARIES for ./configure 
  ---
  Cannot run executables created with FC. If this machine uses a batch system 
  to submit jobs you will need to configure using ./configure with the 
  additional option  --with-batch.
   Otherwise there is problem with the compilers. Can you compile and run 
 code with your C/C++ (and maybe Fortran) compilers?
  See http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf
  ***
   
  Thanks a lot for any suggestions about the problem,
   
  Regards,
  Qin
 
   
  configure.log