http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

especially the GAMG version, which you can also try in the debugger with the 
argument -start_in_debugger.
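
For example, the parallel valgrind invocation from that FAQ looks roughly like 
the following (the process count, executable path, and options are placeholders 
for your own setup; -malloc off is there so PETSc's own allocator does not 
confuse valgrind):

   mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20 \
       --log-file=valgrind.log.%p ./AdLemMain -malloc off <your usual options>

and to get a debugger attached at the point of failure instead:

   mpiexec -n 8 ./AdLemMain <your usual options> -start_in_debugger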



On Oct 16, 2013, at 9:32 AM, Dave May <[email protected]> wrote:

> Sounds like a memory error.
> I'd run your code through valgrind to double check. The error could be 
> completely unconnected to the nullspaces.
> 
> Cheers,
>   Dave 
> 
> 
> On 16 October 2013 16:18, Bishesh Khanal <[email protected]> wrote:
> Dear all,
> I'm trying to solve a Stokes flow with constant viscosity but with a non-zero 
> divergence prescribed in the RHS.
> 
> I have a matrix created from a DMDA (mDa) with 4 dofs: vx, vy, vz and p 
> respectively.
> I have another DMDA (mDaP) of the same size but with 1 dof, corresponding to only p.
> I have assigned the null space for constant pressure inside the code, using 
> two null-space bases: one corresponding to a vector created from mDa, which is 
> attached to the outer ksp, and a second corresponding to a vector created from 
> mDaP, which is attached to the ksp obtained from the fieldsplit corresponding 
> to the Schur complement.
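> 
> (For concreteness, a minimal sketch of how the two null spaces might be 
> attached with the PETSc 3.4 API; ksp, pc, ierr, vecOuter and vecP are 
> placeholder names, not the exact code in PetscAdLemTaras3D.cxx:)
> 
>     MatNullSpace nspOuter, nspSchur;
>     KSP          *subKsp;
>     PetscInt     nSplits;
>     /* vecOuter: vector built on mDa, constant in the p dof and zero in vx,vy,vz, normalized */
>     ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &vecOuter, &nspOuter);CHKERRQ(ierr);
>     ierr = KSPSetNullSpace(ksp, nspOuter);CHKERRQ(ierr);               /* outer KSP */
>     /* vecP: constant vector built on mDaP, normalized, for the Schur-complement solve */
>     ierr = PCFieldSplitGetSubKSP(pc, &nSplits, &subKsp);CHKERRQ(ierr); /* valid once the PC is set up */
>     ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &vecP, &nspSchur);CHKERRQ(ierr);
>     ierr = KSPSetNullSpace(subKsp[1], nspSchur);CHKERRQ(ierr);         /* fieldsplit_1 (Schur) KSP */
>     ierr = PetscFree(subKsp);CHKERRQ(ierr);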
> 
> Now, when running the code, the solver converges up to a certain size, e.g. 
> 92 X 110 X 92 (the results for this convergent case with -ksp_view are given 
> at the end of the email).
> But when I double the size of the grid in each dimension, it gives me a 
> run-time error.
> 
> The options I've used are of the following kind:
> -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_dm_splits 0 
> -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type 
> hypre -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason 
> -ksp_converged_reason -ksp_view
> 
> Below are:
> 1. The error message when using hypre for fieldsplit_0
> 2. The error message when using gamg for fieldsplit_0
> 3. The -ksp_view output of the working case using hypre for fieldsplit_0
> 
> I get the following error when I use hypre:
> 1. 
> ******************************************************************************************************
> [5]PETSC ERROR: --------------------- Error Message 
> ------------------------------------
> [5]PETSC ERROR: Signal received!
> [5]PETSC ERROR: 
> ------------------------------------------------------------------------
> [5]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 
> [5]PETSC ERROR: See docs/changes/index.html for recent updates.
> [5]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [5]PETSC ERROR: See docs/index.html for manual pages.
> [5]PETSC ERROR: 
> ------------------------------------------------------------------------
> [5]PETSC ERROR: /epi/asclepios2/bkhanal/works/AdLemModel/build/src/AdLemMain 
> on a arch-linux2-cxx-debug named nef001 by bkhanal Wed Oct 16 15:08:42 2013
> [5]PETSC ERROR: Libraries linked from /epi/asclepios2/bkhanal/petscDebug/lib
> [5]PETSC ERROR: Configure run at Wed Oct 16 14:18:48 2013
> [5]PETSC ERROR: Configure options --with-mpi-dir=/opt/openmpi-gcc/current/ 
> --with-shared-libraries --prefix=/epi/asclepios2/bkhanal/petscDebug 
> -download-f-blas-lapack=1 --download-metis --download-parmetis 
> --download-superlu_dist --download-scalapack --download-mumps 
> --download-hypre --with-clanguage=cxx
> [5]PETSC ERROR: 
> ------------------------------------------------------------------------
> [5]PETSC ERROR: User provided function() line 0 in unknown directory unknown 
> file
> [6]PETSC ERROR: 
> ------------------------------------------------------------------------
> [6]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the 
> batch system) has told this process to end
> [6]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [6]PETSC ERROR: or see 
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [6]PETSC ERROR: 
> or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory 
> corruption errors
> [6]PETSC ERROR: likely location of problem given in stack below
> [6]PETSC ERROR: ---------------------  Stack Frames 
> ------------------------------------
> [6]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [6]PETSC ERROR:       INSTEAD the line number of the start of the function
> [6]PETSC ERROR:       is given.
> [6]PETSC ERROR: [6] HYPRE_SetupXXX line 130 
> /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c
> [6]PETSC ERROR: [6] PCSetUp_HYPRE line 94 
> /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c
> [6]PETSC ERROR: [6] PCSetUp line 868 
> /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c
> [6]PETSC ERROR: [6] KSPSetUp line 192 
> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
> [6]PETSC ERROR: [6] KSPSolve line 356 
> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
> [6]PETSC ERROR: [6] MatMult_SchurComplement line 75 
> /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c
> [6]PETSC ERROR: [6] MatNullSpaceTest line 408 
> /tmp/petsc-3.4.3/src/mat/interface/matnull.c
> [6]PETSC ERROR: [6] solveModel line 113 
> "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx
> 
> 
> 2. 
> ****************************************************************************************************
> Using gamg instead gives errors like the following:
> 
> [5]PETSC ERROR: ---------------------  Stack Frames 
> ------------------------------------
> [5]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [5]PETSC ERROR:       INSTEAD the line number of the start of the function
> [5]PETSC ERROR:       is given.
> [5]PETSC ERROR: [5] PetscLLCondensedAddSorted line 1202 
> /tmp/petsc-3.4.3/include/petsc-private/matimpl.h
> [5]PETSC ERROR: [5] MatPtAPSymbolic_MPIAIJ_MPIAIJ line 124 
> /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c
> [5]PETSC ERROR: [5] MatPtAP_MPIAIJ_MPIAIJ line 80 
> /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c
> [5]PETSC ERROR: [5] MatPtAP line 8223 
> /tmp/petsc-3.4.3/src/mat/interface/matrix.c
> [5]PETSC ERROR: [5] createLevel line 144 
> /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c
> [5]PETSC ERROR: [5] PCSetUp_GAMG line 545 
> /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c
> [5]PETSC ERROR: [5] PCSetUp line 868 
> /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c
> [5]PETSC ERROR: [5] KSPSetUp line 192 
> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
> [5]PETSC ERROR: [5] KSPSolve line 356 
> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
> [5]PETSC ERROR: [5] MatMult_SchurComplement line 75 
> /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c
> [5]PETSC ERROR: [5] MatNullSpaceTest line 408 
> /tmp/petsc-3.4.3/src/mat/interface/matnull.c
> [5]PETSC ERROR: [5] solveModel line 113 
> "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx
> 
> 
> 3. 
> ********************************************************************************************************
> 
> BUT, it does give me results when I use a domain of size 91 X 109 X 91 
> (half-sized in each dimension). The result along with the -ksp_view output in 
> this case is as follows:
> 
> Linear solve converged due to CONVERGED_RTOL iterations 2
> KSP Object: 64 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
> Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>   left preconditioning
>   has attached null space
>   using PRECONDITIONED norm type for convergence test
> PC Object: 64 MPI processes
>   type: fieldsplit
>     FieldSplit with Schur preconditioner, blocksize = 4, factorization FULL
>     Preconditioner for the Schur complement formed from user provided matrix
>     Split info:
>     Split number 0 Fields  0, 1, 2
>     Split number 1 Fields  3
>     KSP solver for A00 block 
>       KSP Object:      (fieldsplit_0_)       64 MPI processes
>         type: gmres
>           GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
> Orthogonalization with no iterative refinement
>           GMRES: happy breakdown tolerance 1e-30
>         maximum iterations=10000, initial guess is zero
>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>         left preconditioning
>         using PRECONDITIONED norm type for convergence test
>       PC Object:      (fieldsplit_0_)       64 MPI processes
>         type: hypre
>           HYPRE BoomerAMG preconditioning
>           HYPRE BoomerAMG: Cycle type V
>           HYPRE BoomerAMG: Maximum number of levels 25
>           HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
>           HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
>           HYPRE BoomerAMG: Threshold for strong coupling 0.25
>           HYPRE BoomerAMG: Interpolation truncation factor 0
>           HYPRE BoomerAMG: Interpolation: max elements per row 0
>           HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
>           HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
>           HYPRE BoomerAMG: Maximum row sums 0.9
>           HYPRE BoomerAMG: Sweeps down         1
>           HYPRE BoomerAMG: Sweeps up           1
>           HYPRE BoomerAMG: Sweeps on coarse    1
>           HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
>           HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
>           HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
>           HYPRE BoomerAMG: Relax weight  (all)      1
>           HYPRE BoomerAMG: Outer relax weight (all) 1
>           HYPRE BoomerAMG: Using CF-relaxation
>           HYPRE BoomerAMG: Measure type        local
>           HYPRE BoomerAMG: Coarsen type        Falgout
>           HYPRE BoomerAMG: Interpolation type  classical
>         linear system matrix = precond matrix:
>         Matrix Object:         64 MPI processes
>           type: mpiaij
>           rows=2793120, cols=2793120
>           total: nonzeros=221624352, allocated nonzeros=221624352
>           total number of mallocs used during MatSetValues calls =0
>             using I-node (on process 0) routines: found 14812 nodes, limit 
> used is 5
>     KSP solver for S = A11 - A10 inv(A00) A01 
>       KSP Object:      (fieldsplit_1_)       64 MPI processes
>         type: gmres
>           GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
> Orthogonalization with no iterative refinement
>           GMRES: happy breakdown tolerance 1e-30
>         maximum iterations=10000, initial guess is zero
>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>         left preconditioning
>         has attached null space
>         using PRECONDITIONED norm type for convergence test
>       PC Object:      (fieldsplit_1_)       64 MPI processes
>         type: bjacobi
>           block Jacobi: number of blocks = 64
>           Local solve is same for all blocks, in the following KSP and PC 
> objects:
>         KSP Object:        (fieldsplit_1_sub_)         1 MPI processes
>           type: preonly
>           maximum iterations=10000, initial guess is zero
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>           left preconditioning
>           using NONE norm type for convergence test
>         PC Object:        (fieldsplit_1_sub_)         1 MPI processes
>           type: ilu
>             ILU: out-of-place factorization
>             0 levels of fill
>             tolerance for zero pivot 2.22045e-14
>             using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>             matrix ordering: natural
>             factor fill ratio given 1, needed 1
>               Factored matrix follows:
>                 Matrix Object:                 1 MPI processes
>                   type: seqaij
>                   rows=14812, cols=14812
>                   package used to perform factorization: petsc
>                   total: nonzeros=368098, allocated nonzeros=368098
>                   total number of mallocs used during MatSetValues calls =0
>                     not using I-node routines
>           linear system matrix = precond matrix:
>           Matrix Object:           1 MPI processes
>             type: seqaij
>             rows=14812, cols=14812
>             total: nonzeros=368098, allocated nonzeros=368098
>             total number of mallocs used during MatSetValues calls =0
>               not using I-node routines
> 
>         linear system matrix followed by preconditioner matrix:
>         Matrix Object:         64 MPI processes
>           type: schurcomplement
>           rows=931040, cols=931040
>             Schur complement A11 - A10 inv(A00) A01
>             A11
>               Matrix Object:               64 MPI processes
>                 type: mpiaij
>                 rows=931040, cols=931040
>                 total: nonzeros=24624928, allocated nonzeros=24624928
>                 total number of mallocs used during MatSetValues calls =0
>                   not using I-node (on process 0) routines
>             A10
>               Matrix Object:               64 MPI processes
>                 type: mpiaij
>                 rows=931040, cols=2793120
>                 total: nonzeros=73874784, allocated nonzeros=73874784
>                 total number of mallocs used during MatSetValues calls =0
>                   not using I-node (on process 0) routines
>             KSP of A00
>               KSP Object:              (fieldsplit_0_)               64 MPI 
> processes
>                 type: gmres
>                   GMRES: restart=30, using Classical (unmodified) 
> Gram-Schmidt Orthogonalization with no iterative refinement
>                   GMRES: happy breakdown tolerance 1e-30
>                 maximum iterations=10000, initial guess is zero
>                 tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>                 left preconditioning
>                 using PRECONDITIONED norm type for convergence test
>               PC Object:              (fieldsplit_0_)               64 MPI 
> processes
>                 type: hypre
>                   HYPRE BoomerAMG preconditioning
>                   HYPRE BoomerAMG: Cycle type V
>                   HYPRE BoomerAMG: Maximum number of levels 25
>                   HYPRE BoomerAMG: Maximum number of iterations PER hypre 
> call 1
>                   HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
>                   HYPRE BoomerAMG: Threshold for strong coupling 0.25
>                   HYPRE BoomerAMG: Interpolation truncation factor 0
>                   HYPRE BoomerAMG: Interpolation: max elements per row 0
>                   HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
>                   HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
>                   HYPRE BoomerAMG: Maximum row sums 0.9
>                   HYPRE BoomerAMG: Sweeps down         1
>                   HYPRE BoomerAMG: Sweeps up           1
>                   HYPRE BoomerAMG: Sweeps on coarse    1
>                   HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
>                   HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
>                   HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
>                   HYPRE BoomerAMG: Relax weight  (all)      1
>                   HYPRE BoomerAMG: Outer relax weight (all) 1
>                   HYPRE BoomerAMG: Using CF-relaxation
>                   HYPRE BoomerAMG: Measure type        local
>                   HYPRE BoomerAMG: Coarsen type        Falgout
>                   HYPRE BoomerAMG: Interpolation type  classical
>                 linear system matrix = precond matrix:
>                 Matrix Object:                 64 MPI processes
>                   type: mpiaij
>                   rows=2793120, cols=2793120
>                   total: nonzeros=221624352, allocated nonzeros=221624352
>                   total number of mallocs used during MatSetValues calls =0
>                     using I-node (on process 0) routines: found 14812 nodes, 
> limit used is 5
>             A01
>               Matrix Object:               64 MPI processes
>                 type: mpiaij
>                 rows=2793120, cols=931040
>                 total: nonzeros=73874784, allocated nonzeros=73874784
>                 total number of mallocs used during MatSetValues calls =0
>                   using I-node (on process 0) routines: found 14812 nodes, 
> limit used is 5
>         Matrix Object:         64 MPI processes
>           type: mpiaij
>           rows=931040, cols=931040
>           total: nonzeros=24624928, allocated nonzeros=24624928
>           total number of mallocs used during MatSetValues calls =0
>             not using I-node (on process 0) routines
>   linear system matrix = precond matrix:
>   Matrix Object:   64 MPI processes
>     type: mpiaij
>     rows=3724160, cols=3724160, bs=4
>     total: nonzeros=393998848, allocated nonzeros=393998848
>     total number of mallocs used during MatSetValues calls =0
> 
> ******************************************************************************************************
> What could be going wrong here? Is it something related to the null-space 
> setting? I do not know why it does not arise for smaller domain sizes!
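> 
> (From the stack traces above, the failure seems to happen while the null space 
> is being tested against the Schur complement in solveModel, i.e. during 
> something roughly equivalent to the following, where nspSchur and schurMat are 
> placeholder names; applying schurMat to the basis vector triggers the inner 
> KSPSolve with A00 and hence the hypre/gamg PCSetUp:)
> 
>     PetscBool isNull;
>     ierr = MatNullSpaceTest(nspSchur, schurMat, &isNull);CHKERRQ(ierr);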
> 
