You need to use AIJ (not BAIJ) for GAMG.

ML might also work, but you need to configure PETSc with ML; it is an external package.
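
A rough sketch of the matrix setup, assuming the variable names from your code below and the petsc-3.3 "finclude" layout (MatSetBlockSize is optional, but it lets GAMG know the operator has bs=3 for 3D elasticity):

#include "finclude/petsc.h"
      Mat            Paa
      PetscErrorCode Pierr
      PetscInt       nloc

      nloc = 30441                        ! local rows, as in your log
      call MatCreate(PETSC_COMM_WORLD,Paa,Pierr)
      call MatSetSizes(Paa,nloc,nloc,PETSC_DETERMINE,
     &                 PETSC_DETERMINE,Pierr)
      call MatSetType(Paa,MATAIJ,Pierr)   ! AIJ, not BAIJ, for GAMG
      call MatSetBlockSize(Paa,3,Pierr)   ! optional: keep bs=3
      call MatSetUp(Paa,Pierr)
!     ... assemble the elasticity operator as before ...

If you configure with --download-ml you can then also try -pc_type ml at run time; no code changes are needed beyond building the matrix as AIJ.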

On Apr 15, 2013, at 5:40 PM, Hugo Gagnon <opensource.petsc at user.fastmail.fm> wrote:

> Good point, this is indeed linear elasticity.  Following your suggestion I first got the following error, since I now use MATBAIJ:
> 
> [0]PCSetData_AGG bs=3 MM=30441
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Arguments are incompatible!
> [0]PETSC ERROR: MatMatMult requires A, mpibaij, to be compatible with B, mpiaij!
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 2013
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: /Users/hugo/Documents/jetstream/jetstream_x86_64 on a arch-darw named user204-27.wireless.utoronto.ca by hugo Mon Apr 15 17:31:55 2013
> [0]PETSC ERROR: Libraries linked from /Users/hugo/Documents/petsc-3.3-p6/arch-darwin-c-opt/lib
> [0]PETSC ERROR: Configure run at Mon Feb 18 15:08:07 2013
> [0]PETSC ERROR: Configure options --with-debugging=0
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: MatMatMult() line 8617 in src/mat/interface/matrix.c
> [0]PETSC ERROR: PCGAMGOptprol_AGG() line 1358 in src/ksp/pc/impls/gamg/agg.c
> [0]PETSC ERROR: PCSetUp_GAMG() line 673 in src/ksp/pc/impls/gamg/gamg.c
> [0]PETSC ERROR: PCSetUp() line 832 in src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: PCApply() line 380 in src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSolve_CG() line 139 in src/ksp/ksp/impls/cg/cg.c
> [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c
> 
> I tried converting the matrix to MATAIJ with:
> 
> call MatConvert(Paa,MATAIJ,MAT_INITIAL_MATRIX,Paa2,Pierr)
> 
> and now I have this error (with ksp_view):
> 
> [0]PCSetData_AGG bs=1 MM=30441
>    KSPCG resid. tolerance target  =   1.000E-10
>    KSPCG initial residual |res0|  =   8.981E-02
>    KSPCG iter =    0: |res|/|res0| =  1.000E+00
>    KSPCG iter =    1: |res|/|res0| =  4.949E-01
> KSP Object: 2 MPI processes
>   type: cg
>   maximum iterations=4000
>   tolerances:  relative=1e-10, absolute=1e-50, divergence=10000
>   left preconditioning
>   using nonzero initial guess
>   using PRECONDITIONED norm type for convergence test
> PC Object: 2 MPI processes
>   type: gamg
>     MG: type is MULTIPLICATIVE, levels=2 cycles=v
>       Cycles per PCApply=1
>       Using Galerkin computed coarse grid matrices
>   Coarse grid solver -- level -------------------------------
>     KSP Object:    (mg_coarse_)     2 MPI processes
>       type: gmres
>         GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>         GMRES: happy breakdown tolerance 1e-30
>       maximum iterations=1, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using NONE norm type for convergence test
>     PC Object:    (mg_coarse_)     2 MPI processes
>       type: bjacobi
>         block Jacobi: number of blocks = 2
>         Local solve info for each block is in the following KSP and PC objects:
>       [0] number of local blocks = 1, first local block number = 0
>         [0] local block number 0
>         KSP Object:        (mg_coarse_sub_)         1 MPI processes
>           type: preonly
>           maximum iterations=10000, initial guess is zero
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>           left preconditioning
>           using NONE norm type for convergence test
>         PC Object:        (mg_coarse_sub_)         1 MPI processes
>           type: lu
>             LU: out-of-place factorization
>             tolerance for zero pivot 2.22045e-14
>             matrix ordering: nd
>             factor fill ratio given 5, needed 5.06305
>               Factored matrix follows:
>                 Matrix Object:                 1 MPI processes
>                   type: seqaij
>                   rows=552, cols=552
>                   package used to perform factorization: petsc
>                   total: nonzeros=106962, allocated nonzeros=106962
>                   total number of mallocs used during MatSetValues calls =0
>                     not using I-node routines
>           linear system matrix = precond matrix:
>           Matrix Object:           1 MPI processes
>             type: seqaij
>             rows=552, cols=552
>             total: nonzeros=21126, allocated nonzeros=21126
>             total number of mallocs used during MatSetValues calls =0
>               not using I-node routines
>         - - - - - - - - - - - - - - - - - -
>       [1] number of local blocks = 1, first local block number = 1
>         [1] local block number 0
>         KSP Object:        (mg_coarse_sub_)         1 MPI processes
>           type: preonly
>           maximum iterations=10000, initial guess is zero
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>           left preconditioning
>           using NONE norm type for convergence test
>         PC Object:        (mg_coarse_sub_)         1 MPI processes
>           type: lu
>             LU: out-of-place factorization
>             tolerance for zero pivot 2.22045e-14
>             matrix ordering: nd
>             factor fill ratio given 5, needed 0
>               Factored matrix follows:
>                 Matrix Object:                 1 MPI processes
>                   type: seqaij
>                   rows=0, cols=0
>                   package used to perform factorization: petsc
>                   total: nonzeros=1, allocated nonzeros=1
>                   total number of mallocs used during MatSetValues calls =0
>                     not using I-node routines
>           linear system matrix = precond matrix:
>           Matrix Object:           1 MPI processes
>             type: seqaij
>             rows=0, cols=0
>             total: nonzeros=0, allocated nonzeros=0
>             total number of mallocs used during MatSetValues calls =0
>               not using I-node routines
>         - - - - - - - - - - - - - - - - - -
>       linear system matrix = precond matrix:
>       Matrix Object:       2 MPI processes
>         type: mpiaij
>         rows=552, cols=552
>         total: nonzeros=21126, allocated nonzeros=21126
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node (on process 0) routines
>   Down solver (pre-smoother) on level 1 -------------------------------
>     KSP Object:    (mg_levels_1_)     2 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.0508405, max = 5.88746
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_1_)     2 MPI processes
>       type: jacobi
>       linear system matrix = precond matrix:
>       Matrix Object:       2 MPI processes
>         type: mpiaij
>         rows=60879, cols=60879
>         total: nonzeros=4509729, allocated nonzeros=4509729
>         total number of mallocs used during MatSetValues calls =0
>           using I-node (on process 0) routines: found 10147 nodes, limit used is 5
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   linear system matrix = precond matrix:
>   Matrix Object:   2 MPI processes
>     type: mpiaij
>     rows=60879, cols=60879
>     total: nonzeros=4509729, allocated nonzeros=4509729
>     total number of mallocs used during MatSetValues calls =0
>       using I-node (on process 0) routines: found 10147 nodes, limit used is 5
>  Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8
> 
> --
>   Hugo Gagnon
> 
> On 2013-04-15, at 5:26 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
> 
>> It's probably not worth trying to verify the code against ILU(3) because,
>> as Jed points out, the space of algorithms is large (e.g., ILU(3) does not
>> fully define the solver unless both codes use the same node ordering,
>> shifting strategy, and whatever else your ILU is doing to make ILU not
>> suck).  It looks like you are doing 3D elasticity.  Try
>> 
>> -pc_type gamg
>> -pc_gamg_agg_nsmooths 1
>> 
>> assuming you have v3.3 or higher.
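>>
>> For example, something like this on the command line (the executable name
>> is just taken from your error output; adjust to however you launch):
>>
>>   mpiexec -n 2 ./jetstream_x86_64 -ksp_type cg -pc_type gamg \
>>       -pc_gamg_agg_nsmooths 1 -ksp_monitor -ksp_view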
>> 
>> 
>> On Apr 15, 2013, at 5:15 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> 
>>> Hugo Gagnon <opensource.petsc at user.fastmail.fm> writes:
>>> 
>>>> For the problem I'm describing my serial in-house solver does not work
>>>> with ILU(0) but works with ILU(3).  I have no option to run Jacobi.
>>>> When I apply the same problem to PETSc's PC solver with ILU(3) in
>>>> serial I get KSP_DIVERGED_INDEFINITE_PC
>>> 
>>> Does your in-house ILU(3) use a different ordering?  What shift scheme
>>> does it use?
>>> 
>>>> on the first iteration (in MPI the solution somewhat converges but
>>>> very slowly).
>>>> 
>>>> call KSPGetPC(Pksp,Ppc,Pierr)
>>>> call PCSetType(Ppc,PCILU,Pierr)
>>>> call PCFactorSetLevels(Ppc,3,Pierr)
>>>> 
>>>> This effectively changes the fill level from 0 to 3, right?
>>> 
>>> This only works in serial.  Check the -ksp_view output to see what is
>>> actually being done.  You should just call KSPSetFromOptions() and use
>>> run-time options to configure the solver.  You can move it into code
>>> later, but writing code is a slow way to figure out what works.
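>>>
>>> For example (a sketch, using the variable names from your code above):
>>>
>>>   call KSPSetFromOptions(Pksp,Pierr)
>>>
>>> and then run with -pc_type ilu -pc_factor_levels 3 -ksp_view to see
>>> exactly which ordering and shift strategy the factorization ends up
>>> using.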
>>> 
>> 
> 
