On Aug 2, 2013, at 6:11 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:

> 
> On Aug 2, 2013, at 4:52 PM, Michele Rosso <mro...@uci.edu> wrote:
> 
>> Barry,
>> 
>> thank you very much for your help. I was trying to debug the error with no 
>> success!
>> Now it works like a charm for me too!
>> I have still two questions for you:
>> 
>> 1) How did you choose the number of levels to use: trial and error?
> 
>   I just used 2 because it is more than one level :-).  When you use a finer 
> mesh you can use more levels.
> 
>> 2) For a singular system (periodic), besides the nullspace removal, should I 
>> change any parameter?
> 
>   I don't know of anything.
> 

Don't you need an iterative coarse grid solver?

>   But there is a possible problem with -pc_mg_galerkin: PETSc does not 
> transfer the null space information from the fine mesh to the other meshes, 
> and technically we really want the multigrid to remove the null space on all 
> the levels, but usually it will work without worrying about that.

I've found that the one outer null-space cleaning is plenty, and in fact it can work without it.
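
For reference, a minimal sketch of that outer null-space removal, assuming the
PETSc 3.4 Fortran interface (nullsp is an illustrative variable name):

   ! the constant vector spans the null space of the periodic Poisson operator
   call MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL_OBJECT,nullsp,ierr)
   ! have the Krylov solver project the null space out during the solve
   call KSPSetNullSpace(ksp,nullsp,ierr)
   call MatNullSpaceDestroy(nullsp,ierr)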

> 
>   Barry
> 
>> 
>> Again, thank you very much!
>> 
>> Michele
>> 
>> On 08/02/2013 02:38 PM, Barry Smith wrote:
>>>   Finally got it. My failing memory. I had to add the line
>>> 
>>>   call KSPSetDMActive(ksp,PETSC_FALSE,ierr)
>>> 
>>>   immediately after KSPSetDM()   and 
>>> 
>>>   change 
>>> 
>>>    call DMCreateMatrix(da,MATMPIAIJ,A,ierr)
>>> 
>>>   to
>>> 
>>>    call DMCreateMatrix(da,MATAIJ,A,ierr)
>>> 
>>>    so it will work both in parallel and sequentially. Then 
>>> 
>>> -ksp_monitor -ksp_converged_reason -pc_type mg -ksp_view -pc_mg_galerkin 
>>> -pc_mg_levels 2
>>> 
>>> works great with 2 levels. 
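>>> 
>>>   Putting the two changes together, the relevant setup calls become (a 
>>> sketch assembled from the code quoted below in this thread):
>>> 
>>>    call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
>>>    call KSPSetDM(ksp,da,ierr)
>>>    ! use the DM only for the grid hierarchy; A and b are built by the code
>>>    call KSPSetDMActive(ksp,PETSC_FALSE,ierr)
>>>    ! MATAIJ selects the correct AIJ format both sequentially and in parallel
>>>    call DMCreateMatrix(da,MATAIJ,A,ierr)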
>>> 
>>>   Barry
>>> 
>>> 
>>> 
>>> 
>>> On Aug 1, 2013, at 6:29 PM, Michele Rosso <mro...@uci.edu> wrote:
>>> 
>>> 
>>>> Barry, 
>>>> 
>>>> no problem. I attached the full code in test_poisson_solver.tar.gz.
>>>> My test code is a very reduced version of my production code (an 
>>>> incompressible DNS code), thus fftw3 and the 2decomp&fft library are 
>>>> needed to run it.
>>>> I attached the 2decomp&fft version I used: it is a matter of minutes to 
>>>> install, so you should not have any problem.
>>>> Please contact me with any questions or suggestions.
>>>> In the meantime I will try to debug it.
>>>> 
>>>> Michele
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On 08/01/2013 04:19 PM, Barry Smith wrote:
>>>> 
>>>>>   Run on one process until this is debugged. You can try the option 
>>>>> 
>>>>> -start_in_debugger noxterm 
>>>>> 
>>>>> and then call VecView(vec,0) in the debugger when it gives the error 
>>>>> below. It seems like some objects are not getting their initial values 
>>>>> set properly. Are you able to email the code so we can run it and figure 
>>>>> out what is going on?
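>>>>> 
>>>>>   For example, something like this (one process, plus whatever solver 
>>>>> options you normally pass; the option list here is illustrative):
>>>>> 
>>>>>    mpiexec -np 1 ./test -pc_type mg -pc_mg_galerkin -start_in_debugger noxterm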
>>>>> 
>>>>>   Barry
>>>>> 
>>>>> On Aug 1, 2013, at 5:52 PM, Michele Rosso <mro...@uci.edu> wrote:
>>>>> 
>>>>>> Barry,
>>>>>> 
>>>>>> I checked the matrix: neither the (0,0) element nor any other diagonal 
>>>>>> element is zero.
>>>>>> The matrix is symmetric positive definite (i.e. the standard Poisson 
>>>>>> matrix).
>>>>>> Also, -da_refine is never used (see previous output).
>>>>>> I tried to run with -pc_type mg -pc_mg_galerkin -mg_levels_pc_type 
>>>>>> jacobi -mg_levels_ksp_type chebyshev 
>>>>>> -mg_levels_ksp_chebyshev_estimate_eigenvalues  -ksp_view -options_left
>>>>>> 
>>>>>> and now the error is different:
>>>>>> (all four ranks report the same floating-point exception and stack 
>>>>>> trace; rank 0's output:)
>>>>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
>>>>>> [0]PETSC ERROR: Floating point exception!
>>>>>> [0]PETSC ERROR: Vec entry at local location 0 is not-a-number or infinite at beginning of function: Parameter number 2!
>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 
>>>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>>>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>> [0]PETSC ERROR: ./test on a linux-gnu-dbg named enterprise-A by mic Thu Aug  1 15:43:16 2013
>>>>>> [0]PETSC ERROR: Libraries linked from /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
>>>>>> [0]PETSC ERROR: Configure run at Thu Aug  1 12:01:44 2013
>>>>>> [0]PETSC ERROR: Configure options 
>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>> [0]PETSC ERROR: VecValidValues() line 28 in src/vec/vec/interface/rvector.c
>>>>>> [0]PETSC ERROR: MatMult() line 2174 in src/mat/interface/matrix.c
>>>>>> [0]PETSC ERROR: KSP_MatMult() line 204 in src/ksp/ksp/impls/cheby//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h
>>>>>> [0]PETSC ERROR: KSPSolve_Chebyshev() line 504 in src/ksp/ksp/impls/cheby/cheby.c
>>>>>> [0]PETSC ERROR: KSPSolve() line 441 in src/ksp/ksp/interface/itfunc.c
>>>>>> [0]PETSC ERROR: PCMGMCycle_Private() line 19 in src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: PCApply_MG() line 330 in src/ksp/pc/impls/mg/mg.c
>>>>>> [0]PETSC ERROR: PCApply() line 442 in src/ksp/pc/interface/precon.c
>>>>>> [0]PETSC ERROR: KSP_PCApply() line 227 in src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h
>>>>>> [0]PETSC ERROR: KSPSolve_CG() line 175 in src/ksp/ksp/impls/cg/cg.c
>>>>>> [0]PETSC ERROR: KSPSolve() line 441 in src/ksp/ksp/interface/itfunc.c
>>>>>> 
>>>>>> #PETSc Option Table entries:
>>>>>> -ksp_view
>>>>>> -mg_levels_ksp_chebyshev_estimate_eigenvalues
>>>>>> -mg_levels_ksp_type chebyshev
>>>>>> -mg_levels_pc_type jacobi
>>>>>> -options_left
>>>>>> -pc_mg_galerkin
>>>>>> -pc_type mg
>>>>>> #End of PETSc Option Table entries
>>>>>> There are no unused options.
>>>>>> 
>>>>>> Michele
>>>>>> 
>>>>>> 
>>>>>> On 08/01/2013 03:27 PM, Barry Smith wrote:
>>>>>> 
>>>>>> 
>>>>>>>  Do a MatView() on A before the solve (remove the -da_refine 4) so it 
>>>>>>> is small. Is the (0,0) entry 0? If the matrix has zeros on the diagonal 
>>>>>>> you cannot use Gauss-Seidel as the smoother.  You can start with 
>>>>>>> -mg_levels_pc_type jacobi -mg_levels_ksp_type chebyshev 
>>>>>>> -mg_levels_ksp_chebyshev_estimate_eigenvalues
>>>>>>> 
>>>>>>>   Is the matrix a Stokes-like matrix? If so then different 
>>>>>>> preconditioners are in order.
>>>>>>> 
>>>>>>>   Barry
>>>>>>> 
>>>>>>> On Aug 1, 2013, at 5:21 PM, Michele Rosso <mro...@uci.edu> wrote:
>>>>>>> 
>>>>>>>> Barry,
>>>>>>>> 
>>>>>>>> here is the fragment of code where I set the rhs term and the 
>>>>>>>> matrix.
>>>>>>>> 
>>>>>>>>       ! Create matrix
>>>>>>>>       call form_matrix( A , qrho, lsf, head )
>>>>>>>>       call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
>>>>>>>>       call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
>>>>>>>> 
>>>>>>>>       ! Create rhs term
>>>>>>>>       call form_rhs(work, qrho, lsf, b , head)
>>>>>>>> 
>>>>>>>>       ! Solve system
>>>>>>>>       call KSPSetFromOptions(ksp,ierr)
>>>>>>>>       call KSPSetUp(ksp,ierr)
>>>>>>>>       call KSPSolve(ksp,b,x,ierr)
>>>>>>>>       call KSPGetIterationNumber(ksp, iiter ,ierr)
>>>>>>>> 
>>>>>>>> The subroutine form_matrix returns the Mat object A, which is filled 
>>>>>>>> by using MatSetValuesStencil (sketched below).
>>>>>>>> qrho, lsf and head are additional arguments that are needed to compute 
>>>>>>>> the matrix values.
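>>>>>>>> 
>>>>>>>> For reference, a minimal sketch of one such stencil insertion, assuming 
>>>>>>>> a standard 7-point Poisson discretization with uniform spacing h and 
>>>>>>>> 0-based global indices (i,j,k); the coefficients and names are 
>>>>>>>> illustrative, not my actual code:
>>>>>>>> 
>>>>>>>>       MatStencil  row(4), col(4,7)
>>>>>>>>       PetscScalar v(7)
>>>>>>>>       row(MatStencil_i) = i; row(MatStencil_j) = j; row(MatStencil_k) = k
>>>>>>>>       ! diagonal entry of the Poisson stencil
>>>>>>>>       v(1) = 6.0/h**2
>>>>>>>>       col(MatStencil_i,1) = i;   col(MatStencil_j,1) = j; col(MatStencil_k,1) = k
>>>>>>>>       ! one of the six neighbors; the other five follow the same pattern
>>>>>>>>       v(2) = -1.0/h**2
>>>>>>>>       col(MatStencil_i,2) = i-1; col(MatStencil_j,2) = j; col(MatStencil_k,2) = k
>>>>>>>>       ! fill v(3:7) and col(:,3:7) analogously, then insert the row
>>>>>>>>       call MatSetValuesStencil(A,1,row,7,col,v,INSERT_VALUES,ierr)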
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Michele
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On 08/01/2013 03:11 PM, Barry Smith wrote:
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>>>  Where are you putting the values into the matrix? It seems the 
>>>>>>>>> matrix has no values in it. The code is stopping because the 
>>>>>>>>> Gauss-Seidel smoothing has detected zero diagonals.
>>>>>>>>> 
>>>>>>>>>   Barry
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Aug 1, 2013, at 4:47 PM, Michele Rosso <mro...@uci.edu> wrote:
>>>>>>>>> 
>>>>>>>>>> Barry,
>>>>>>>>>> 
>>>>>>>>>> I run with :   -pc_type mg -pc_mg_galerkin -da_refine 4  -ksp_view 
>>>>>>>>>> -options_left
>>>>>>>>>> 
>>>>>>>>>> For the test I use a 64^3 grid and 4 processors.
>>>>>>>>>> 
>>>>>>>>>> The output is:
>>>>>>>>>> 
>>>>>>>>>> (all four ranks report the same error and stack trace; rank 2's 
>>>>>>>>>> output:)
>>>>>>>>>> [2]PETSC ERROR: --------------------- Error Message ------------------------------------
>>>>>>>>>> [2]PETSC ERROR: Arguments are incompatible!
>>>>>>>>>> [2]PETSC ERROR: Zero diagonal on row 0!
>>>>>>>>>> [2]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>>>> [2]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013
>>>>>>>>>> [2]PETSC ERROR: See docs/changes/index.html for recent updates.
>>>>>>>>>> [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>>>>>>>>> [2]PETSC ERROR: See docs/index.html for manual pages.
>>>>>>>>>> [2]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>>>> [2]PETSC ERROR: ./test on a linux-gnu-dbg named enterprise-A by mic Thu Aug  1 14:44:04 2013
>>>>>>>>>> [2]PETSC ERROR: Libraries linked from /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
>>>>>>>>>> [2]PETSC ERROR: Configure run at Thu Aug  1 12:01:44 2013
>>>>>>>>>> [2]PETSC ERROR: Configure options
>>>>>>>>>> [2]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>>>> [2]PETSC ERROR: MatInvertDiagonal_SeqAIJ() line 1457 in src/mat/impls/aij/seq/aij.c
>>>>>>>>>> [2]PETSC ERROR: MatSOR_SeqAIJ() line 1489 in src/mat/impls/aij/seq/aij.c
>>>>>>>>>> [2]PETSC ERROR: MatSOR_MPIAIJ() line 1623 in src/mat/impls/aij/mpi/mpiaij.c
>>>>>>>>>> [2]PETSC ERROR: MatSOR() line 3649 in src/mat/interface/matrix.c
>>>>>>>>>> [2]PETSC ERROR: PCApply_SOR() line 35 in src/ksp/pc/impls/sor/sor.c
>>>>>>>>>> [2]PETSC ERROR: PCApply() line 442 in src/ksp/pc/interface/precon.c
>>>>>>>>>> [2]PETSC ERROR: KSP_PCApply() line 227 in src/ksp/ksp/interface//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h
>>>>>>>>>> [2]PETSC ERROR: KSPInitialResidual() line 64 in src/ksp/ksp/interface/itres.c
>>>>>>>>>> [2]PETSC ERROR: KSPSolve_GMRES() line 239 in src/ksp/ksp/impls/gmres/gmres.c
>>>>>>>>>> [2]PETSC ERROR: KSPSolve() line 441 in src/ksp/ksp/interface/itfunc.c
>>>>>>>>>> [2]PETSC ERROR: KSPSolve_Chebyshev() line 409 in src/ksp/ksp/impls/cheby/cheby.c
>>>>>>>>>> [2]PETSC ERROR: KSPSolve() line 441 in src/ksp/ksp/interface/itfunc.c
>>>>>>>>>> [2]PETSC ERROR: PCMGMCycle_Private() line 19 in src/ksp/pc/impls/mg/mg.c
>>>>>>>>>> [2]PETSC ERROR: PCApply_MG() line 330 in src/ksp/pc/impls/mg/mg.c
>>>>>>>>>> [2]PETSC ERROR: PCApply() line 442 in src/ksp/pc/interface/precon.c
>>>>>>>>>> [2]PETSC ERROR: KSP_PCApply() line 227 in src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h
>>>>>>>>>> [2]PETSC ERROR: KSPSolve_CG() line 175 in src/ksp/ksp/impls/cg/cg.c
>>>>>>>>>> [2]PETSC ERROR: KSPSolve() line 441 in src/ksp/ksp/interface/itfunc.c
>>>>>>>>>> #PETSc Option Table entries:
>>>>>>>>>> -da_refine 4
>>>>>>>>>> -ksp_view
>>>>>>>>>> -options_left
>>>>>>>>>> -pc_mg_galerkin
>>>>>>>>>> -pc_type mg
>>>>>>>>>> #End of PETSc Option Table entries
>>>>>>>>>> There is one unused database option. It is:
>>>>>>>>>> Option left: name:-da_refine value: 4
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> Here is the code I use to setup DMDA and KSP:
>>>>>>>>>> 
>>>>>>>>>>    call DMDACreate3d( PETSC_COMM_WORLD ,                             &
>>>>>>>>>>                   & DMDA_BOUNDARY_PERIODIC , DMDA_BOUNDARY_PERIODIC, &
>>>>>>>>>>                   & DMDA_BOUNDARY_PERIODIC , DMDA_STENCIL_STAR ,     &
>>>>>>>>>>                   & N_Z , N_Y , N_X , N_B3 , N_B2 , 1_ip , 1_ip ,    &
>>>>>>>>>>                   & 1_ip , int(NNZ,ip) , int(NNY,ip) , NNX , da , ierr)
>>>>>>>>>>         ! Create Global Vectors
>>>>>>>>>>    call DMCreateGlobalVector(da,b,ierr)
>>>>>>>>>>    call VecDuplicate(b,x,ierr)
>>>>>>>>>>         ! Set initial guess for first use of the module to 0
>>>>>>>>>>    call VecSet(x,0.0_rp,ierr)
>>>>>>>>>>         ! Create matrix
>>>>>>>>>>    call DMCreateMatrix(da,MATMPIAIJ,A,ierr)
>>>>>>>>>>          ! Create solver
>>>>>>>>>>    call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
>>>>>>>>>>    call KSPSetDM(ksp,da,ierr)
>>>>>>>>>>    call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
>>>>>>>>>> !    call KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN,ierr)
>>>>>>>>>>    call KSPSetType(ksp,KSPCG,ierr)
>>>>>>>>>>    call KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED,ierr) ! Real residual
>>>>>>>>>>    call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr)
>>>>>>>>>>    call KSPSetTolerances(ksp, tol ,PETSC_DEFAULT_DOUBLE_PRECISION,&
>>>>>>>>>>        & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,ierr)
>>>>>>>>>> 
>>>>>>>>>>    ! To allow using option from command line
>>>>>>>>>>    call KSPSetFromOptions(ksp,ierr)
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> Michele
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On 08/01/2013 01:04 PM, Barry Smith wrote:
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>>   You can use the option -pc_mg_galerkin  and then MG will compute 
>>>>>>>>>>> the coarser matrices with a sparse matrix-matrix-matrix product, so 
>>>>>>>>>>> you should not need to change your code to try it out.  You still 
>>>>>>>>>>> need to use KSPSetDM() and -da_refine n to get it working.
>>>>>>>>>>> 
>>>>>>>>>>>   If it doesn't work, send us all the output.
>>>>>>>>>>> 
>>>>>>>>>>>   Barry
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Aug 1, 2013, at 2:47 PM, Michele Rosso <mro...@uci.edu> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>> Barry,
>>>>>>>>>>>> you are correct, I did not use it. I think I now see where the 
>>>>>>>>>>>> problem is. Correct me if I am wrong, but for geometric multigrid 
>>>>>>>>>>>> to work, the ksp must be provided with subroutines to compute the 
>>>>>>>>>>>> matrix and the rhs at every level through KSPSetComputeOperators 
>>>>>>>>>>>> and KSPSetComputeRHS, registered as sketched below.
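>>>>>>>>>>>> 
>>>>>>>>>>>> (A minimal sketch of just the registration calls, assuming the 
>>>>>>>>>>>> PETSc 3.4 Fortran interface supports these callbacks as in the C 
>>>>>>>>>>>> examples; the callback subroutines themselves are omitted and 
>>>>>>>>>>>> their names are illustrative:)
>>>>>>>>>>>> 
>>>>>>>>>>>>    external ComputeMatrix, ComputeRHS
>>>>>>>>>>>>    ! PETSc would call these back on every multigrid level
>>>>>>>>>>>>    call KSPSetComputeOperators(ksp,ComputeMatrix,PETSC_NULL_OBJECT,ierr)
>>>>>>>>>>>>    call KSPSetComputeRHS(ksp,ComputeRHS,PETSC_NULL_OBJECT,ierr)
>>>>>>>>>>>> 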
>>>>>>>>>>>> I do not do that: I simply build a rhs vector and a matrix and 
>>>>>>>>>>>> then solve the system.
>>>>>>>>>>>> If you confirm what I just wrote, I will try to modify my code 
>>>>>>>>>>>> accordingly and get back to you.
>>>>>>>>>>>> Thank you,
>>>>>>>>>>>> Michele
>>>>>>>>>>>> On 08/01/2013 11:48 AM, Barry Smith wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>>>  Do you use KSPSetDM(ksp,da);  ?  See 
>>>>>>>>>>>>> src/ksp/ksp/examples/tutorials/ex19.c
>>>>>>>>>>>>> 
>>>>>>>>>>>>>   Barry
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Aug 1, 2013, at 1:35 PM, Michele Rosso <mro...@uci.edu> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Barry,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I am using a finite-difference uniform Cartesian grid with DMDA, 
>>>>>>>>>>>>>> and so far it has not given me any problems.
>>>>>>>>>>>>>> I am using a ksp solver (not snes). In a previous thread, I was 
>>>>>>>>>>>>>> told an odd number of grid points was needed for geometric 
>>>>>>>>>>>>>> multigrid: is that correct?
>>>>>>>>>>>>>> I tried to run my case with
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> -pc_type mg -da_refine 4
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> but it does not seem to use the -da_refine option:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> mpiexec   -np 4 ./test  -pc_type mg -da_refine 4  -ksp_view 
>>>>>>>>>>>>>> -options_left
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> KSP Object: 4 MPI processes
>>>>>>>>>>>>>> type: cg
>>>>>>>>>>>>>> maximum iterations=10000
>>>>>>>>>>>>>> tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>>>>>>>>>>>>>> left preconditioning
>>>>>>>>>>>>>> using nonzero initial guess
>>>>>>>>>>>>>> using UNPRECONDITIONED norm type for convergence test
>>>>>>>>>>>>>> PC Object: 4 MPI processes
>>>>>>>>>>>>>> type: mg
>>>>>>>>>>>>>>   MG: type is MULTIPLICATIVE, levels=1 cycles=v
>>>>>>>>>>>>>>     Cycles per PCApply=1
>>>>>>>>>>>>>>     Not using Galerkin computed coarse grid matrices
>>>>>>>>>>>>>> Coarse grid solver -- level -------------------------------
>>>>>>>>>>>>>>   KSP Object:    (mg_levels_0_)     4 MPI processes
>>>>>>>>>>>>>>     type: chebyshev
>>>>>>>>>>>>>>       Chebyshev: eigenvalue estimates:  min = 0.134543, max = 
>>>>>>>>>>>>>> 1.47998
>>>>>>>>>>>>>>       Chebyshev: estimated using:  [0 0.1; 0 1.1]
>>>>>>>>>>>>>>       KSP Object:        (mg_levels_0_est_)         4 MPI 
>>>>>>>>>>>>>> processes
>>>>>>>>>>>>>>         type: gmres
>>>>>>>>>>>>>>           GMRES: restart=30, using Classical (unmodified) 
>>>>>>>>>>>>>> Gram-Schmidt Orthogonalization with no iterative refinement
>>>>>>>>>>>>>>           GMRES: happy breakdown tolerance 1e-30
>>>>>>>>>>>>>>         maximum iterations=10, initial guess is zero
>>>>>>>>>>>>>>         tolerances:  relative=1e-05, absolute=1e-50, 
>>>>>>>>>>>>>> divergence=10000
>>>>>>>>>>>>>>         left preconditioning
>>>>>>>>>>>>>>         using NONE norm type for convergence test
>>>>>>>>>>>>>>       PC Object:        (mg_levels_0_)         4 MPI processes
>>>>>>>>>>>>>>         type: sor
>>>>>>>>>>>>>>           SOR: type = local_symmetric, iterations = 1, local 
>>>>>>>>>>>>>> iterations = 1, omega = 1
>>>>>>>>>>>>>>         linear system matrix = precond matrix:
>>>>>>>>>>>>>>         Matrix Object:           4 MPI processes
>>>>>>>>>>>>>>           type: mpiaij
>>>>>>>>>>>>>>           rows=262144, cols=262144
>>>>>>>>>>>>>>           total: nonzeros=1835008, allocated nonzeros=1835008
>>>>>>>>>>>>>>           total number of mallocs used during MatSetValues calls 
>>>>>>>>>>>>>> =0
>>>>>>>>>>>>>>     maximum iterations=1, initial guess is zero
>>>>>>>>>>>>>>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>>>>>>>>>>>>>     left preconditioning
>>>>>>>>>>>>>>     using NONE norm type for convergence test
>>>>>>>>>>>>>>   PC Object:    (mg_levels_0_)     4 MPI processes
>>>>>>>>>>>>>>     type: sor
>>>>>>>>>>>>>>       SOR: type = local_symmetric, iterations = 1, local 
>>>>>>>>>>>>>> iterations = 1, omega = 1
>>>>>>>>>>>>>>     linear system matrix = precond matrix:
>>>>>>>>>>>>>>     Matrix Object:       4 MPI processes
>>>>>>>>>>>>>>       type: mpiaij
>>>>>>>>>>>>>>       rows=262144, cols=262144
>>>>>>>>>>>>>>       total: nonzeros=1835008, allocated nonzeros=1835008
>>>>>>>>>>>>>>       total number of mallocs used during MatSetValues calls =0
>>>>>>>>>>>>>> linear system matrix = precond matrix:
>>>>>>>>>>>>>> Matrix Object:   4 MPI processes
>>>>>>>>>>>>>>   type: mpiaij
>>>>>>>>>>>>>>   rows=262144, cols=262144
>>>>>>>>>>>>>>   total: nonzeros=1835008, allocated nonzeros=1835008
>>>>>>>>>>>>>>   total number of mallocs used during MatSetValues calls =0
>>>>>>>>>>>>>> Solution       =    1.53600013     sec
>>>>>>>>>>>>>> #PETSc Option Table entries:
>>>>>>>>>>>>>> -da_refine 4
>>>>>>>>>>>>>> -ksp_view
>>>>>>>>>>>>>> -options_left
>>>>>>>>>>>>>> -pc_type mg
>>>>>>>>>>>>>> #End of PETSc Option Table entries
>>>>>>>>>>>>>> There is one unused database option. It is:
>>>>>>>>>>>>>> Option left: name:-da_refine value: 4
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Michele
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On 08/01/2013 11:21 AM, Barry Smith wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>   What kind of mesh are you using? Are you using DMDA? If you 
>>>>>>>>>>>>>>> are using DMDA (and have written your code to use it 
>>>>>>>>>>>>>>> "correctly") then it should be trivial to run with geometric 
>>>>>>>>>>>>>>> multigrid and geometric multigrid should be a bit faster.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>   For example on src/snes/examples/tutorials/ex19.c   I run 
>>>>>>>>>>>>>>> with ./ex19 -pc_type mg -da_refine 4 and it refines the 
>>>>>>>>>>>>>>> original DMDA 4 times and uses geometric multigrid with 5 
>>>>>>>>>>>>>>> levels.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>   Barry
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Aug 1, 2013, at 1:14 PM, Michele Rosso <mro...@uci.edu> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> I am successfully using PETSc (v3.4.2) to solve a 3D Poisson 
>>>>>>>>>>>>>>>> equation with CG + GAMG, as was suggested to me in a previous 
>>>>>>>>>>>>>>>> thread.
>>>>>>>>>>>>>>>> So far I am using GAMG with the default settings, i.e.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> -pc_type gamg -pc_gamg_agg_nsmooths 1
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> The speed of the solution is satisfactory, but I would like to 
>>>>>>>>>>>>>>>> know if you have any suggestions to further speed it up, 
>>>>>>>>>>>>>>>> particularly whether there are any parameters worth looking 
>>>>>>>>>>>>>>>> into to achieve an even faster solution, for example the 
>>>>>>>>>>>>>>>> number of levels and so on.
>>>>>>>>>>>>>>>> So far I am using Dirichlet BCs for my test case, but I will 
>>>>>>>>>>>>>>>> soon have periodic conditions: in this case, does GAMG require 
>>>>>>>>>>>>>>>> particular settings?
>>>>>>>>>>>>>>>> Finally, I did not try geometric multigrid: do you think it is 
>>>>>>>>>>>>>>>> worth a shot?
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Here are my current settings:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> I run with
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_view -options_left
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> and the output is:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> KSP Object: 4 MPI processes
>>>>>>>>>>>>>>>>  type: cg
>>>>>>>>>>>>>>>>  maximum iterations=10000
>>>>>>>>>>>>>>>>  tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>>>>>>>>>>>>>>>>  left preconditioning
>>>>>>>>>>>>>>>>  using nonzero initial guess
>>>>>>>>>>>>>>>>  using UNPRECONDITIONED norm type for convergence test
>>>>>>>>>>>>>>>> PC Object: 4 MPI processes
>>>>>>>>>>>>>>>>  type: gamg
>>>>>>>>>>>>>>>>    MG: type is MULTIPLICATIVE, levels=3 cycles=v
>>>>>>>>>>>>>>>>      Cycles per PCApply=1
>>>>>>>>>>>>>>>>      Using Galerkin computed coarse grid matrices
>>>>>>>>>>>>>>>>  Coarse grid solver -- level -------------------------------
>>>>>>>>>>>>>>>>    KSP Object:    (mg_coarse_)     4 MPI processes
>>>>>>>>>>>>>>>>      type: preonly
>>>>>>>>>>>>>>>>      maximum iterations=1, initial guess is zero
>>>>>>>>>>>>>>>>      tolerances:  relative=1e-05, absolute=1e-50, 
>>>>>>>>>>>>>>>> divergence=10000
>>>>>>>>>>>>>>>>      left preconditioning
>>>>>>>>>>>>>>>>      using NONE norm type for convergence test
>>>>>>>>>>>>>>>>    PC Object:    (mg_coarse_)     4 MPI processes
>>>>>>>>>>>>>>>>      type: bjacobi
>>>>>>>>>>>>>>>>        block Jacobi: number of blocks = 4
>>>>>>>>>>>>>>>>        Local solve info for each block is in the following KSP 
>>>>>>>>>>>>>>>> and PC objects:
>>>>>>>>>>>>>>>>      [0] number of local blocks = 1, first local block number 
>>>>>>>>>>>>>>>> = 0
>>>>>>>>>>>>>>>>        [0] local block number 0
>>>>>>>>>>>>>>>>        KSP Object:        (mg_coarse_sub_)         1 MPI processes
>>>>>>>>>>>>>>>>          type: preonly
>>>>>>>>>>>>>>>>          maximum iterations=1, initial guess is zero
>>>>>>>>>>>>>>>>          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>>>>>>>>>>>>>>>          left preconditioning
>>>>>>>>>>>>>>>>          using NONE norm type for convergence test
>>>>>>>>>>>>>>>>        PC Object:        (mg_coarse_sub_)         1 MPI processes
>>>>>>>>>>>>>>>>          type: lu
>>>>>>>>>>>>>>>>            LU: out-of-place factorization
>>>>>>>>>>>>>>>>            tolerance for zero pivot 2.22045e-14
>>>>>>>>>>>>>>>>            using diagonal shift on blocks to prevent zero pivot
>>>>>>>>>>>>>>>>            matrix ordering: nd
>>>>>>>>>>>>>>>>            factor fill ratio given 5, needed 4.13207
>>>>>>>>>>>>>>>>              Factored matrix follows:
>>>>>>>>>>>>>>>>                Matrix Object:                 1 MPI processes
>>>>>>>>>>>>>>>>                  type: seqaij
>>>>>>>>>>>>>>>>                  rows=395, cols=395
>>>>>>>>>>>>>>>>                  package used to perform factorization: petsc
>>>>>>>>>>>>>>>>                  total: nonzeros=132379, allocated nonzeros=132379
>>>>>>>>>>>>>>>>                  total number of mallocs used during MatSetValues calls =0
>>>>>>>>>>>>>>>>                    not using I-node routines
>>>>>>>>>>>>>>>>          linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>          Matrix Object:           1 MPI processes
>>>>>>>>>>>>>>>>            type: seqaij
>>>>>>>>>>>>>>>>            rows=395, cols=395
>>>>>>>>>>>>>>>>            total: nonzeros=32037, allocated nonzeros=32037
>>>>>>>>>>>>>>>>            total number of mallocs used during MatSetValues calls =0
>>>>>>>>>>>>>>>>              not using I-node routines
>>>>>>>>>>>>>>>>        - - - - - - - - - - - - - - - - - -
>>>>>>>>>>>>>>>>        (the interleaved sub-solver views from processes 1-3 are 
>>>>>>>>>>>>>>>>        identical except that their local blocks are empty: 
>>>>>>>>>>>>>>>>        rows=0, cols=0, factor fill ratio given 5, needed 0)
>>>>>>>>>>>>>>>>      [1] number of local blocks = 1, first local block number 
>>>>>>>>>>>>>>>> = 1
>>>>>>>>>>>>>>>>        [1] local block number 0
>>>>>>>>>>>>>>>>        - - - - - - - - - - - - - - - - - -
>>>>>>>>>>>>>>>>      [2] number of local blocks = 1, first local block number 
>>>>>>>>>>>>>>>> = 2
>>>>>>>>>>>>>>>>        [2] local block number 0
>>>>>>>>>>>>>>>>        - - - - - - - - - - - - - - - - - -
>>>>>>>>>>>>>>>>      [3] number of local blocks = 1, first local block number 
>>>>>>>>>>>>>>>> = 3
>>>>>>>>>>>>>>>>        [3] local block number 0
>>>>>>>>>>>>>>>>        - - - - - - - - - - - - - - - - - -
>>>>>>>>>>>>>>>>      linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>      Matrix Object:       4 MPI processes
>>>>>>>>>>>>>>>>        type: mpiaij
>>>>>>>>>>>>>>>>        rows=395, cols=395
>>>>>>>>>>>>>>>>        total: nonzeros=32037, allocated nonzeros=32037
>>>>>>>>>>>>>>>>        total number of mallocs used during MatSetValues calls 
>>>>>>>>>>>>>>>> =0
>>>>>>>>>>>>>>>>          not using I-node (on process 0) routines
>>>>>>>>>>>>>>>>  Down solver (pre-smoother) on level 1 
>>>>>>>>>>>>>>>> -------------------------------
>>>>>>>>>>>>>>>>    KSP Object:    (mg_levels_1_)     4 MPI processes
>>>>>>>>>>>>>>>>      type: chebyshev
>>>>>>>>>>>>>>>>        Chebyshev: eigenvalue estimates:  min = 0.0636225, max 
>>>>>>>>>>>>>>>> = 1.33607
>>>>>>>>>>>>>>>>      maximum iterations=2
>>>>>>>>>>>>>>>>      tolerances:  relative=1e-05, absolute=1e-50, 
>>>>>>>>>>>>>>>> divergence=10000
>>>>>>>>>>>>>>>>      left preconditioning
>>>>>>>>>>>>>>>>      using nonzero initial guess
>>>>>>>>>>>>>>>>      using NONE norm type for convergence test
>>>>>>>>>>>>>>>>    PC Object:    (mg_levels_1_)     4 MPI processes
>>>>>>>>>>>>>>>>      type: jacobi
>>>>>>>>>>>>>>>>      linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>      Matrix Object:       4 MPI processes
>>>>>>>>>>>>>>>>        type: mpiaij
>>>>>>>>>>>>>>>>        rows=23918, cols=23918
>>>>>>>>>>>>>>>>        total: nonzeros=818732, allocated nonzeros=818732
>>>>>>>>>>>>>>>>        total number of mallocs used during MatSetValues calls 
>>>>>>>>>>>>>>>> =0
>>>>>>>>>>>>>>>>          not using I-node (on process 0) routines
>>>>>>>>>>>>>>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>>>>>>>>>>>>>>  Down solver (pre-smoother) on level 2 
>>>>>>>>>>>>>>>> -------------------------------
>>>>>>>>>>>>>>>>    KSP Object:    (mg_levels_2_)     4 MPI processes
>>>>>>>>>>>>>>>>      type: chebyshev
>>>>>>>>>>>>>>>>        Chebyshev: eigenvalue estimates:  min = 0.0971369, max 
>>>>>>>>>>>>>>>> = 2.03987
>>>>>>>>>>>>>>>>      maximum iterations=2
>>>>>>>>>>>>>>>>      tolerances:  relative=1e-05, absolute=1e-50, 
>>>>>>>>>>>>>>>> divergence=10000
>>>>>>>>>>>>>>>>      left preconditioning
>>>>>>>>>>>>>>>>      using nonzero initial guess
>>>>>>>>>>>>>>>>      using NONE norm type for convergence test
>>>>>>>>>>>>>>>>    PC Object:    (mg_levels_2_)     4 MPI processes
>>>>>>>>>>>>>>>>      type: jacobi
>>>>>>>>>>>>>>>>      linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>      Matrix Object:       4 MPI processes
>>>>>>>>>>>>>>>>        type: mpiaij
>>>>>>>>>>>>>>>>        rows=262144, cols=262144
>>>>>>>>>>>>>>>>        total: nonzeros=1835008, allocated nonzeros=1835008
>>>>>>>>>>>>>>>>        total number of mallocs used during MatSetValues calls 
>>>>>>>>>>>>>>>> =0
>>>>>>>>>>>>>>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>>>>>>>>>>>>>>  linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>  Matrix Object:   4 MPI processes
>>>>>>>>>>>>>>>>    type: mpiaij
>>>>>>>>>>>>>>>>    rows=262144, cols=262144
>>>>>>>>>>>>>>>>    total: nonzeros=1835008, allocated nonzeros=1835008
>>>>>>>>>>>>>>>>    total number of mallocs used during MatSetValues calls =0
>>>>>>>>>>>>>>>> #PETSc Option Table entries:
>>>>>>>>>>>>>>>> -ksp_view
>>>>>>>>>>>>>>>> -options_left
>>>>>>>>>>>>>>>> -pc_gamg_agg_nsmooths 1
>>>>>>>>>>>>>>>> -pc_type gamg
>>>>>>>>>>>>>>>> #End of PETSc Option Table entries
>>>>>>>>>>>>>>>> There are no unused options.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Thank you,
>>>>>>>>>>>>>>>> Michele
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>> <test_poisson_solver.tar.gz><2decomp_fft-1.5.847-modified.tar.gz>
>>>> 
>>> 
>> 
> 
