(Putting this back on the list) I see. My commands for the parallel ILU
might be out of date. Does anyone know the current syntax?
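
My best guess, from memory, is that in parallel one goes through a block
preconditioner, e.g.

-pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4

or, with additive Schwarz,

-pc_type asm -sub_pc_type ilu

but please correct me if that is stale.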

Thanks.


On Thu, May 29, 2014 at 7:28 PM, walter kou <[email protected]> wrote:

> Yes,
> In serial, if I put "-pc_type ilu" or just "-pc_type lu", the residual is
> small.
>
> In parallel, when I put "-sub_pc_type ilu", the residual is still big.
>
>
> On Thu, May 29, 2014 at 6:26 PM, Vikram Garg <[email protected]>
> wrote:
>
>> So it works in serial, but not in parallel?
>>
>>
>> On Thu, May 29, 2014 at 7:25 PM, walter kou <[email protected]>
>> wrote:
>>
>>> -sub_pc_type ilu:
>>>
>>> The residual is big.
>>>
>>> 250 KSP Residual norm 1.289444373237e-05 % max 3.713155970657e+05 min
>>> 2.280681360715e-03 max/min 1.628090637568e+08
>>>
>>> Linear solver converged at step: 25, final residual: 1.28944e-05
>>>
>>>
>>> On Thu, May 29, 2014 at 6:21 PM, Vikram Garg <[email protected]>
>>> wrote:
>>>
>>>> Right, I don't remember the syntax for ILU in parallel, but I believe it
>>>> is
>>>>
>>>> -sub_pc_type ilu
>>>>
>>>> Try that.
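>>>>
>>>> (If I recall correctly, -sub_pc_type only takes effect with a block
>>>> preconditioner such as the default bjacobi, and the fill levels are then
>>>> set with -sub_pc_factor_levels rather than -pc_factor_levels, e.g.
>>>>
>>>> -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4
>>>>
>>>> but someone should correct me if that has changed.)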
>>>>
>>>>
>>>> On Thu, May 29, 2014 at 7:18 PM, walter kou <[email protected]>
>>>> wrote:
>>>>
>>>>> The above is with mpiexec -np 2.
>>>>> Running with a single processor is OK.
>>>>>
>>>>> 249 KSP Residual norm 4.369357920020e-19 % max 5.529938049388e+01 min
>>>>> 3.238293945443e-16 max/min 1.707670193797e+17
>>>>> 250 KSP Residual norm 4.369357920020e-19 % max 5.530031976618e+01 min
>>>>> 5.037092721368e-16 max/min 1.097861858520e+17
>>>>>
>>>>> Linear solver converged at step: 29, final residual: 4.36936e-19
>>>>>
>>>>>
>>>>> On Thu, May 29, 2014 at 6:15 PM, walter kou <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> It seems I do not have PETSc ILU:
>>>>>>
>>>>>> [0]PETSC ERROR: --------------------- Error Message
>>>>>> ------------------------------------
>>>>>> [0]PETSC ERROR: No support for this operation for this object type!
>>>>>> [0]PETSC ERROR: Matrix format mpiaij does not have a built-in PETSc
>>>>>>
>>>>>>
>>>>>> On Thu, May 29, 2014 at 6:11 PM, Vikram Garg <[email protected]
>>>>>> > wrote:
>>>>>>
>>>>>>> That looks good. So it is most likely a linear solver settings issue.
>>>>>>> Try these options:
>>>>>>>
>>>>>>> -ksp_monitor_singular_value -ksp_gmres_modifiedgramschmidt
>>>>>>> -ksp_gmres_restart 500 -pc_type ilu -pc_factor_levels 4
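>>>>>>>
>>>>>>> For example, with ./your_app standing in for your executable (just a
>>>>>>> placeholder name), something like
>>>>>>>
>>>>>>> ./your_app -ksp_monitor_singular_value -ksp_gmres_modifiedgramschmidt \
>>>>>>>   -ksp_gmres_restart 500 -pc_type ilu -pc_factor_levels 4
>>>>>>>
>>>>>>> should pass them straight through to PETSc.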
>>>>>>>
>>>>>>>
>>>>>>> On Thu, May 29, 2014 at 7:10 PM, walter kou <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I can only try a single processor: -pc_type lu
>>>>>>>> The residual is good.
>>>>>>>>
>>>>>>>> Linear solver converged at step: 53, final residual: 4.67731e-30
>>>>>>>> begin solve: iteration #54
>>>>>>>>
>>>>>>>> Linear solver converged at step: 54, final residual: 1.35495e-30
>>>>>>>> begin solve: iteration #55
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, May 29, 2014 at 6:07 PM, Vikram Garg <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> I see. What happens with LU?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, May 29, 2014 at 7:06 PM, walter kou <[email protected]
>>>>>>>>> > wrote:
>>>>>>>>>
>>>>>>>>>> I tried -ksp_gmres_restart 5000, with the residual as below:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Linear solver converged at step: 53, final residual: 1.5531e-05
>>>>>>>>>> begin solve: iteration #54
>>>>>>>>>>
>>>>>>>>>> Linear solver converged at step: 54, final residual: 1.55013e-05
>>>>>>>>>> begin solve: iteration #55
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Thu, May 29, 2014 at 6:01 PM, Vikram Garg <
>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Try using a higher number of restart steps, say 500,
>>>>>>>>>>>
>>>>>>>>>>> -ksp_gmres_restart 500
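>>>>>>>>>>>
>>>>>>>>>>> For example (./your_app is just a placeholder for your executable):
>>>>>>>>>>>
>>>>>>>>>>> ./your_app -ksp_gmres_restart 500 -ksp_monitor_singular_value
>>>>>>>>>>>
>>>>>>>>>>> Keep in mind the restart length is also the number of Krylov basis
>>>>>>>>>>> vectors GMRES keeps around, so memory use grows with it.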
>>>>>>>>>>>
>>>>>>>>>>> Thanks.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Thu, May 29, 2014 at 7:00 PM, walter kou <
>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Here is the output from -ksp_monitor_singular_value:
>>>>>>>>>>>>
>>>>>>>>>>>> 250 KSP Residual norm 8.867989901517e-05 % max
>>>>>>>>>>>> 1.372837955534e+05 min 1.502965355599e+01 max/min 
>>>>>>>>>>>> 9.134195611495e+03
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, May 29, 2014 at 5:54 PM, Vikram Garg <
>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Try the following solver options:
>>>>>>>>>>>>>
>>>>>>>>>>>>> -pc_type lu -pc_factor_mat_solver_package superlu
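>>>>>>>>>>>>>
>>>>>>>>>>>>> (SuperLU itself is a sequential solver; for a parallel run, and
>>>>>>>>>>>>> assuming your PETSc was configured with the corresponding
>>>>>>>>>>>>> packages, the analogue would be
>>>>>>>>>>>>>
>>>>>>>>>>>>> -pc_type lu -pc_factor_mat_solver_package superlu_dist
>>>>>>>>>>>>>
>>>>>>>>>>>>> or mumps in place of superlu_dist.)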
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> What was the output from -ksp_monitor_singular_value?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, May 29, 2014 at 6:52 PM, walter kou <
>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Vikram,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> How do I try using just LU or SuperLU? I am pretty ignorant
>>>>>>>>>>>>>> about the proper KSP options on the command line.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Could you point out any introductory materials on this?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks so much.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, May 28, 2014 at 11:54 AM, Vikram Garg <
>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hey Walter,
>>>>>>>>>>>>>>>                      Have you tried using just LU or SuperLU?
>>>>>>>>>>>>>>> You might also want to check what the output is for
>>>>>>>>>>>>>>> -ksp_monitor_singular_value and increase the GMRES restart
>>>>>>>>>>>>>>> steps.
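>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> For example (./your_app is just a placeholder for your
>>>>>>>>>>>>>>> executable), something like
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ./your_app -pc_type lu -ksp_monitor_singular_value -ksp_gmres_restart 300
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> would be a reasonable first test.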
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, May 26, 2014 at 1:57 PM, walter kou <
>>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>>>>> I ran a larger case with 200 elements and found that for
>>>>>>>>>>>>>>>> each calculation in the iteration,
>>>>>>>>>>>>>>>> system.final_linear_residual() is about 0.5%.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 1) Is system.final_linear_residual() the residual r = b - A X*
>>>>>>>>>>>>>>>> (where X* is the computed solution)? Is that right?
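>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> (If I understand the PETSc options correctly, I could check
>>>>>>>>>>>>>>>> this myself by running with -ksp_monitor_true_residual, which
>>>>>>>>>>>>>>>> should print the unpreconditioned residual norm ||b - A x||
>>>>>>>>>>>>>>>> alongside the preconditioned one.)
>>>>>>>>>>>>>>>>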
>>>>>>>>>>>>>>>> 2) It seems the final residual is too big and the equation is
>>>>>>>>>>>>>>>> not solved well (here |b| is about 1e-4). Does anyone have
>>>>>>>>>>>>>>>> suggestions for playing with solvers for Ax = b?
>>>>>>>>>>>>>>>> My case is nonlinear elasticity, and A is almost symmetric
>>>>>>>>>>>>>>>> positive definite (only the components influenced by the
>>>>>>>>>>>>>>>> boundary conditions break the symmetry).
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Also, following Paul's suggestion, I used -ksp_view and
>>>>>>>>>>>>>>>> found that my solver information is as below:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> KSP Object: 4 MPI processes
>>>>>>>>>>>>>>>>   type: gmres
>>>>>>>>>>>>>>>>     GMRES: restart=30, using Classical (unmodified)
>>>>>>>>>>>>>>>> Gram-Schmidt
>>>>>>>>>>>>>>>> Orthogonalization with no iterative refinement
>>>>>>>>>>>>>>>>     GMRES: happy breakdown tolerance 1e-30
>>>>>>>>>>>>>>>>   maximum iterations=250
>>>>>>>>>>>>>>>>   tolerances:  relative=1e-08, absolute=1e-50,
>>>>>>>>>>>>>>>> divergence=10000
>>>>>>>>>>>>>>>>   left preconditioning
>>>>>>>>>>>>>>>>   using nonzero initial guess
>>>>>>>>>>>>>>>>   using PRECONDITIONED norm type for convergence test
>>>>>>>>>>>>>>>> PC Object: 4 MPI processes
>>>>>>>>>>>>>>>>   type: bjacobi
>>>>>>>>>>>>>>>>     block Jacobi: number of blocks = 4
>>>>>>>>>>>>>>>>     Local solve is same for all blocks, in the following
>>>>>>>>>>>>>>>> KSP and PC objects:
>>>>>>>>>>>>>>>>   KSP Object:  (sub_)   1 MPI processes
>>>>>>>>>>>>>>>>     type: preonly
>>>>>>>>>>>>>>>>     maximum iterations=10000, initial guess is zero
>>>>>>>>>>>>>>>>     tolerances:  relative=1e-05, absolute=1e-50,
>>>>>>>>>>>>>>>> divergence=10000
>>>>>>>>>>>>>>>>     left preconditioning
>>>>>>>>>>>>>>>>     using NONE norm type for convergence test
>>>>>>>>>>>>>>>>   PC Object:  (sub_)   1 MPI processes
>>>>>>>>>>>>>>>>     type: ilu
>>>>>>>>>>>>>>>>       ILU: out-of-place factorization
>>>>>>>>>>>>>>>>       0 levels of fill
>>>>>>>>>>>>>>>>       tolerance for zero pivot 2.22045e-14
>>>>>>>>>>>>>>>>       using diagonal shift to prevent zero pivot
>>>>>>>>>>>>>>>>       matrix ordering: natural
>>>>>>>>>>>>>>>>       factor fill ratio given 1, needed 1
>>>>>>>>>>>>>>>>         Factored matrix follows:
>>>>>>>>>>>>>>>>           Matrix Object:           1 MPI processes
>>>>>>>>>>>>>>>>             type: seqaij
>>>>>>>>>>>>>>>>             rows=324, cols=324
>>>>>>>>>>>>>>>>             package used to perform factorization: petsc
>>>>>>>>>>>>>>>>             total: nonzeros=16128, allocated nonzeros=16128
>>>>>>>>>>>>>>>>             total number of mallocs used during
>>>>>>>>>>>>>>>> MatSetValues calls =0
>>>>>>>>>>>>>>>>               not using I-node routines
>>>>>>>>>>>>>>>>     linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>     Matrix Object:    ()     1 MPI processes
>>>>>>>>>>>>>>>>       type: seqaij
>>>>>>>>>>>>>>>>       rows=324, cols=324
>>>>>>>>>>>>>>>>       total: nonzeros=16128, allocated nonzeros=19215
>>>>>>>>>>>>>>>>       total number of mallocs used during MatSetValues
>>>>>>>>>>>>>>>> calls =0
>>>>>>>>>>>>>>>>         not using I-node routines
>>>>>>>>>>>>>>>>   linear system matrix = precond matrix:
>>>>>>>>>>>>>>>>   Matrix Object:  ()   4 MPI processes
>>>>>>>>>>>>>>>>     type: mpiaij
>>>>>>>>>>>>>>>>     rows=990, cols=990
>>>>>>>>>>>>>>>>     total: nonzeros=58590, allocated nonzeros=64512
>>>>>>>>>>>>>>>>     total number of mallocs used during MatSetValues calls
>>>>>>>>>>>>>>>> =0
>>>>>>>>>>>>>>>>       not using I-node (on process 0) routines
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /*********************************************************
>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Walter
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, May 22, 2014 at 12:18 PM, Paul T. Bauman <
>>>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>> > On Thu, May 22, 2014 at 12:11 PM, walter kou <
>>>>>>>>>>>>>>>> [email protected]>wrote:
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>> >> OK, but libMesh calls a library, defaulting to PETSc if
>>>>>>>>>>>>>>>> it's installed.
>>>>>>>>>>>>>>>> >> Which library are you using?
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >> PETSc-3.3
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>> > I recommend checking out the PETSc documentation (
>>>>>>>>>>>>>>>> > http://www.mcs.anl.gov/petsc/petsc-as/documentation/) and
>>>>>>>>>>>>>>>> > tutorials. But you'll want to start with -ksp_view to get
>>>>>>>>>>>>>>>> > the parameters PETSc is using.
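>>>>>>>>>>>>>>>> > For example (./your_app is just a placeholder for your
>>>>>>>>>>>>>>>> > executable): mpiexec -np 4 ./your_app -ksp_view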
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Vikram Garg
>>>>>>>>>>>>>>> Postdoctoral Associate
>>>>>>>>>>>>>>> Center for Computational Engineering
>>>>>>>>>>>>>>> Massachusetts Institute of Technology
>>>>>>>>>>>>>>> http://web.mit.edu/vikramvg/www/
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> http://www.runforindia.org/runners/vikramg
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Vikram Garg
>>>>>>>>>>>>> Postdoctoral Associate
>>>>>>>>>>>>> Center for Computational Engineering
>>>>>>>>>>>>> Massachusetts Institute of Technology
>>>>>>>>>>>>> http://web.mit.edu/vikramvg/www/
>>>>>>>>>>>>>
>>>>>>>>>>>>> http://www.runforindia.org/runners/vikramg
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Vikram Garg
>>>>>>>>>>> Postdoctoral Associate
>>>>>>>>>>> Center for Computational Engineering
>>>>>>>>>>> Massachusetts Institute of Technology
>>>>>>>>>>> http://web.mit.edu/vikramvg/www/
>>>>>>>>>>>
>>>>>>>>>>> http://www.runforindia.org/runners/vikramg
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Vikram Garg
>>>>>>>>> Postdoctoral Associate
>>>>>>>>> Center for Computational Engineering
>>>>>>>>> Massachusetts Institute of Technology
>>>>>>>>> http://web.mit.edu/vikramvg/www/
>>>>>>>>>
>>>>>>>>> http://www.runforindia.org/runners/vikramg
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Vikram Garg
>>>>>>> Postdoctoral Associate
>>>>>>> Center for Computational Engineering
>>>>>>> Massachusetts Institute of Technology
>>>>>>> http://web.mit.edu/vikramvg/www/
>>>>>>>
>>>>>>> http://www.runforindia.org/runners/vikramg
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Vikram Garg
>>>> Postdoctoral Associate
>>>> Center for Computational Engineering
>>>> Massachusetts Institute of Technology
>>>> http://web.mit.edu/vikramvg/www/
>>>>
>>>> http://www.runforindia.org/runners/vikramg
>>>>
>>>
>>>
>>
>>
>> --
>> Vikram Garg
>> Postdoctoral Associate
>> Center for Computational Engineering
>> Massachusetts Institute of Technology
>> http://web.mit.edu/vikramvg/www/
>>
>> http://www.runforindia.org/runners/vikramg
>>
>
>


-- 
Vikram Garg
Postdoctoral Associate
Center for Computational Engineering
Massachusetts Institute of Technology
http://web.mit.edu/vikramvg/www/

http://www.runforindia.org/runners/vikramg
_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
