I tried -ksp_gmres_restart 5000, with the residual as below:

Linear solver converged at step: 53, final residual: 1.5531e-05
begin solve: iteration #54

Linear solver converged at step: 54, final residual: 1.55013e-05
begin solve: iteration #55
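
(For reference: the -ksp_view output quoted below shows the solver uses
left preconditioning with the PRECONDITIONED norm type, so the residuals
printed here are preconditioned ones. Adding -ksp_monitor_true_residual
would also print the true residual norm ||b - Ax|| at each iteration, to
check whether that one stagnates as well.)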




On Thu, May 29, 2014 at 6:01 PM, Vikram Garg <[email protected]>
wrote:

> Try using a higher number of restart steps, say 500,
>
> -ksp_gmres_restart 500
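>
> These solver options just go on your application's command line at run
> time. For example (the executable name below is only a placeholder for
> your libMesh app):
>
>   ./your_app-opt -ksp_gmres_restart 500 -ksp_monitor
>
> or, equivalently, through the PETSC_OPTIONS environment variable:
>
>   export PETSC_OPTIONS="-ksp_gmres_restart 500 -ksp_monitor"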
>
> Thanks.
>
>
> On Thu, May 29, 2014 at 7:00 PM, walter kou <[email protected]> wrote:
>
>> The output from -ksp_monitor_singular_value:
>>
>> 250 KSP Residual norm 8.867989901517e-05 % max 1.372837955534e+05 min
>> 1.502965355599e+01 max/min 9.134195611495e+03
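>>
>> (If I read this right, the max/min column is an estimate of the
>> condition number of the preconditioned operator, i.e. here
>> 1.372837955534e+05 / 1.502965355599e+01 ≈ 9.1e+03.)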
>>
>>
>>
>> On Thu, May 29, 2014 at 5:54 PM, Vikram Garg <[email protected]>
>> wrote:
>>
>>> Try the following solver options:
>>>
>>> -pc_type lu -pc_factor_mat_solver_package superlu
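>>>
>>> For example (the executable name is only a placeholder for your
>>> libMesh app):
>>>
>>>   ./your_app-opt -pc_type lu -pc_factor_mat_solver_package superlu -ksp_view
>>>
>>> One caveat: SuperLU itself is a sequential solver, so on several MPI
>>> processes you would want its parallel variant instead:
>>>
>>>   -pc_type lu -pc_factor_mat_solver_package superlu_dist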
>>>
>>>
>>> What was the output from -ksp_monitor_singular_value?
>>>
>>>
>>> Thanks.
>>>
>>>
>>> On Thu, May 29, 2014 at 6:52 PM, walter kou <[email protected]>
>>> wrote:
>>>
>>>> Hi Vikram,
>>>>
>>>> How do I try using just LU or SuperLU? I am pretty ignorant about
>>>> setting the proper KSP options on the command line.
>>>>
>>>> Could you point me to any introductory materials on this?
>>>>
>>>> Thanks so much.
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, May 28, 2014 at 11:54 AM, Vikram Garg <[email protected]>
>>>> wrote:
>>>>
>>>>> Hey Walter,
>>>>>                      Have you tried using just LU or SuperLU? You
>>>>> might also want to check what the output of
>>>>> -ksp_monitor_singular_value is, and increase the GMRES restart steps.
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>> On Mon, May 26, 2014 at 1:57 PM, walter kou <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hi all,
>>>>>> I ran a larger case with 200 elements, and found that for each
>>>>>> linear solve in the iteration, system.final_linear_residual() is
>>>>>> about 0.5%.
>>>>>>
>>>>>> 1) Is system.final_linear_residual() the norm of r = b - A x*, where
>>>>>> x* is the computed solution? Is that right?
>>>>>>
>>>>>> 2) The final residual seems too big, so the equation is not solved
>>>>>> well (here |b| is about 1e-4). Does anyone have suggestions for
>>>>>> tuning the solvers for Ax = b? My case here is nonlinear elasticity,
>>>>>> and A is almost symmetric positive definite (only the components
>>>>>> affected by boundary conditions break the symmetry).
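>>>>>>
>>>>>> (In PETSc terms, what I mean in question 1 is roughly the following
>>>>>> sketch; the variable names are only illustrative.)
>>>>>>
>>>>>>   Mat A;                /* assembled system matrix             */
>>>>>>   Vec x, b, r;          /* solution, right-hand side, residual */
>>>>>>   PetscReal rnorm, bnorm;
>>>>>>   VecDuplicate(b, &r);
>>>>>>   MatMult(A, x, r);     /* r = A*x     */
>>>>>>   VecAYPX(r, -1.0, b);  /* r = b - A*x */
>>>>>>   VecNorm(r, NORM_2, &rnorm);
>>>>>>   VecNorm(b, NORM_2, &bnorm);
>>>>>>   /* relative true residual = rnorm / bnorm */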
>>>>>>
>>>>>>
>>>>>> Also, following Paul's suggestion, I used -ksp_view and found my
>>>>>> solver information is as below:
>>>>>>
>>>>>> KSP Object: 4 MPI processes
>>>>>>   type: gmres
>>>>>>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>>>>>> Orthogonalization with no iterative refinement
>>>>>>     GMRES: happy breakdown tolerance 1e-30
>>>>>>   maximum iterations=250
>>>>>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>>>>>>   left preconditioning
>>>>>>   using nonzero initial guess
>>>>>>   using PRECONDITIONED norm type for convergence test
>>>>>> PC Object: 4 MPI processes
>>>>>>   type: bjacobi
>>>>>>     block Jacobi: number of blocks = 4
>>>>>>     Local solve is same for all blocks, in the following KSP and PC
>>>>>> objects:
>>>>>>   KSP Object:  (sub_)   1 MPI processes
>>>>>>     type: preonly
>>>>>>     maximum iterations=10000, initial guess is zero
>>>>>>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>>>>>     left preconditioning
>>>>>>     using NONE norm type for convergence test
>>>>>>   PC Object:  (sub_)   1 MPI processes
>>>>>>     type: ilu
>>>>>>       ILU: out-of-place factorization
>>>>>>       0 levels of fill
>>>>>>       tolerance for zero pivot 2.22045e-14
>>>>>>       using diagonal shift to prevent zero pivot
>>>>>>       matrix ordering: natural
>>>>>>       factor fill ratio given 1, needed 1
>>>>>>         Factored matrix follows:
>>>>>>           Matrix Object:           1 MPI processes
>>>>>>             type: seqaij
>>>>>>             rows=324, cols=324
>>>>>>             package used to perform factorization: petsc
>>>>>>             total: nonzeros=16128, allocated nonzeros=16128
>>>>>>             total number of mallocs used during MatSetValues calls =0
>>>>>>               not using I-node routines
>>>>>>     linear system matrix = precond matrix:
>>>>>>     Matrix Object:    ()     1 MPI processes
>>>>>>       type: seqaij
>>>>>>       rows=324, cols=324
>>>>>>       total: nonzeros=16128, allocated nonzeros=19215
>>>>>>       total number of mallocs used during MatSetValues calls =0
>>>>>>         not using I-node routines
>>>>>>   linear system matrix = precond matrix:
>>>>>>   Matrix Object:  ()   4 MPI processes
>>>>>>     type: mpiaij
>>>>>>     rows=990, cols=990
>>>>>>     total: nonzeros=58590, allocated nonzeros=64512
>>>>>>     total number of mallocs used during MatSetValues calls =0
>>>>>>       not using I-node (on process 0) routines
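>>>>>>
>>>>>> (So, if I read this output correctly, I am on PETSc's parallel
>>>>>> defaults: GMRES with restart 30 and a block Jacobi preconditioner
>>>>>> with one ILU(0) block per process, rtol 1e-8, at most 250
>>>>>> iterations. Presumably these are the knobs that -ksp_type,
>>>>>> -ksp_gmres_restart, -ksp_rtol, -ksp_max_it and -pc_type would
>>>>>> change.)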
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Walter
>>>>>>
>>>>>>
>>>>>> On Thu, May 22, 2014 at 12:18 PM, Paul T. Bauman <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > On Thu, May 22, 2014 at 12:11 PM, walter kou <[email protected]> wrote:
>>>>>> >
>>>>>> >>> OK, but libMesh calls a library, defaulting to PETSc if it's
>>>>>> >>> installed. Which library are you using?
>>>>>> >>
>>>>>> >> PETSc-3.3
>>>>>> >>
>>>>>> >
>>>>>> > I recommend checking out the PETSc documentation (
>>>>>> > http://www.mcs.anl.gov/petsc/petsc-as/documentation/) and tutorials.
>>>>>> > But you'll want to start with -ksp_view to get the parameters PETSc
>>>>>> > is using.
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
> --
> Vikram Garg
> Postdoctoral Associate
> Center for Computational Engineering
> Massachusetts Institute of Technology
> http://web.mit.edu/vikramvg/www/
>
> http://www.runforindia.org/runners/vikramg
>