Hi Ben,

Could you please give an example of some of the things you
typically cache in libMesh?


thanks,
df



On Thu, 24 Jun 2010, Kirk, Benjamin (JSC-EG311) wrote:

> In my case I cache what I can, but still the behavior is as follows:
>
> 1) calculate the residual
> 2) calculate the Jacobian; solve for the update using (1) as the rhs
> 3) compute the residual again and check against (1)
>
> In my case it is common to do this once at each time step - that is,
> solve the nonlinear problem very approximately.
>
> In this case, why bother with the second residual evaluation? Is (3)
> necessary?
>
> (Yes, Jed, I'm asking a question on the libMesh mailing list.)
>
> Thanks,
>
> -Ben
>
> On Jun 24, 2010, at 7:09 PM, "David Fuentes" <[email protected]>
> wrote:
>
>>
>>
>> Thanks Jed. I can't seem to find a stored profile; I'd have to
>> recreate one. But I'm thinking roughly twice as many function
>> evaluations as Jacobian evaluations.
>>
>>
>>
>>
>>
>> On Thu, 24 Jun 2010, Jed Brown wrote:
>>
>>> On Thu, 24 Jun 2010 14:28:03 -0500, David Fuentes <[email protected]>
>>> wrote:
>>>> On 6/24/10, Jed Brown <[email protected]> wrote:
>>>>> On Thu, 24 Jun 2010 13:59:21 -0500 (CDT), David Fuentes
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> I typically use PETSc nonlinear solvers in 3D, and my bottleneck
>>>>>> is typically in the assembly: SNESSolve takes ~10% of the time,
>>>>>> ~50% is in the Jacobian, ~30% in the residual, and the rest is
>>>>>> distributed.
>>>>>
>>>>> So the linear solves are really easy.  Are you caching a lot of
>>>>> stuff in the residual evaluation?  It's not normal for it to cost
>>>>> so much compared to Jacobian assembly unless you don't use an
>>>>> analytic Jacobian (e.g. -snes_mf_operator).
>>>>
>>>> Not sure, what do you typically cache ?
>>>
>>> I cache a local linearization at quadrature points, but that is for
>>> fast
>>> matrix-free Jacobian application of high-order operators.  It doesn't
>>> pay off in terms of time or storage for Q1 or P1, even in 3D.  But if
>>> you had e.g. an expensive constitutive relation involving lookup
>>> tables,
>>> and you weren't overly concerned about using the minimum possible
>>> memory, but didn't want to do the extra work to integrate and
>>> insert the
>>> element matrices (because there was a high chance of the line search
>>> shortening the step), then you might cache the local linearization
>>> even
>>> for lowest-order elements.  It sounds like this is not the case.
>>>
>>> What does -log_summary show?  Are you doing a lot more function
>>> evaluations than Jacobian assemblies?  It's surprising to me that
>>> they would cost almost the same amount per call; perhaps there is a
>>> hot spot somewhere in your residual evaluation.
>>>
>>> Jed
>>>
>>
>> ------------------------------------------------------------------------------
>> ThinkGeek and WIRED's GeekDad team up for the Ultimate
>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the
>> lucky parental unit.  See the prize list and enter to win:
>> http://p.sf.net/sfu/thinkgeek-promo
>> _______________________________________________
>> Libmesh-users mailing list
>> [email protected]
>> https://lists.sourceforge.net/lists/listinfo/libmesh-users
>
