Hi John,

Thank you for the information. I am using a recent version of libMesh.

I also finally figured out the culprit last weekend: it is the hypre
preconditioner used on one of my fieldsplit blocks.
BoomerAMG allocates a very large amount of memory during KSPSolve().
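In case anyone hits the same issue: a sketch of PETSc runtime options that can help confirm the memory use and swap out BoomerAMG on the offending split. The split name "0" below is an assumption; substitute your actual fieldsplit names:

```
# Print a memory summary alongside the log summary
-log_view -memory_view

# Replace hypre/BoomerAMG on one split with a cheaper preconditioner
# (e.g. block Jacobi with ILU on each block)
-fieldsplit_0_pc_type bjacobi
-fieldsplit_0_sub_pc_type ilu

# Or keep BoomerAMG but raise the strong threshold, which typically
# produces sparser coarse operators and lower memory use in 3D
-fieldsplit_0_pc_type hypre
-fieldsplit_0_pc_hypre_type boomeramg
-fieldsplit_0_pc_hypre_boomeramg_strong_threshold 0.7
```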

-Xujun

On Mon, Jun 13, 2016 at 11:48 AM, John Peterson <jwpeter...@gmail.com>
wrote:

>
>
> On Thu, Jun 9, 2016 at 3:29 PM, Xujun Zhao <xzha...@gmail.com> wrote:
>
>> Derek,
>>
>> Excellent analysis! It really helps.
>> I looked further at the PETSc log summary when running on 1 CPU: the max
>> memory PETSc allocated is about 22.2 GB (close to what you predicted).
>> However, the total memory usage is up to 100 GB, which is much more than
>> expected. I think there must be something wrong in my application code that
>> takes huge memory. I will double-check it. Thanks again.
>>
>
> Was your memory spike during a solution output phase?
>
> This was an issue in the past, but we fixed some things (mainly avoiding
> broadcasting copies of solution vectors to all procs) and it shouldn't be
> as bad of a problem as it once was.
>
> If you are using an older version of libmesh, however, you might not have
> those fixes...
>
> --
> John
>
_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users
