Here's a plot after applying John's "avoid serializing to all processes"
patch:

https://drive.google.com/file/d/0B8csupg5nQaaVVpOUFRHZ0FJX28/edit?usp=sharing

Cody


On Mon, Nov 11, 2013 at 4:30 PM, Derek Gaston <fried...@gmail.com> wrote:

> On Mon, Nov 11, 2013 at 5:23 PM, Cody Permann <codyperm...@gmail.com>wrote:
>
>> This is a run with 2 systems in it - the first one has 40 variables and
>> totals about 25 million DoFs... the second one has two variables and comes
>> out to a little over 1 million DoFs.  This job is spread out across 160 MPI
>> processes (and we're going to be looking at the aggregate memory across all
>> of those).
>>
>> Quick correction:  We are only looking at the memory of the rank 0 MPI
>> process in this graph.  The memory profiles of all the other ranks pretty
>> much match this one, though.
>>
>
> Quicker correction: This is all of the memory for 4 processes out of the
> 160 aggregated together (because only 4 were running on the node the memory
> logger was launched on).
>
> The memory logger can aggregate across all processes (even across nodes)
> but only when run using our batch job system (PBS) which wasn't being
> utilized for these runs....
>
> Derek
>
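For context on what "aggregating" per-rank memory means here, a minimal
sketch (in Python, with hypothetical names - this is not the actual memory
logger) of sampling a process's resident set size from a Linux
/proc/<pid>/status file and summing samples across ranks; in an MPI setting
the summation step would be a reduce to rank 0 (e.g. with mpi4py,
comm.reduce(local_kib, op=MPI.SUM, root=0)):

```python
def parse_vmrss_kib(status_text):
    """Extract the VmRSS value (in KiB) from /proc/<pid>/status-style text.

    The relevant line looks like 'VmRSS:     2048 kB'.
    """
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    return 0  # field absent (e.g. process swapped out entirely)


def aggregate_kib(per_rank_samples):
    """Sum per-rank memory samples into one aggregate figure.

    In practice each rank would read its own /proc/self/status and the
    sum would be computed via an MPI reduction rather than locally.
    """
    return sum(per_rank_samples)


# Example: four ranks on one node, as in the plot described above.
samples = [parse_vmrss_kib("VmPeak:\t4096 kB\nVmRSS:\t2048 kB\n")
           for _ in range(4)]
total = aggregate_kib(samples)
```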
_______________________________________________
Libmesh-devel mailing list
Libmesh-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-devel
