On Tue, Aug 30, 2011 at 12:34 PM, John Peterson <[email protected]> wrote:
> On Tue, Aug 30, 2011 at 12:23 PM, robert <[email protected]> wrote:
>>
>>> 32 nodes or 32 cores?  I don't know the details of your cluster so it
>>> may be obvious, but make sure you aren't accidentally running too many
>>> MPI processes on a given node.
>>>
>> As far as I understood it, it is:
>>
>> 1 node = 4 cores
>>
>> 4 GB/node
>
> This doesn't match the output of the top command you posted below.
> The total memory given there is 31 985 140 kilobytes = 30.5034065
> gigabytes.
>
> Does the cluster you are on have a public information web page?  That
> would probably help clear things up...

The 32 GB from your top command is for the head node.

It does appear that there is 4 GB of physical memory on each compute node.
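
If you want to double-check how the scheduler is placing processes and
how much physical memory each node really has, something like this
quick MPI check might help (untested sketch; the sysconf values assume
Linux):

    // check_placement.cpp -- each rank reports its host name and the
    // node's total physical memory, so you can confirm you aren't
    // oversubscribing a 4 GB compute node.
    #include <mpi.h>
    #include <unistd.h>
    #include <cstdio>

    int main (int argc, char** argv)
    {
      MPI_Init (&argc, &argv);

      int rank;
      MPI_Comm_rank (MPI_COMM_WORLD, &rank);

      char host[MPI_MAX_PROCESSOR_NAME];
      int len;
      MPI_Get_processor_name (host, &len);

      // Total physical memory on this node (Linux/glibc).
      long pages     = sysconf (_SC_PHYS_PAGES);
      long page_size = sysconf (_SC_PAGE_SIZE);
      double gb = double(pages) * double(page_size) / (1024.*1024.*1024.);

      std::printf ("rank %d on %s: %.1f GB physical memory\n", rank, host, gb);

      MPI_Finalize ();
      return 0;
    }

Run it with the same mpirun/qsub settings you use for your real job and
make sure no host name shows up more often than you expect.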


> It is possible to run 1, 2 or 4 processes per node. If I run 2 or 4 processes 
> I get:
> Error! ***Memory allocation failed for SetUpCoarseGraph: gdata. Requested size: 107754020 bytes
> Error! ***Memory allocation failed for SetUpCoarseGraph: gdata. Requested size: 107754020 bytes
> Error!

This function is in Metis, so you are running out of memory during the
mesh partitioning stage.

For one process, it's possible you are not running out of memory but
are going into swap... which is making the code run really slowly.
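
One cheap way to see how close each process actually gets to the 4 GB
limit is to print its memory high-water mark at a few points (e.g.
right after partitioning and after refinement).  A rough sketch, using
plain getrusage:

    // Untested sketch of a helper that prints the calling process's
    // peak resident set size.
    #include <sys/resource.h>
    #include <cstdio>

    void report_peak_memory (const char* label, int rank)
    {
      struct rusage usage;
      getrusage (RUSAGE_SELF, &usage);

      // ru_maxrss is reported in kilobytes on Linux.
      std::printf ("[%d] %s: peak RSS = %.1f MB\n",
                   rank, label, usage.ru_maxrss / 1024.0);
    }

If the peak is already near 4 GB with one process per node, adding a
second process will almost certainly push you over.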

I'd try recompiling libmesh with parallel mesh enabled... but I'm
still surprised that 4 GB is not enough memory.
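
Roughly, that means building libmesh with its distributed (ParallelMesh)
support and using that mesh class in your application.  An untested
sketch, from memory -- the configure switch (I believe it is
--enable-parmesh), header names and constructor arguments may differ in
your libmesh version, so check ./configure --help and the shipped
examples:

    // Untested sketch: using a distributed ParallelMesh instead of a
    // fully replicated mesh, so each processor only stores its own
    // portion of the ~3M-element mesh.
    #include "libmesh.h"
    #include "parallel_mesh.h"

    using namespace libMesh;  // drop if your version predates the namespace

    int main (int argc, char** argv)
    {
      LibMeshInit init (argc, argv);

      // Constructor may take a dimension or communicator argument
      // depending on the libmesh version.
      ParallelMesh mesh;
      mesh.read ("input_mesh.xda");   // hypothetical input file name

      // ... the rest of the setup (EquationSystems, refinement, solves)
      // should look the same as with the serial mesh.
      return 0;
    }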

Are the 2,994,336 elements in the mesh you posted before or after
uniform refinement?

-- 
John
