On 2012-05-04, at 11:42 PM, Colin Faber wrote:
> Yes, this is somewhat expected behavior.  More client memory can help this as 
> it allows for more local caching. 

Actually, in this case the problem is that the client is repeatedly opening and 
closing the executable file, and that forces an RPC to the MDS each time, 
because the MDS has to prevent a file that is open for execution from being 
written to or truncated.

Lustre clients do have the ability to get a DLM lock from the MDS so that they 
can cache an open reference, but it isn't enabled for normal application opens, 
only for NFS.  I don't know how to enable it per se, but Oleg does.
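Independent of any Lustre tuning, one workaround is to stage the binary onto node-local storage (e.g. /tmp) once, and exec the local copy inside the loop, so the repeated opens never touch the MDS.  A rough sketch -- the paths and the run_local helper are just illustrative, not anything provided by Lustre:

```shell
# run_local: copy an executable from a Lustre path to node-local
# storage once, then run the local copy N times, passing along any
# remaining arguments.  Only the single cp touches the MDS.
run_local() {
    src=$1; dest=$2; n=$3; shift 3
    cp "$src" "$dest" && chmod +x "$dest"
    i=1
    while [ "$i" -le "$n" ]; do
        "$dest" "$@"
        i=$((i+1))
    done
}

# e.g. run_local /home/user/bin/program /tmp/program 10000 arg0 arg1
```

The same idea works from inside R: file.copy() the binary to /tmp once before the loop and call system() on the local path.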



> -----Original message-----
> From: Christian Becker <[email protected]>
> To: "[email protected]" <[email protected]>
> Sent: Fri, May 4, 2012 22:34:26 MDT
> Subject: [Lustre-discuss] Frequent system calls cause high load on meta data  
> server
> 
> Dear colleagues,
> 
> one of our users calls an external program via the system() function very 
> often from within his R application (/home is a Lustre file system):
> 
> 
> for (i in 1:10000) {
>   system("/home/user/bin/program arg0 arg1 ....")
> }
> 
> The runtime for the external program is only a few seconds. With hundreds 
> of these jobs running, the load on the metadata server goes up to 25 
> (2-way quad core with Lustre 1.8.3).
> 
> 
> Is such behavior known, and if so, how can I avoid it?
> 
> 
> best regards,
> Christian
> 
> 
> _______________________________________________
> Lustre-discuss mailing list
> [email protected]
> http://lists.lustre.org/mailman/listinfo/lustre-discuss


Cheers, Andreas
--
Andreas Dilger                       Whamcloud, Inc.
Principal Lustre Engineer            http://www.whamcloud.com/




