At CGG we have made further progress investigating poor IO performance
with small IO sizes under Lustre 1.4.6.x and 1.4.7.x, and we have filed
a new bug on this issue, which I shall summarise here.

Our clients run RHEL 3.4 (kernel 2.4.21), and we believe the issue is
that ll_writepage is called too aggressively by the kernel VM daemons to
release page-cache memory. As a result, the cache is flushed within a
few milliseconds of being written, so small IOs are not aggregated into
large RPCs.
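One way to see whether small writes are being aggregated is the
pages-per-RPC histogram that each OSC exports under /proc on Lustre
1.4.x clients. This is a rough sketch; the exact proc paths and output
format may differ between Lustre versions, so treat it as illustrative:

```shell
# Clear the per-OSC RPC statistics (run as root on a Lustre client),
# then run the small-IO workload and inspect the histogram. If writes
# are being flushed too early, the "pages per rpc" counts cluster at
# 1-2 pages rather than the maximum (256 x 4 KB pages = 1 MB RPCs).
for f in /proc/fs/lustre/osc/*/rpc_stats; do
    echo 0 > "$f"    # reset the counters
done

# ... run the workload here ...

grep -A 20 'pages per rpc' /proc/fs/lustre/osc/*/rpc_stats
```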

We had some success tuning the VM tunables, but the problem eventually
returns. There may be later 2.4 VM patches we can apply, or we may
simply move to a 2.6 kernel, which does not appear to suffer from the
same problem.
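For reference, the kind of tuning we tried looks like the following.
The values are illustrative rather than a recommendation:
/proc/sys/vm/bdflush is the stock 2.4 writeback interface, and
vm.pagecache is a Red Hat addition in the RHEL 3 kernel:

```shell
# Make bdflush/kupdated less eager to write back dirty pages (2.4
# kernels). The bdflush fields are: nfract ndirty nrefill nref_dirt
# interval age_buffer nfract_sync nfract_stop_bdflush (plus a dummy).
# Raising nfract/nfract_sync and age_buffer lets dirty pages sit
# longer, giving the Lustre client time to build full-size RPCs.
echo "60 500 64 256 1000 6000 80 90 0" > /proc/sys/vm/bdflush

# RHEL 3 only: raise the page-cache limits (min, borrow, max, as
# percentages of memory) so the VM reclaims cached file pages later.
echo "10 50 100" > /proc/sys/vm/pagecache
```

In our experience this delays the premature flushing for a while, but
as noted above the problem returns under sustained memory pressure.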



J Belshaw CGG Redhill

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Bojanic
Sent: 15 November 2006 17:36
To: Richard Shane Canon
Cc: [email protected]
Subject: [Lustre-discuss] Read performance issues at ORNL

Shane,

Regarding the read performance issues you mentioned today, could you  
take a look at this bug, resolved in Lustre 1.4.7, to see if it  
resembles the problem you're seeing:

https://bugzilla.clusterfs.com/show_bug.cgi?id=10265

I've heard a few reports today at SC about slow read performance use  
cases in 1.4.6 and 1.4.7, which I'd like to get on top of  
immediately. If this is not a match for your case, please file a  
Bugzilla ticket and let me know the number. I'd like to hand this off  
promptly to a senior IO specialist.

Thanks,
Peter

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss

