While performing a single-copy, single-client write/read test with dd, we are 
finding that our Nehalem clients running
2.6.18-92.1.10.el5_lustre.1.6.5.1
write at about half the speed of our Nehalem clients running
2.6.18-53.1.13.el5_lustre.1.6.4.3, against three different Lustre file systems.
This is true even though the slower clients have the same processors and more 
RAM: 18 GB on the slow writers versus 12 GB on the fast writers. Both systems 
use OFED 1.3.1. All benchmarks we use perform better on the slow-write 
clients, and read speed from the Lustre file systems is comparable across all 
clients.
max_rpcs_in_flight and max_pages_per_rpc are at their defaults on both systems.
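For reference, this is how those per-OSC tunables can be inspected on a 1.6 
client (the /proc paths have one entry per OST connection; the "typical 
default" values in the comment are what 1.6 clients usually ship with, not a 
measurement from these systems):

```shell
# Per-OSC RPC tunables on a Lustre 1.6 client.
# Typical defaults: 8 RPCs in flight, 256 pages (1 MB on x86) per RPC.
cat /proc/fs/lustre/osc/*/max_rpcs_in_flight
cat /proc/fs/lustre/osc/*/max_pages_per_rpc
```

Raising max_rpcs_in_flight is a common experiment for single-stream write 
throughput, though it would not by itself explain a 2x gap between two client 
builds with identical settings.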
They are on the same IB network with the same QDR cards; IB connectivity has 
been verified with the IB utilities, and the two sets of clients are almost 
identical in bandwidth and latency.

We're also using the same modprobe.conf and openibd.conf files on both systems.
We use a 34 GB file on the 12 GB and 18 GB RAM systems and a 137 GB file on 
the 96 GB RAM system, so this is not a matter of caching in RAM.
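For completeness, the write half of the test can be sketched as below. The 
target path is an assumption (substitute your Lustre client mount point), and 
the small default size is only so the sketch runs anywhere; for a real run 
SIZE_MB must be raised so the file exceeds client RAM (e.g. 34816 for ~34 GB):

```shell
# Single-stream dd write test -- a minimal sketch, not our exact command line.
# TARGET is a hypothetical path; point it at a Lustre mount for a real test.
TARGET=${TARGET:-/tmp/ddtest}
# 32 MB so the sketch runs anywhere; use 34816 (~34 GB) so the file exceeds
# the 18 GB of client RAM and the page cache cannot mask the write rate.
SIZE_MB=${SIZE_MB:-32}
# conv=fsync forces data to the OSTs before dd reports its rate.
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" conv=fsync
```

For the read half, drop the page cache first (echo 3 > 
/proc/sys/vm/drop_caches as root) and dd the file back to /dev/null so the 
reads come from the OSTs rather than RAM.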

Are there known issues with our 2.6.18-92.1.10.el5-lustre-1.6.5.1 combination?

This is not a problem with the Lustre file systems themselves, as we get the 
same kind of results no matter which of our three Lustre systems the test 
writes to.

Here are the summaries from several runs of ost-survey on our new Lustre 
system. Please comment on the worst/best deltas for the read and write 
operations.
Number of Active OST devices : 96
Worst Read        38.167753   38.932928   39.006537   39.782153   38.717915
Best Read         61.704534   61.832461   63.284999   65.000491   61.836016
Read Average:     51.433847   51.281630   51.297278   51.582327   51.318410
Worst Write       34.311237   49.009757   55.272744   51.532331   51.816523
Best Write        94.001170   96.033483   93.401792   93.081544   91.030717
Write Average:    74.248683   71.831019   75.179863   74.723100   74.930529
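To put a number on those deltas, here is a quick sketch computing the 
worst-to-best spread per run (values copied from the summary above, assuming 
ost-survey's usual MB/s units):

```python
# Worst/best write spread per ost-survey run; values copied from the summary.
worst_write = [34.311237, 49.009757, 55.272744, 51.532331, 51.816523]
best_write = [94.001170, 96.033483, 93.401792, 93.081544, 91.030717]

for run, (w, b) in enumerate(zip(worst_write, best_write), start=1):
    # Percentage by which the slowest OST trails the fastest in this run.
    delta_pct = 100.0 * (b - w) / b
    print(f"run {run}: worst {w:.1f} MB/s, best {b:.1f} MB/s, delta {delta_pct:.1f}%")
```

The write spread works out to roughly 40-63% per run, and run 1's 34 MB/s 
worst write stands out against the ~50 MB/s of the other runs, which may be 
worth a per-OST look.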

/bob


Bob Hayes
System Administrator
SSG-DRD-DP
Office:  253-371-3040
Cell:     253-441-5482
e-mail: [email protected]

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss