Confirmed: 1.6.3 striped write performance is poor. With 1.6.2, I see this:
[EMAIL PROTECTED] ~]$ lfs setstripe /lustre/162 0 0 3
[EMAIL PROTECTED] ~]$ lmdd.linux of=/lustre/162 bs=1024k time=180 fsync=1
157705.8304 MB in 180.0225 secs, 876.0341 MB/sec

I.e. 1.6.2 nicely joined the aggregate bandwidth of three OSTs, at 300 MB/sec each, into almost 900 MB/sec.

Andrei.

On Nov 26, 2007 4:58 PM, Andrei Maslennikov <[EMAIL PROTECTED]> wrote:
> On Nov 26, 2007 3:32 PM, Robin Humble <[EMAIL PROTECTED]> wrote:
> > >> I'm seeing what can only be described as dismal striped write
> > >> performance from lustre 1.6.3 clients :-/
> > >> 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from cvs a couple
> > >> of days ago) are also terrible.
>
> I have 3 OSTs, each capable of delivering 300+ MB/sec for large streaming
> writes with a 1 MB blocksize. On one client, with one OST, I can see almost
> all of this bandwidth over InfiniBand. If I run three processes in parallel
> on this very client, each writing into a separate OST, I reach 520 MB/sec
> aggregate (3 streams at approx 170+ MB/sec each).
>
> If I try to stripe over these three OSTs on this client, the performance of
> one stream drops to 60+ MB/sec. Changing the stripe size to a smaller one
> (1/3 MB) makes things worse. Writing with larger block sizes (9 MB, 30 MB)
> does not improve things. Increasing the stripe size to 25 MB allows the
> stream to approach the speed of a single OST, as one would expect (blocks
> are round-robined over all three OSTs), but never more. Zeroing checksums
> on the client does not help.
>
> Will now downgrade the client to 1.6.2 to see if this helps.
>
> Andrei.
_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
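[Not part of the original thread: a minimal sketch of the round-robin stripe layout being discussed, assuming the standard RAID-0-style mapping that Lustre uses for striped files (stripe index = (offset // stripe_size) % stripe_count). It illustrates why a 25 MB stripe size makes a single 1 MB-blocksize stream behave like a single-OST write, while a 1 MB stripe size makes every consecutive block hit a different OST. The function name `ost_for_offset` is illustrative, not a Lustre API.]

```python
# Sketch (assumption: plain round-robin striping, as in the setstripe above):
# which OST index a given byte offset maps to for a striped file.

def ost_for_offset(offset, stripe_size, stripe_count):
    """Return the 0-based OST index holding the byte at `offset`."""
    return (offset // stripe_size) % stripe_count

MB = 1 << 20
stripe_count = 3  # striping over three OSTs, as in the thread

# With a 1 MB stripe size, consecutive 1 MB writes alternate over all
# three OSTs, so a single stream is constantly switching targets:
small = [ost_for_offset(i * MB, 1 * MB, stripe_count) for i in range(6)]
print(small)  # [0, 1, 2, 0, 1, 2]

# With a 25 MB stripe size, 25 consecutive 1 MB writes land on the same
# OST before moving on, so the stream looks like a single-OST write most
# of the time -- matching the observation that 25 MB stripes recover
# roughly single-OST speed, but never more:
large = [ost_for_offset(i * MB, 25 * MB, stripe_count) for i in range(50)]
print(large[23:27])  # [0, 0, 1, 1]
```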
