Anyone know if there is something inherently slow about O_DIRECT writes on Linux?
I am running Lustre 1.6.5.1 on a few Dell 2950s. Each OST is capable of 300 MB/s and I have 8 OSTs on my FS. Using buffered I/O I can max out the bandwidth fine, but as soon as I try a single-file O_DIRECT write I get only 135 MB/s, no matter what stripe count, RPCs-in-flight setting, or Linux sector size I use. I can get a bit more bandwidth with a larger stripe size, but that only takes me up to 200 MB/s.

I can't help wondering if something is holding up the IOPS on a single-client, single-file write. I'm trying to work out whether it's the Lustre client or just the way Linux handles the I/O. Does anyone have any settings I might be forgetting on the Linux server/client? I have /sys/block/sd*/queue/max_sectors_kb set and the elevator set to noop. I can't think of anything on the Lustre side, since I'm not even using more than 1-2 RPCs in flight when running.

Any help would be really appreciated,
-Alex

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
