Makia Minich wrote:
How much luck did you have with this tuning and OFED's SRP? What performance are you seeing? We did quite a bit of testing with this option but saw very little improvement in performance (if I remember correctly, the block sizes did increase, but throughput was still down).

That's what I saw as well. I eventually got great write performance with /dev/sg* devices by tuning srp_sg_tablesize (it defaults to 12, which sent 48KB I/Os to the array), but I could never get /dev/sd* devices to perform, and reads were always stuck at 128KB I/Os no matter what I passed in to srp_sg_tablesize.
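
For reference, here's roughly how we set it; a minimal sketch assuming the
OFED ib_srp module, and 256 is just an example value, not a tuned
recommendation:

    # reload the SRP initiator with a larger scatter/gather table;
    # the default of 12 entries is what capped our writes at 48KB
    modprobe -r ib_srp
    modprobe ib_srp srp_sg_tablesize=256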



On Friday 18 May 2007 10:38:29 am chas williams - CONTRACTOR wrote:
In message <[EMAIL PROTECTED]>, "John R. Dunning" writes:
I tried incorporating the blkdev-max-io-size-selection and
increase-sglist-size patches from cfs, but that didn't really help; my
reads are still maxing out at 256K.
the srp initiator creates a virtual scsi device driver.  this virtual
device driver has a .max_sectors parameter associated with it.  you can
tune this with the max_sect= option during login for the openfabrics stack.
no idea how this is tuned on ibgold.
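
for what it's worth, with the openfabrics stack max_sect goes into the
target login string written to sysfs.  something like the following;
every id below is a placeholder, substitute the values your target
reports:

    # add an SRP target with a larger max_sect (in 512-byte sectors);
    # id_ext/ioc_guid/dgid/service_id here are made-up placeholders
    tgt="id_ext=200400A0B8114527,ioc_guid=0002c90200402bd4"
    tgt="$tgt,dgid=fe800000000000000002c90200402bd5,pkey=ffff"
    tgt="$tgt,service_id=0002c90200402bd4,max_sect=65535"
    echo "$tgt" > /sys/class/infiniband_srp/srp-mthca0-1/add_target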

take a look at
/sys/block/sd<whatever>/queue/{max_hw_sectors_kb,max_sectors_kb}
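
e.g. (sdb here is just a placeholder for whichever disk the srp
initiator created):

    # what the driver can do vs. what the queue is currently set to
    cat /sys/block/sdb/queue/max_hw_sectors_kb
    cat /sys/block/sdb/queue/max_sectors_kb

    # max_sectors_kb can be raised at runtime, up to the hw limit
    echo 512 > /sys/block/sdb/queue/max_sectors_kb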

if you aren't using direct i/o, use direct i/o.  you could also just tune
the page size of the ddn to 256k.
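
a quick way to check is dd with direct i/o; the block size you ask for
is then what actually reaches the scsi layer.  sdb, the 1M block size
and the count are placeholders, and note the write test destroys
whatever is on the disk:

    # sequential write, then read, bypassing the page cache
    dd if=/dev/zero of=/dev/sdb bs=1M count=1024 oflag=direct
    dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct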


