On 09/25/2009 05:19 PM, Vasu Dev wrote:
> Sometimes the max IO retries are exhausted under heavy stress with
> large IOs (>512K), which leads to the following IO errors.
>
> [ 2522.907984] sd 3:0:2:50: [sdu] Unhandled error code
> [ 2522.907990] sd 3:0:2:50: [sdu] Result: hostbyte=DID_ERROR 
> driverbyte=DRIVER_OK
> [ 2522.907995] sd 3:0:2:50: [sdu] CDB: Write(10): 2a 00 00 00 08 00 00 04 00 
> 00
> [ 2522.908012] end_request: I/O error, dev sdu, sector 2048
>
> The stack doesn't have any direct flow control into the FCoE layer
> for congestion caused by temporary conditions such as longer link
> pauses or end-to-end delay between I-T-L. Under large-IO stress
> tests this sometimes exhausts the max SCSI IO retries, leading to
> the above IO errors after thrashing through several retry attempts.
>
> Currently the stack is configured with .sg_tablesize set to SG_ALL
> (128), which limits large IOs to 512K. This patch reduces it to 64
> to cut the cost of retrying any failed large IO, since it caps
> large IOs at 256K max.

You probably get 256K with 64 sgs because you get lots of pages that are 
not mergeable.

With sg_tablesize = 128 and ENABLE_CLUSTERING set, you can get an IO of 
128 * 64K; with 64 you can get 64 * 64K. 64K is the default max segment 
size set by the block layer when a request_queue is created, but it is 
overridable in your sht->slave_configure callout.
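A minimal sketch of what such a callout might look like, assuming the 2.6-era SCSI/block APIs; the function name is hypothetical and this is not from the patch under discussion:

```c
/* Hypothetical example: capping per-request size from slave_configure
 * instead of shrinking sg_tablesize. Not from the posted patch. */
#include <scsi/scsi_device.h>
#include <linux/blkdev.h>

static int example_slave_configure(struct scsi_device *sdev)
{
	/* Limit a single request to 512 sectors * 512 bytes = 256K,
	 * regardless of how many sg entries the request would need. */
	blk_queue_max_sectors(sdev->request_queue, 512);
	return 0;
}
```

This keeps the full sg table available for scattered pages while still bounding the total bytes per IO.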

If you want to limit the IO size, you should set max_sectors instead.

Also, you do not have to set max_sectors in the sht; you can do it from 
userspace in /sys/block/sdX/queue. There are two limits, a hw max and a 
max sectors.
- hw max sectors is the limit that is set in the sht. For pass-through 
and tape (SG IO, sg, scsi_ioctl, st) this is the limit that is used. You 
normally want a larger value for pass-through commands, and for tape (yes, 
it will probably outlive fcoe :)) you want large IO sizes.

- max sectors is a FS/block limit. It is used for IO coming through the 
non-pass-through paths, e.g. a read/write through a FS or with dd.
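For reference, the two limits show up as sysfs files under the queue directory (sdX below is a placeholder for the actual device):

```shell
# hw max: ceiling set by the driver/sht; used for pass-through IO
cat /sys/block/sdX/queue/max_hw_sectors_kb

# max sectors: limit used for FS/block IO; writable at runtime
cat /sys/block/sdX/queue/max_sectors_kb

# e.g. cap FS-path IOs at 256K without touching the driver
echo 256 > /sys/block/sdX/queue/max_sectors_kb
```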
_______________________________________________
devel mailing list
[email protected]
http://www.open-fcoe.org/mailman/listinfo/devel
