Roy Sigurd Karlsbakk wrote:
> device    r/s    w/s  kr/s     kw/s  wait  actv  svc_t  %w  %b
> cmdk0     0.0    0.0   0.0      0.0   0.0   0.0    0.0   0   0
> cmdk1     0.0  163.6   0.0  20603.7   1.6   0.5   12.9  24  24
> fd0       0.0    0.0   0.0      0.0   0.0   0.0    0.0   0   0
> sd0       0.0    0.0   0.0      0.0   0.0   0.0    0.0   0   0
> sd1       0.5  140.3   0.3   2426.3   0.0   1.0    7.2   0  14
> sd2       0.0  138.3   0.0   2476.3   0.0   1.5   10.6   0  18
> sd3       0.0  303.9   0.0   2633.8   0.0   0.4    1.3   0   7
> sd4       0.5  306.9   0.3   2555.8   0.0   0.4    1.2   0   7
> sd5       1.0  308.5   0.5   2579.7   0.0   0.3    1.0   0   7
> sd6       1.0  304.9   0.5   2352.1   0.0   0.3    1.1   1   7
> sd7       1.0  298.9   0.5   2764.5   0.0   0.6    2.0   0  13
> sd8       1.0  304.9   0.5   2400.8   0.0   0.3    0.9   0   6

Something is going on with how these writes are ganged together.  The first two
pool disks (sd1 and sd2) average about 17 KB per write, while the other six
(sd3 through sd8) average about 8.7 KB per write.
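
For anyone checking the arithmetic, the per-write size is simply kw/s divided
by w/s.  A trivial sketch, with the values copied from the iostat output above:

    #include <stdio.h>

    int main(void)
    {
        /* average write size = kw/s divided by w/s */
        printf("sd1: %.1f KB/write\n", 2426.3 / 140.3);  /* ~17.3 */
        printf("sd3: %.1f KB/write\n", 2633.8 / 303.9);  /* ~8.7  */
        return 0;
    }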

The aggregate statistics listed show less of a disparity, but one still exists.

I have to wonder if there is some per-drive "max transfer length" type of
setting which differs between the models, letting the Hitachi drives accept
larger transfers and so service fewer, larger I/O operations, each with a
longer service time.
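
If anyone wants to see what a given disk reports, the DKIOCINFO ioctl on the
raw device returns a dk_cinfo structure whose dki_maxtransfer field is the
maximum transfer size in DEV_BSIZE (512-byte) blocks; see dkio(7I).  A quick
sketch (the default device path below is just an example):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/dkio.h>

    int main(int argc, char **argv)
    {
        /* example path; pass the raw device you actually care about */
        const char *dev = argc > 1 ? argv[1] : "/dev/rdsk/c0t1d0s0";
        struct dk_cinfo ci;
        int fd = open(dev, O_RDONLY);

        if (fd < 0 || ioctl(fd, DKIOCINFO, &ci) < 0) {
            perror(dev);
            return 1;
        }
        /* dki_maxtransfer is in DEV_BSIZE (512-byte) units */
        printf("%s: max transfer = %d blocks (%d KB)\n",
            dev, ci.dki_maxtransfer, ci.dki_maxtransfer / 2);
        (void) close(fd);
        return 0;
    }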

Just to avoid confusion, the svc_t field is "service time," not "seek time."
The service time is the total time to service a request, including seek time,
controller overhead, time for the data to transit the SATA bus, and time to
write the data.  If the requests are larger, then all else being equal, the
service time will ALWAYS be higher, but that does NOT imply the drive is
slower.  On the contrary, it often indicates a faster drive which can service
more data per request.
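
A toy calculation (numbers made up for illustration) shows why: a drive that
takes 1.5 ms to service a 16 KB request posts a higher svc_t than one that
takes 1.0 ms per 8 KB request, yet it moves more data per unit time:

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical drives: per-request service time alone
         * says nothing about throughput (KB/ms is roughly MB/s) */
        printf("8 KB in 1.0 ms:  %.1f MB/s per in-flight request\n",
            8.0 / 1.0);
        printf("16 KB in 1.5 ms: %.1f MB/s per in-flight request\n",
            16.0 / 1.5);
        return 0;
    }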

At any rate, there is a reason that the Hitachi drives are handling larger 
requests than the WD drives.  I glanced at the code for a while but could not 
figure out where the max transfer size is determined or used.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
