> Manoj Nayak wrote:
>> Hi All.
>>
>> The ZFS documentation says that ZFS schedules its I/O in such a way
>> that it manages to saturate a single disk's bandwidth using enough
>> concurrent 128K I/Os. The number of concurrent I/Os is decided by
>> vq_max_pending; the default value for vq_max_pending is 35.
>>
>> We have created a 4-disk raid-z group inside a ZFS pool on a Thumper,
>> with the ZFS record size set to 128K. When we read/write a 128K
>> record, ZFS issues a 128K/3 I/O to each of the 3 data disks in the
>> 4-disk raid-z group.
>>
>
> Yes, this is how it works for a read without errors.  For a write, you
> should see 4 writes, each 128KBytes/3.  Writes may also be
> coalesced, so you may see larger physical writes.
>
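To make the read/write split concrete, here is a minimal Python sketch
of the arithmetic described above (a simplification: the real raidz
allocator also handles padding and partial stripes, which this ignores):

    # A 128K record on a 4-disk single-parity raid-z group: reads with
    # no errors touch only the 3 data disks; writes also touch parity,
    # so all 4 disks each receive a 128K/3 chunk.
    RECORD = 128 * 1024
    DISKS = 4
    DATA_DISKS = DISKS - 1                 # one disk's worth of parity

    chunk = RECORD / DATA_DISKS            # ~42.6K per disk

    read_ios = DATA_DISKS                  # parity skipped on clean reads
    write_ios = DISKS                      # data + parity all written

    print(f"per-disk chunk: {chunk / 1024:.1f}K")
    print(f"read:  {read_ios} I/Os of {chunk / 1024:.1f}K")
    print(f"write: {write_ios} I/Os of {chunk / 1024:.1f}K")
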
>> We need to saturate the bandwidth of all three data disks in the
>> raid-z group. Is it required to set the vq_max_pending value to
>> 35*3=105?
>>
>
> No.  vq_max_pending applies to each vdev.

A 4-disk raidz group issues a 128K/3 = ~42.6K I/O to each individual
data disk. If 35 concurrent 128K I/Os are enough to saturate a disk
(vdev), then 35*3 = 105 concurrent ~42.6K I/Os would be required to
saturate the same disk. A sketch of this arithmetic follows below.
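
To make the calculation explicit, here is a minimal Python sketch of
the back-of-envelope reasoning above. Note it assumes saturation
depends only on the number of bytes in flight; per-I/O overhead (seek
and rotational latency) means more, smaller I/Os do not move data as
efficiently as fewer, larger ones.

    # If 35 concurrent 128K I/Os keep one disk saturated, how many
    # concurrent 128K/3 I/Os keep the same number of bytes in flight?
    FULL_IO = 128 * 1024        # one full 128K record
    QUEUE_DEPTH = 35            # default vq_max_pending
    DATA_DISKS = 3              # data disks in a 4-disk raid-z group

    split_io = FULL_IO / DATA_DISKS              # ~42.6K per data disk
    bytes_in_flight = QUEUE_DEPTH * FULL_IO      # saturation point
    needed = bytes_in_flight / split_io          # 35 * 3 = 105

    print(f"split I/O size: {split_io / 1024:.1f}K")
    print(f"equivalent queue depth at ~42.6K: {needed:.0f}")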

Thanks
Manoj Nayak

> Use iostat to see what
> the device load is.  For the commonly used Hitachi 500 GByte disks
> in a thumper, the read media bandwidth is 31-64.8 MBytes/s.  Writes
> will be about 80% of reads, or 24.8-51.8 MBytes/s.  In a thumper,
> the disk bandwidth will be the limiting factor for the hardware.
> -- richard
>
> 
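For a rough sense of what saturation would look like in iostat (e.g.
"iostat -xn 1" for per-device throughput), here is a small Python
sketch that scales the per-disk Hitachi figures quoted above to the 3
data disks of the raid-z group. Real throughput also depends on seek
overhead and on where the data sits on the platters.

    # Aggregate bandwidth bounds for 3 data disks, using only the
    # per-disk figures quoted above (31-64.8 MBytes/s reads, writes
    # about 80% of reads).
    READ_MBS = (31.0, 64.8)     # quoted media read bandwidth, MBytes/s
    WRITE_FRACTION = 0.80       # writes ~80% of reads, per above
    DATA_DISKS = 3

    read_lo, read_hi = (b * DATA_DISKS for b in READ_MBS)
    write_lo, write_hi = (b * WRITE_FRACTION * DATA_DISKS
                          for b in READ_MBS)

    print(f"raid-z group reads:  {read_lo:.1f}-{read_hi:.1f} MBytes/s")
    print(f"raid-z group writes: {write_lo:.1f}-{write_hi:.1f} MBytes/s")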
