On 05/31/2010 04:45 PM, Bob Friesenhahn wrote:
> On Mon, 31 May 2010, Sandon Van Ness wrote:
>>
>> I think I have come to the conclusion that the problem here is CPU,
>> because it only happens with parity RAID. If the problem were
>> I/O-bound, I would expect to see it either way; if anything, the
>> non-parity pool is heavier on I/O, since it is no longer
>> CPU-bottlenecked (a dd write test gives me nearly 700 MB/s vs. 450
>> with raidz2).
>
> The "parity RAID" certainly does impose more computational overhead,
> but not because of the parity calcuation.  You should put that out of
> your mind right away.  With raidz, each 128K block is chopped into
> smaller chunks which are written across the disks in the vdev.  This
> is less efficient (in many ways, but least of which is "parity") than
> writing 128K blocks to each disk in turn.  You are creating a blast of
> smaller I/Os to the various disks which may seem like more CPU but
> could be related to PCI-E access, interrupts, or a controller bottleneck.
>
> Bob
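
(To put rough numbers on that point, a back-of-the-envelope sketch,
assuming the 20-disk raidz2 discussed here, with 2 parity disks and the
default 128K recordsize:

    128 KiB record / 18 data disks ~= 7.1 KiB per disk per record

so each full-record write fans out into roughly 20 small I/Os, 18 data
plus 2 parity, instead of one large write per disk.)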

With sequential writes I don't see how writing parity would make a
difference. I just created a plain 20-disk zpool that is doing the same
writes every 5 seconds; the only difference is that it isn't maxing out
CPU usage during the writes, and I don't see the transfer stall during
the writes like I did on raidz2.
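
For anyone who wants to reproduce the comparison, the test is
essentially the following (a sketch only; the pool name and the c0t*d0
device names are placeholders, not my actual layout):

    # create the pool (drop "raidz2" for the plain 20-disk stripe)
    $ zpool create tank raidz2 c0t0d0 c0t1d0 [...] c0t19d0
    # sequential write test: ~10 GB of zeroes in 1 MB chunks
    $ dd if=/dev/zero of=/tank/testfile bs=1024k count=10000
    # watch the pool's write bursts at 5-second intervals
    $ zpool iostat -v tank 5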