On 06/02/2010 02:38, Ross Walker wrote:
On Feb 5, 2010, at 10:49 AM, Robert Milkowski <mi...@task.gda.pl> wrote:

Actually, there is.
One difference is that when writing to a raid-z{1|2} pool, compared to a raid-10 pool, you should get better throughput if at least 4 drives are used. In RAID-10 the maximum write throughput you can get is the aggregate throughput of half the number of disks, and that only assuming there are no other bottlenecks between the OS and the disks, because mirroring doubles the bandwidth requirement. In RAID-Zn there is some extra overhead for writing the parity, but other than that you should get a write throughput closer to that of T-N disks (where T is the total number of disks and N is the RAID-Z level) instead of T/2 as in RAID-10.
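The arithmetic behind that claim can be sketched as follows (a simplified model, not from the original post: it assumes every disk streams at the same rate and that no controller or bus bottleneck intervenes):

```python
# Simplified model: theoretical max sequential write throughput of
# RAID-10 vs RAID-Z for the same disk count, assuming identical disks
# and no other bottleneck between the OS and the disks.

def raid10_write_mb_s(disks: int, per_disk_mb_s: float) -> float:
    """RAID-10: every byte is written twice (mirroring), so at most
    half the aggregate disk bandwidth carries user data."""
    return disks / 2 * per_disk_mb_s

def raidz_write_mb_s(disks: int, parity: int, per_disk_mb_s: float) -> float:
    """RAID-Z{1|2}: only the parity disks' worth of bandwidth is
    overhead, so user data streams at (T - N) disks' speed."""
    return (disks - parity) * per_disk_mb_s

# Example: 8 disks at 100 MB/s each.
print(raid10_write_mb_s(8, 100))     # 400.0 MB/s
print(raidz_write_mb_s(8, 1, 100))   # 700 MB/s (raidz1)
print(raidz_write_mb_s(8, 2, 100))   # 600 MB/s (raidz2)
```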

That hasn't been my experience with raidz. I get a max read and write IOPS of the slowest drive in the vdev.

Which makes sense because each write spans all drives and each read spans all drives (except the parity drives) so they end up having the performance characteristics of a single drive.
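A rough model of that small-random-I/O behaviour (my own illustrative sketch, not from the thread) is that a RAID-Z vdev acts like a single spindle, while RAID-10 mirrors can serve independent requests concurrently:

```python
# Illustrative small-random-IOPS model, assuming each request in RAID-Z
# touches every (non-parity) disk in the vdev, while RAID-10 mirrors
# handle independent requests in parallel.

def raidz_random_iops(per_disk_iops: float) -> float:
    """Whole RAID-Z vdev behaves like one drive for random I/O."""
    return per_disk_iops

def raid10_random_read_iops(disks: int, per_disk_iops: float) -> float:
    """Either side of each mirror can serve a read independently."""
    return disks * per_disk_iops

def raid10_random_write_iops(disks: int, per_disk_iops: float) -> float:
    """Each write must land on both sides of a mirror."""
    return disks / 2 * per_disk_iops

# Example: 8 disks at 150 IOPS each.
print(raidz_random_iops(150))             # 150
print(raid10_random_read_iops(8, 150))    # 1200
print(raid10_random_write_iops(8, 150))   # 600.0
```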



Please note that I was writing about write *throughput* in terms of MB/s, not IOPS. But even in terms of write IOPS, RAID-Z can be faster than RAID-10, provided the I/O is issued asynchronously and there is enough memory to buffer it for up to 30 s. When that holds, from the application's point of view writing is as fast as writing to memory with either raid-z or raid-10; but because ZFS aggregates the buffered writes into what are essentially large sequential writes, raid-z can deliver more flush throughput than raid-10, which has to write all the data twice. So raid-z may be able to flush its transactions to disk more quickly.
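To make the flush argument concrete, here is a back-of-the-envelope sketch (my own illustration, reusing the same simplified bandwidth model as above: buffered async writes become one large sequential stream at transaction-group sync time):

```python
# Illustrative model: seconds needed to flush one transaction group of
# buffered async writes, assuming ZFS streams them sequentially at the
# layout's usable write bandwidth.

def flush_seconds(dirty_mb: float, disks: int, parity: int,
                  per_disk_mb_s: float, mirrored: bool) -> float:
    if mirrored:
        # RAID-10: all data is written twice.
        usable = disks / 2 * per_disk_mb_s
    else:
        # RAID-Z: only the parity disks' bandwidth is overhead.
        usable = (disks - parity) * per_disk_mb_s
    return dirty_mb / usable

# Example: 4 GiB of dirty data, 8 disks at 100 MB/s each.
print(flush_seconds(4096, 8, 0, 100, mirrored=True))   # 10.24 s (raid-10)
print(flush_seconds(4096, 8, 1, 100, mirrored=False))  # ~5.85 s (raidz1)
```

Under these assumptions the raidz1 pool finishes committing the same transaction group in roughly half the time.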

See http://milek.blogspot.com/2006/04/software-raid-5-faster-tha_114588672235104990.html


--
Robert Milkowski
http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
