>> Was my raidz2 performance comment above correct?
>> That the write speed is that of the slowest disk?
>> That is what I believe I have read.

> You are sort-of-correct that it's the write speed
> of the slowest disk.

My experience is not in line with that statement.  RAIDZ writes a complete 
stripe plus parity (RAIDZ2 -> two parities, etc.).  The write speed of the 
entire stripe is limited by the slowest disk, but only for its portion of the 
stripe.  In the case of a 5-spindle RAIDZ2, 1/3 of the stripe is written to 
each of three disks, with parity info on the other two disks.  The throughput 
would be roughly 3x that of the slowest disk for read or write.

> Mirrored drives will be faster, especially for
> random I/O. But you sacrifice storage for that
> performance boost.

Is that really true?  Even after glancing at the code, I don't know if ZFS 
overlaps mirror reads across devices.  Watching my rpool mirror leads me to 
believe that it does not.  If true, then mirror reads would be no faster than a 
single disk.  Mirror writes are no faster than the slowest disk.

As a somewhat related rant, there seems to be confusion about mirror IOPS vs. 
RAIDZ[123] IOPS.  Assuming mirror reads are not overlapped, then a mirror vdev 
will read and write at roughly the same throughput and IOPS as a single disk 
(ignoring bus and cpu constraints).

Also ignoring bus and cpu constraints, a RAIDZ[123] vdev will read and write at 
roughly the same throughput of a single disk, multiplied by the number of data 
drives: three in the config being discussed.  Also, a RAIDZ[123] vdev will have 
IOPS performance similar to that of a single disk.

A stack of mirror vdevs will, of course, perform much better than a single 
mirror vdev in terms of throughput and IOPS.

A stack of RAIDZ[123] vdevs will also perform much better than a single 
RAIDZ[123] vdev in terms of throughput and IOPS.
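A back-of-the-envelope model of the claims above, with zero science to back it either (the per-disk figures are made up, and it bakes in my assumption that mirror reads are not overlapped):

```python
def vdev_perf(kind, n_disks, parity, disk_mb_s, disk_iops):
    """Rough (streaming MB/s, random IOPS) for a single vdev.

    Assumes mirror reads are not overlapped across devices and
    ignores bus and CPU constraints, as discussed above.
    """
    if kind == "mirror":
        # Reads and writes track a single disk.
        return disk_mb_s, disk_iops
    if kind == "raidz":
        # Streaming scales with the data disks; IOPS stays at
        # roughly one disk, since every drive in the vdev takes
        # part in each stripe.
        return (n_disks - parity) * disk_mb_s, disk_iops
    raise ValueError(kind)

def pool_perf(n_vdevs, vdev_mb_s, vdev_iops):
    """A stack of vdevs scales roughly linearly with vdev count."""
    return n_vdevs * vdev_mb_s, n_vdevs * vdev_iops

# 5-spindle RAIDZ2 vs a 2-way mirror, 100 MB/s / 150 IOPS disks:
print(vdev_perf("raidz", 5, 2, 100, 150))   # -> (300, 150)
print(vdev_perf("mirror", 2, 0, 100, 150))  # -> (100, 150)
# Three mirror vdevs stacked into one pool:
print(pool_perf(3, 100, 150))               # -> (300, 450)
```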

RAIDZ tends to have more CPU overhead, but provides more flexibility in 
choosing the optimal data-to-redundancy ratio.

Many read IOPS problems can be mitigated by an L2ARC, even one built from a 
set of small, fast disks.  Many write IOPS problems can be mitigated by a 
dedicated ZIL (log) device.

My anecdotal conclusions backed by zero science,
Marty
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
