> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Robert Milkowski
> >
> I had a quick look at your results a moment ago.
> The problem is that you used a server with 4GB of RAM + a raid card
> with 256MB of cache. Then your filesize for iozone was set to 4GB - so
> random or not you probably had a relatively good cache hit ratio for
> random reads.

Look again at the raw_results.  I ran it with a 4G file size and also with
12G.  There was no significant difference between the two, so I only compiled
the 4G results into a spreadsheet PDF.
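
For what it's worth, here is a rough back-of-the-envelope sketch (Python,
purely illustrative) of the best-case cache hit ratio each file size would
allow.  The ~3GB-usable-by-ARC figure, and the idea that the controller cache
simply adds on top, are my assumptions, not measurements:

    # Back-of-the-envelope best-case cache hit ratio for uniform random reads.
    # Assumes ~3 GB of the 4 GB of RAM is usable by the ARC plus the 256 MB of
    # controller cache -- both figures are assumptions, not measurements.
    cache_gb = 3.0 + 0.25

    for file_gb in (4.0, 12.0):
        hit_ratio = min(1.0, cache_gb / file_gb)
        print("%4.1f GB file -> best-case hit ratio ~%.0f%%"
              % (file_gb, 100 * hit_ratio))
    # ->  4.0 GB file -> best-case hit ratio ~81%
    # -> 12.0 GB file -> best-case hit ratio ~27%

That is only a best case, of course; the fact that the 4G and 12G runs came
out about the same suggests the cache was not the deciding factor here.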


> And even then a random read from 8 threads gave you only about 40% more
> IOPS for a RAID-Z made out of 5 disks than for a single drive. The poor
> result for HW-R5 is surprising, though it might be that the stripe size
> was not matched to the ZFS recordsize and iozone block size in this case.

I think what you're saying is "with 5 disks performing well, you should
expect roughly 4x the IOPS of a single disk," and "the measured result was
only about 40% higher, which is a poor result."

I agree.  My guess is that the 128k recordsize used in iozone is large enough
that blocks frequently span disks, so a single random read ends up touching
more than one spindle - but I don't know.
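
For concreteness, here is the shape of that concern as I understand it,
assuming (as the usual RAID-Z description goes) that each record is split
evenly across the data disks of the vdev - the numbers are illustrative only:

    # How a single 128k record lays out on a 5-disk RAID-Z (4 data + 1 parity).
    # Purely illustrative; assumes an even split of the record across data disks.
    record_kb  = 128
    data_disks = 5 - 1                  # one disk's worth of parity per stripe

    chunk_kb = record_kb // data_disks  # portion of the record on each disk
    print("each data disk holds ~%dk of every record" % chunk_kb)   # ~32k

    # If reading one record back touches all four data disks, the vdev can only
    # service roughly one such random read at a time, and aggregate random-read
    # IOPS stay close to a single drive's.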


> The issue with raid-z and random reads is that as the cache hit ratio
> goes down to 0, the IOPS approach those of a single drive. For a little
> bit more information see http://blogs.sun.com/roch/entry/when_to_and_not_to

I don't think that's correct, unless you're using a single thread.  As long
as multiple threads are issuing random reads on raidz, and those reads are
small enough that each one is stored entirely on a single disk, you should be
able to keep n-1 disks operating simultaneously and achieve (n-1)x the
performance of a single disk.
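
To spell out the arithmetic I have in mind (the per-disk IOPS figure below is
made up purely for illustration):

    # Idealized aggregate IOPS when each random read fits on a single disk.
    # single_disk_iops is an arbitrary example value, not a measurement.
    single_disk_iops = 200
    n = 5                               # disks in the raidz vdev
    threads = 8                         # concurrent readers, as in the iozone run

    # With enough outstanding reads spread across the vdev, n-1 data disks can
    # each be servicing an independent read at the same time.
    expected = min(threads, n - 1) * single_disk_iops
    print("expected aggregate: ~%d IOPS, vs ~%d for one drive"
          % (expected, single_disk_iops))        # ~800 vs ~200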

Even if blocks are large enough to span disks, you should still be able to
get (n-1)x the throughput of a single disk for large sequential operations.
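
Same arithmetic for streaming throughput (again, the per-disk rate is just an
illustrative number):

    # Idealized large sequential read/write rate across a 5-disk raidz.
    # 80 MB/s per disk is an arbitrary example, not a measurement.
    per_disk_mb_s = 80
    n = 5
    print("expected streaming rate: ~%d MB/s" % ((n - 1) * per_disk_mb_s))  # ~320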
