On Mon, 20 Jul 2009, Marion Hakanson wrote:

> Bob, have you tried changing your benchmark to be multithreaded?  It
> occurs to me that maybe a single cpio invocation is another bottleneck.
> I've definitely experienced the case where a single bonnie++ process was
> not enough to max out the storage system.

Adding more cpio invocations would likely cause more data to be read in total, but it would also thrash the disks with many more conflicting IOPS.
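Roughly, a concurrent-reader test might look like the sketch below. This is a hypothetical Python harness, not the cpio run itself; each file named on the command line gets its own reader thread, and the interesting question is whether aggregate throughput keeps climbing or the competing streams just seek against each other. Each run needs distinct, uncached files or the ARC hides the disks entirely.

    #!/usr/bin/env python
    # Minimal sketch of a concurrent sequential-read harness (not the
    # actual cpio test).  One reader thread per file named on the
    # command line; the 128K record size is an assumption chosen to
    # match the zfs recordsize.
    import sys
    import threading
    import time

    RECORD = 128 * 1024

    def stream_file(path, counts, slot):
        # Read one file sequentially, recording how many bytes we got.
        total = 0
        with open(path, 'rb') as f:
            while True:
                buf = f.read(RECORD)
                if not buf:
                    break
                total += len(buf)
        counts[slot] = total

    def run(paths):
        counts = [0] * len(paths)
        threads = [threading.Thread(target=stream_file, args=(p, counts, i))
                   for i, p in enumerate(paths)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start
        print("%d concurrent readers: %.1f MB/s aggregate"
              % (len(paths), sum(counts) / elapsed / 1e6))

    if __name__ == '__main__':
        run(sys.argv[1:])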

> I'm not suggesting that the bug you're demonstrating is not real.  It's

It is definitely real. Sun has opened internal CR 6859997. It is now in Dispatched state at High priority.

> that points out a problem.  Rather, I'm thinking that maybe the timing
> comparisons between low-end and high-end storage systems on this particular
> test are not revealing the whole story.

The similarity in performance between the low-end and high-end storage systems is a sign that the rotating rust itself is not a whole lot faster on the high-end systems. Since zfs is failing to prefetch, only one (or maybe two) disks are accessed at a time. If more read I/Os are issued in parallel, then the data read rate will be vastly higher on the higher-end systems.
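To put rough numbers on it (the per-I/O service time below is an assumption for illustration, not a measurement): with only one 128K read outstanding, throughput is capped by per-I/O latency no matter how fast the array is, while keeping all of the spindles busy multiplies that cap by the number of disks.

    # Back-of-the-envelope numbers for why parallel read I/O matters.
    # The 4 ms average per-I/O service time is an assumed figure for
    # illustration only; substitute measured numbers for a real estimate.
    RECORD_KB = 128          # zfs recordsize used for each read
    SERVICE_MS = 4.0         # assumed average disk service time per I/O
    DISKS = 12

    one_outstanding = RECORD_KB / SERVICE_MS        # KB/ms == MB/s
    all_spindles_busy = one_outstanding * DISKS

    print("1 outstanding read  : ~%.0f MB/s" % one_outstanding)
    print("%d outstanding reads: ~%.0f MB/s" % (DISKS, all_spindles_busy))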

With my 12-disk array and a large sequential read, zfs can issue 12 requests for 128K at once, and since it can also queue pending I/Os, it can request many more than that. Care is required, since over-reading will penalize the system. It is not an easy thing to get right.
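As a toy illustration of that trade-off (this is not the actual zfs zfetch code, just a sketch), a prefetcher wants to grow its readahead window while the access pattern stays sequential, cap the number of speculative reads it queues, and collapse the window as soon as the pattern breaks, so that over-reading does not steal bandwidth and cache from real work.

    # Toy model of a sequential-read prefetch window, only to illustrate
    # the trade-off described above; it is not how the zfs zfetch code
    # actually works.
    class PrefetchWindow(object):
        def __init__(self, max_inflight=24):
            self.next_expected = 0   # next block a sequential stream would ask for
            self.window = 1          # how many blocks to read ahead
            self.max_inflight = max_inflight  # cap on speculative reads queued

        def on_read(self, block):
            if block == self.next_expected:
                # Stream still looks sequential: read further ahead,
                # but never queue more than the cap allows.
                self.window = min(self.window * 2, self.max_inflight)
            else:
                # The pattern broke: anything already prefetched was
                # wasted bandwidth and cache, so collapse the window.
                self.window = 1
            self.next_expected = block + 1
            # Block numbers to issue speculatively on behalf of this read.
            return list(range(block + 1, block + 1 + self.window))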

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
