On 09.02.10 09:58, Felix Buenemann wrote:
On 09.02.10 02:30, Bob Friesenhahn wrote:
On Tue, 9 Feb 2010, Felix Buenemann wrote:

Well, to make things short: using JBOD + ZFS striped mirrors vs. the
controller's RAID10 dropped the max. sequential read I/O from over
400 MByte/s to below 300 MByte/s. However, random I/O and sequential
writes seemed to perform

Much of the difference is likely that your controller implements true
RAID10, whereas ZFS "striped" mirrors are actually load-shared mirrors.
Since ZFS does not use true striping across vdevs, it relies on
sequential prefetch requests to get the sequential read rate up.
Sometimes ZFS's prefetch is not aggressive enough.

I have observed that there may still be considerably more read
performance available (to another program/thread) even while a benchmark
program is reading sequentially as fast as it can.

Try running two copies of your benchmark program at once and see what
happens.

Yes, JBOD + ZFS load-balanced mirrors do seem to work better under
heavy load. I tried rebooting a Windows VM from NFS, which took about 43
sec with a hot cache in both cases. But when doing this during a bonnie++
benchmark run, the ZFS mirrors won big time, taking just 2:47 min
instead of over 4 min to reboot the VM.
So I think the ZFS mirrors will win in a real-world scenario.

On a side note, however, I noticed that for small sequential I/O (copying
a 150 MB source tree to NFS), the ZFS mirrors were 50% slower than the
controller's RAID10.

I had a hunch that the controller's volume read-ahead was interfering with the ZFS load-shared mirrors, so I disabled it, and voilà: sequential reads jumped from 270 MByte/s to 420 MByte/s. That checks out nicely, because writes are about 200 MByte/s: reads can be served from both halves of each mirror, so roughly double the write rate is to be expected.


- Felix

