more below...

On Nov 25, 2009, at 7:10 AM, Paul Kraus wrote:

I posted baseline stats at http://www.ilk.org/~ppk/Geek/

baseline test was 1 thread, 3 GiB file, 64 KiB to 512 KiB record sizes

480-3511-baseline.xls is an iozone output file

iostat-baseline.txt is the iostat output for the device in use (annotated)
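
A run matching that description would look something like this (a
reconstruction from the parameters above, not the exact command line;
/pool/testfile is a placeholder path):

  iozone -a -s 3g -y 64k -q 512k -i 0 -i 1 -i 2 -f /pool/testfile -b 480-3511-baseline.xls

Here -s sets the file size, -y and -q bound the record-size sweep,
-i 0/1/2 select the write, read, and random read/write tests, and -b
writes the Excel-compatible output file.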

I also noted an odd behavior yesterday and have not had a chance to
characterize it further. I was testing various combinations of vdev
quantities and mirror quantities.

As I changed the number of vdevs (stripes) from 1 through 8 (all
backed by partitions on the same logical disk on the 3511) there was
no real change in sequential write, random write, or random read
performance. Sequential read performance did show a drop, from 216
MiB/sec. at 1 vdev to 180 MiB/sec. at 8 vdevs. This was about as
expected.

As I changed the number of mirror components things got interesting.
Keep in mind that I only have one 3511 for testing right now; I had to
use partitions from two other production 3511s to get three mirror
components on different arrays. As expected, write performance did not
change as I went from 1 to 2 to 3 mirror components, but the read
performance was interesting... see below:

read performance
mirrors   sequential        random
   1      174 MiB/sec.       23 MiB/sec.
   2      229 MiB/sec.       30 MiB/sec.
   3      223 MiB/sec.      125 MiB/sec.

What the heck happened here? Going from 1 to 2 mirrors saw a large
increase in sequential read performance, and going from 2 to 3 mirrors
showed a HUGE increase in random read performance. It "feels" like the
behavior of the zfs code changed between 2 and 3 mirrors for the
random read data.

I can't explain this.  It may require a detailed understanding of the
hardware configuration to identify the potential bottleneck.

The ZFS mirroring code doesn't care how many mirrors there are; it
just goes through the list.  If the performance is not symmetrical from
all sides of the mirror, then YMMV.
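
As a toy sketch of that idea (illustrative C only, not the actual
vdev_mirror.c; the names here are invented): each read is handed to
just one side of the mirror, chosen by walking the child list, so
every extra side that can seek independently adds random-read
capacity, while a write always hits every side.

/*
 * Toy sketch (not ZFS source) of spreading mirror reads over N sides.
 */
#include <stdio.h>

#define MAX_CHILDREN 8

struct mirror_vdev {
        int nchildren;  /* number of sides in the mirror */
        int rotor;      /* next child to hand a read to */
};

/* Pick one side for a read by just going through the list. */
static int
mirror_child_select(struct mirror_vdev *mv)
{
        int c = mv->rotor;

        mv->rotor = (mv->rotor + 1) % mv->nchildren;
        return (c);
}

int
main(void)
{
        struct mirror_vdev mv = { .nchildren = 3, .rotor = 0 };
        int reads[MAX_CHILDREN] = { 0 };

        /* Issue 12 reads; they spread evenly over the 3 sides. */
        for (int i = 0; i < 12; i++)
                reads[mirror_child_select(&mv)]++;

        for (int c = 0; c < mv.nchildren; c++)
                printf("child %d served %d reads\n", c, reads[c]);
        return (0);
}

The real child-selection logic has more to it than a simple rotor,
but the scaling intuition is the same.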

Now, to investigate further, I tried multiple mirror components on the
same array (my test 3511). Not that you would do this in production,
but I was curious what would happen. In this case the throughput
degraded across the board as I added mirror components, as one would
expect. In the random read case the array was delivering less overall
performance than it did when it was one part of the earlier test (16
MiB/sec. combined vs. 1/3 of 125 MiB/sec.). See sheet 7 of
http://www.ilk.org/~ppk/Geek/throughput-summary.ods for these test
results. Sheet 8 is the last test I did last night, using the NRAID
logical disk type to try to get the 3511 to pass a disk through to
zfs while keeping the advantage of the 3511's cache. I'm not sure what
to read into those numbers.

I read it as: the single array, as configured (10+1 RAID-5), can
deliver around 130 random read IOPS @ 128 KB.
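(The arithmetic: 16 MiB/sec. divided by 128 KiB per I/O is 128 IOPS,
and equivalently 130 IOPS x 128 KiB is about 16 MiB/sec., matching
the combined figure measured above.)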
 -- richard
