> The scrub I/O has lower priority than other I/O.
>
> In later ZFS releases, scrub I/O is also throttled. When the throttle
> kicks in, the scrub can drop to 5-10 IOPS. This shouldn't be much of
> an issue; scrubs do not need to be, and are not intended to be, run
> very often -- perhaps once a quarter or so.
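
(As an aside: if I understand it correctly, the throttle being described
is governed by the kernel tunables zfs_scrub_delay and zfs_scan_idle in
the newer scan code, and on an OI box they can be read -- and in principle
relaxed -- with something like:

   echo "zfs_scrub_delay/D" | mdb -k       # current delay, in ticks
   echo "zfs_scan_idle/D" | mdb -k         # idle window used to decide when to throttle
   echo "zfs_scrub_delay/W0t0" | mdb -kw   # set to 0 to effectively disable the throttle

The variable names are from memory, so treat that as a sketch rather than
gospel.)
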
I understand the lower-priority I/O and the throttling, but what confuses me is this:
On my primary head:
 scan: scrub in progress since Fri May 13 14:04:46 2011
    24.5G scanned out of 14.2T at 340K/s, (scan is slow, no estimated time)
    0 repaired, 0.17% done

I have a second NAS head, also running OI 147, on the same type of
server, with the same SAS card, connected to the same type of disk
shelf -- and a zpool scrub over there shows:
 scan: scrub in progress since Sat May 14 11:10:51 2011
    29.0G scanned out of 670G at 162M/s, 1h7m to go
    0 repaired, 4.33% done

Obviously there is less data on the second server, but the first
server has 88 x SAS drives while the second has only 10 x 7200 RPM SATA
drives. I would expect those 88 SAS drives to easily outperform 10
SATA drives -- but they aren't.

On the first server, iostat -Xn shows at most 30-40 IOPS per drive,
while on the second server it shows 400 IOPS per drive.

On the first server the disk busy (%b) numbers never climb above 30%,
while on the second they spike to 96%.

This performance problem isn't just related to scrubbing either. I see
mediocre performance when trying to write to the array as well. If I
were seeing hardware errors, high service times, high load, or other
errors, then that might make sense. Instead I seem to have mostly idle
disks that simply aren't being asked to do any work. It's almost as if
ZFS is sitting around twiddling its thumbs instead of writing data.
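
I can also paste the per-vdev view from something like

   zpool iostat -v <poolname> 5

(with <poolname> substituted for the real pool name) if that is useful --
it should show whether what little I/O there is gets spread across all
the vdevs or concentrated on a few of them.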

I'm happy to provide real numbers; suffice it to say that none of them
make any sense to me.

The array actually has 88 disks + 4 hot spares (1 of each of the two
drive sizes per controller channel) + 4 Intel X-25E 32GB SSDs (2 x 2-way
mirrors, split across controller channels).
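
The exact vdev layout is easiest to read straight from

   zpool status <poolname>

(again with the real pool name substituted), and I'm happy to paste that
output as well.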

If you have any ideas or things I should test, I will gladly look into them.

-Don