Hello!
I've got a ZFS-based file server running FreeBSD 7.1-STABLE, with two
raidz1 vdevs of 3 x 500 GB drives each.
Recently my system has exhibited exceptionally poor performance in a
way that confuses me. None of the drives are in a degraded state, and
zpool iostat reports typical performance figures, while actual
applications (dd, in this case) only receive data at much lower rates.
My basic benchmark, and one that I've used before, is to dd a big file
to /dev/null. What's occurring now is mystifying: dd reports much
lower figures than zpool iostat does.
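(One variable worth eliminating first: dd's default block size is 512
bytes, so the run below issues over 1.4 million tiny read() calls. If
per-syscall overhead turns out to matter, a re-run with an explicit
block size would look something like this -- just a sketch, not yet
tried:)
--
# same test with 1 MB reads instead of the 512-byte default,
# to rule out per-read() syscall overhead
dd if=sph-kingok.avi of=/dev/null bs=1m
--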
--
n...@ice2:~% uname -a
FreeBSD ice2 7.1-STABLE FreeBSD 7.1-STABLE #0: Sun Feb 8 20:31:17 EST
2009 r...@ice2:/usr/obj/usr/src/sys/GENERIC amd64
n...@ice2:/media/nf/videos% dd if=sph-kingok.avi of=/dev/null
1433288+0 records in
1433288+0 records out
733843456 bytes transferred in 61.124812 secs (12005656 bytes/sec)
(and, while that's running:)
n...@ice2:~% sudo zpool iostat -v 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.19T  1.53T     13     29   990K   153K
  raidz1    1.01T   361G      9     11   641K  58.8K
    ad4s1d      -      -      3      8   283K  30.2K
    ad8s1d      -      -      3      8   282K  30.2K
    ad10s1d     -      -      3      8   284K  30.2K
  raidz1     191G  1.17T      4     17   349K  94.5K
    ad6s1d      -      -      1     12   138K  48.3K
    ad10s1e     -      -      1     12   134K  48.3K
    ad12s1d     -      -      2     12   141K  48.3K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.19T  1.53T  1.41K      0   179M      0
  raidz1    1.01T   361G  1.40K      0   179M      0
    ad4s1d      -      -    676      0  72.1M      0
    ad8s1d      -      -    453      0  45.1M      0
    ad10s1d     -      -    605      0  61.7M      0
  raidz1     191G  1.17T      4      0   127K      0
    ad6s1d      -      -      0      0  49.9K      0
    ad10s1e     -      -      1      0  74.4K      0
    ad12s1d     -      -      1      0  92.1K      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.19T  1.53T  1.19K    102   152M   517K
  raidz1    1.01T   361G  1.19K     42   152M   149K
    ad4s1d      -      -    369     16  39.2M  75.4K
    ad8s1d      -      -    436     17  48.7M  77.2K
    ad10s1d     -      -    563     19  65.3M  76.2K
  raidz1     191G  1.17T      5     59   116K   368K
    ad6s1d      -      -      0     32  63.9K   187K
    ad10s1e     -      -      2     34   146K   188K
    ad12s1d     -      -      1     34   114K   190K
----------  -----  -----  -----  -----  -----  -----
^C
--
As I understand it, one discounts the first output of zpool iostat
(it reports averages since boot) and reads the subsequent outputs as
averages over each repeat interval (in this case, two seconds). What
I'm seeing is the ZFS pool being read from at 70-90 MB/sec, while dd
is only getting data at 12 MB/sec. Where is all the data that's being
read from the disks going?
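(One guess: ZFS's file-level prefetch could be reading ahead at the
vdev level far beyond what dd actually consumes. A sketch of how I'd
test that, assuming the vfs.zfs.prefetch_disable sysctl that I believe
7.x's ZFS provides:)
--
# check whether file-level prefetch is on (0 = enabled), then
# disable it temporarily and re-run the dd benchmark
sysctl vfs.zfs.prefetch_disable
sudo sysctl vfs.zfs.prefetch_disable=1
--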
As I said before, in previous tests iostat and dd reported figures
that corresponded with each other. I'm not sure what's changed since
then.
(There is nothing else reading from disk at the same time. As soon as
the dd concludes, zpool iostat reports the arrays are idle.)
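(To double-check that from outside ZFS, here's a sketch of watching
the raw disks with gstat; the -f regex just narrows the display to the
ad* drives in the pool:)
--
# GEOM-level per-disk activity, independent of zpool iostat's accounting
gstat -f '^ad[0-9]'
--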
FWIW, here's what top reports while I'm running the dd:
--
CPU: 0.4% user, 0.0% nice, 14.3% system, 4.5% interrupt, 80.8% idle
Mem: 1950M Active, 868M Inact, 537M Wired, 238M Cache, 363M Buf, 87M Free
Swap: 1024M Total, 101M Used, 923M Free, 9% Inuse
  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
 8402 nf          1  50    0  4604K   788K zio-i  0   0:03  8.59% dd
--
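The dd process seems to spend its time waiting in "zio-i..." (a ZFS
I/O wait channel, if I read that right), and note the 101M of swap in
use. In case memory pressure on the ARC is a factor, here's a sketch
of how I'd check its current and target sizes -- assuming the
kstat.zfs.misc.arcstats sysctls that I believe FreeBSD's ZFS exports:
--
# current ARC size and target size, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c
--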
If anyone can shed some light on what's going on here, I'd be most appreciative.
Thanks,
nf