Greetings zfs-discuss@
I have been trying to narrow this down for quite some time. The problem
resides on a couple of OpenSolaris/SXCE boxes that are used as Xen dom0
hosts. Under high disk load in the domU guests (a backup process, for
example), domU performance is terrible. The worst part is that iostat
shows *very* high %w numbers (percent of time the device queue is
non-empty), while zpool iostat shows quite low numbers.
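For reference, the stats in the pastebins below were gathered with
something along these lines (flags and interval are approximate, from
memory):

    # on dom0, 5-second interval
    iostat -xnz 5
    zpool iostat -v rpool 5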
A couple of things to mention:
1. /etc/system tuning: set zfs:zfs_arc_max = 524288000 (verification sketch after the list)
2. dom0 is pinned to a dedicated CPU, and its memory is capped at 1 GB (see the GRUB line after the list).
3. no hardware RAID involved; raw SATA drives are fed to dom0 under rpool.
4. domUs sit on top of zvols with an 8K volblocksize (creation example after the list)
5. iostat: http://pastebin.com/m4bf1c409
6. zpool iostat: http://pastebin.com/m179269e2
7. domU definition: http://pastebin.com/m48f18a76
8. dom0 bits are snv_115, snv_124, snv_126 and snv_130
9. domUs have ext3 mounted with: noatime,commit=120
10. there are ~4 domUs per dom0 host, each with dedicated CPU(s) (config snippet after the list).
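Regarding item 1, a quick way to confirm the ARC cap actually took
effect (kstat names as I remember them, please verify on your build):

    # current ARC size and configured maximum, in bytes
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c_max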
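Regarding item 2, the dom0 CPU/memory setup is done on the hypervisor
line in /boot/grub/menu.lst, roughly like this (paraphrased, not copied
verbatim from the boxes):

    kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M dom0_vcpus_pin dom0_max_vcpus=1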
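Regarding item 4, the backing zvols were created roughly like this
(dataset name and size are placeholders; the volblocksize is the
relevant part):

    zfs create -V 20G -o volblocksize=8k rpool/zvol/domu1-root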
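Regarding item 10, the CPU dedication is done with the vcpus/cpus
settings in each domU config (the full definition is in the pastebin
from item 7); the relevant lines look roughly like:

    vcpus = 1
    cpus = "2"   # pin this domU's vCPU to physical CPU 2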
Any hint on where to go from here would be appreciated.