Hi all, first post to this mailing list so please forgive me if I miss 
something obvious. Earlier this year I went over 80% disk utilisation on my 
home server and saw performance start to degrade. I migrated from the old pool 
of 4 x 1TB WD RE2-GPs (raidz1) to a new pool made of 6 x 2TB WD EURS (raidz2). 
My original plan of swapping one disk at a time was swiftly banjaxed when I 
found out about the change in sector size, but I got there in the end.
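For anyone searching the archives later: the sector-size issue can be checked by dumping the cached pool configuration with zdb and looking at the ashift value (the pool name "tank" below is just a placeholder):

```shell
# Dump the cached pool configuration and filter for ashift.
# ashift=9 means 512-byte alignment, ashift=12 means 4KiB alignment;
# raidz vdevs created with ashift=9 on 4KiB-sector drives are a
# well-known cause of poor write performance.
# "tank" is a placeholder -- substitute your real pool name.
zdb -C tank | grep ashift
```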

Things have been fine for a couple of months - files transferred between 
filesystems on the pool at 60-80MiB/sec. In the meantime I read about appalling 
zfs performance on disks with 4k sectors, and breathed a sigh of relief as it 
seemed I'd somehow managed to avoid it. However, for the last week the average 
transfer rate has dropped to 8.6MiB/sec with no obvious changes to the system, 
other than free space ticking down a little (3.76TB free of 8TB usable). 
zpool status reports no errors, a scrub took around 8.5 hours (and repaired 0), 
and generally the rest of the system seems normal. I'm running oi148, zfs 
v28, 4GiB ECC RAM on an old Athlon BE-4050. The rpool is on a separate SSD, no 
recent hardware or software changes I can think of.
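If it would help, I'm happy to post output from the usual diagnostics; this is roughly what I'd run (again, "tank" is a placeholder pool name, and I'm assuming a per-disk view from zpool iostat -v would show whether one member of the raidz2 is lagging the others):

```shell
# Overall pool capacity and health ("tank" is a placeholder name)
zpool list tank
zpool status -v tank

# Per-vdev / per-disk throughput, sampled every 5 seconds --
# a single slow disk in the raidz2 should stand out here
zpool iostat -v tank 5

# OS-level extended device statistics (Solaris/illumos iostat);
# a consistently high asvc_t on one device can indicate a drive
# that is quietly retrying reads without reporting errors
iostat -xn 5
```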

I'd be really interested to hear of potential causes (and, with any luck, 
remedies!) for this behaviour, as all of a sudden moving ISOs around has become 
something I have to plan. I'm not especially experienced with this sort of 
thing (or zfs in general beyond following setup guides), but I'm keen to learn 
from the best. 

Thanks very much for your time,
This message posted from opensolaris.org
zfs-discuss mailing list
