Hi all 

I have had a ZFS file server for a while now. I recently
upgraded it, giving it 16GB RAM and an SSD for L2ARC. This allowed me to
evaluate dedupe on certain datasets, which worked pretty well. 

The main reason for the upgrade was that something wasn't working quite
right: I was getting errors on all of the disks, leading to occasional
data loss. The first thing I did, therefore, was schedule regular
scrubbing. 

It was not long before I cut this back from daily
to weekly, as no errors were being found and performance during the
scrub was, obviously, not so hot. The scrub was scheduled for 4am on Sunday
morning, when it would have the least impact on use, and normally ran for
approx 4-6 hours. 
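For reference, a weekly 4am-Sunday scrub can be set up with a crontab entry along these lines ("tank" here is just a placeholder for the actual data pool name):

```shell
# Scrub the data pool at 04:00 every Sunday ("tank" is a placeholder pool name)
# m  h  dom mon dow  command
  0  4  *   *   0    /usr/sbin/zpool scrub tank
```
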

Recently, however, it has started taking over 20 hours
to complete. Not much has happened to the pool in that time: a few extra files
added, maybe a couple of deletions, but not a huge amount. I am finding
it difficult to understand why performance would have dropped so dramatically.

FYI, the server is my dev box running Solaris 11 Express:
two mirrored pairs of 1.5TB SATA disks for data (pool at version 28), a separate root
pool, and a 64GB SSD for L2ARC. The data pool has 1.2TB allocated. 
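In case it helps, these are the sort of commands I'd use to check on the pool and the dedup table ("tank" again standing in for the real pool name); the DDT histogram in particular may be relevant since dedupe is enabled on some datasets:

```shell
# Diagnostics on a ZFS pool ("tank" is a placeholder pool name)
zpool status -v tank   # scrub progress/duration and per-device error counts
zpool list tank        # pool size, allocated space, and dedup ratio
zdb -DD tank           # dedup table (DDT) histogram: entry counts and sizes
```
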

Can anyone shed some light on this? 


zfs-discuss mailing list
[zfs-discuss] Scr... Karl Wagner
