Hi Karl,

Recently, however, it has started taking over 20 hours to complete. Not much has 
changed in that time: a few extra files added, maybe a couple of deletions, but 
nothing substantial. I am finding it difficult to understand why performance 
would have dropped so dramatically.

FYI, the server is my dev box running Solaris 11 Express: two mirrored pairs of 
1.5TB SATA disks for data (pool at v28), a separate root pool, and a 64GB SSD 
for L2ARC. The data pool has 1.2TB allocated.

Can anyone shed some light on this?

All I can tell you is that I had terrible scrub rates when I used dedup. The 
DDT was a bit too big to fit in memory (judging from some very basic 
debugging). Only two of my datasets were deduped. On scrubs and resilvers I 
sometimes saw terrible rates, below 10MB/sec; later they rose to around 
70MB/sec. After upgrading some disks (same speeds observed) I got rid of the 
deduped datasets (via zfs send/receive), and guess what: all of a sudden the 
scrub ran at a steady 350MB/sec and took only a fraction of the time.
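If you suspect the DDT is the culprit, it is worth checking its size before 
blaming it. A rough sketch (the pool name "tank" and the 128KiB average block 
size are assumptions, and ~320 bytes per in-core DDT entry is only the commonly 
quoted ballpark figure):

```shell
# Inspect dedup-table statistics directly (entry counts, in-core size):
#   zpool status -D tank
#   zdb -DD tank
#
# Back-of-the-envelope RAM estimate for the DDT, assuming ~1.2TB of
# unique data in 128KiB blocks and ~320 bytes per in-core entry:
entries=$(( (1200 * 1024 * 1024 * 1024) / (128 * 1024) ))  # number of unique blocks
ddt_bytes=$(( entries * 320 ))                              # approximate in-core DDT size
echo "approx DDT size: $(( ddt_bytes / 1024 / 1024 )) MiB"  # prints: approx DDT size: 3000 MiB
```

Around 3GB for the DDT alone would explain a lot on a box with modest RAM: once 
the table spills out of ARC, every dedup block lookup during a scrub can turn 
into random I/O.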

While I certainly cannot give a complete explanation, I can tell you that from 
my personal observation, simply getting rid of dedup sped up my scrubs by a 
factor of about 7 (same server, same disks, same data).
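For reference, the send/receive migration I used looked roughly like the 
following (a sketch only; "tank/data" and "tank/data-new" are placeholder 
names, and you should verify the copy before destroying anything):

```shell
# Turning dedup off does not rewrite existing blocks, so rebuild the
# dataset: received blocks are written fresh, without DDT references.
zfs set dedup=off tank/data
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank/data-new

# After verifying the new dataset, retire the old one:
#   zfs destroy -r tank/data
#   zfs rename tank/data-new tank/data
```

The key point is that merely setting dedup=off is not enough; only rewriting 
the data (send/receive, or copying it) actually removes the DDT entries.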

Kind regards,

zfs-discuss mailing list