[zfs-discuss] b131 OpenSol x86, 4 disk RAIDZ-1 scrub performance way down.

2010-02-01 Thread Jake Carroll
Hi all.

With the recent builds (b129, b130 and b131) I've been noticing some zpool 
performance issues when scrubbing.

Running the drives bare into some cheap SATA controllers, on a cheap mobo with 
6GB of DDR2 + an Intel Q6600, and with 4 * 1TB Samsung consumer-grade SATA 
drives, I've been accustomed to seeing around 150 to 180MB/sec scrubs on a 
single pool. 

Until b129, 130 and 131 hit. I've got dedup=on, compression=on (default, not 
gzip), no dedupe verify etc.

I now see maybe 10MB/sec across 4 drives on scrub. Turning dedupe off seemingly 
didn't help.

A mate has managed to replicate it on a totally different system, with 
different HDDs, a different SATA card (LSI 3081 series), a different mobo et al.

Interestingly, scrubbing the rpool (root/boot) in its single-drive config seems 
to show normal performance (60+ MB/sec maintained). It's only when a multi-disk 
RAIDZ-1 is scrubbed that the performance problem shows itself.
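
For reference, this is roughly how I've been keeping an eye on it (the pool 
name below is just a placeholder for my data pool):

    # pool health and current scrub progress
    zpool status -v tank

    # confirm what is actually set on the pool
    zfs get dedup,compression tank

    # per-vdev throughput while the scrub runs
    zpool iostat -v tank 5

    # per-disk service times and %busy
    iostat -xn 5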

Are we seeing an issue of dedupe here, or something less complex entirely?

Thoughts/comments/issues/ideas to help troubleshoot?

Thanks all.

z


Re: [zfs-discuss] b131 OpenSol x86, 4 disk RAIDZ-1 scrub performance way down.

2010-02-01 Thread Eric D. Mudama

On Mon, Feb  1 at 16:12, Jake Carroll wrote:

> Hi all.
>
> With the recent builds (b129, b130 and b131) I've been noticing some
> zpool performance issues when scrubbing.
>
> Running the drives bare into some cheap SATA controllers, on a cheap
> mobo with 6GB of DDR2 + an Intel Q6600, and with 4 * 1TB Samsung
> consumer-grade SATA drives, I've been accustomed to seeing around
> 150 to 180MB/sec scrubs on a single pool.
>
> Until b129, 130 and 131 hit. I've got dedup=on, compression=on
> (default, not gzip), no dedupe verify etc.


I think with dedupe, you've turned your scrub into a mostly random
operation.
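
If you want a feel for how big the dedup table (DDT) has grown, and whether it 
could plausibly still fit in your 6GB of RAM, something like this should dump 
the DDT summary and histogram (pool name is just a placeholder):

    # dedup table (DDT) summary and histogram; can take a while on a big pool
    zdb -DD tank

If that table is much bigger than what stays cached in RAM, I'd expect the 
scrub to spend most of its time seeking.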


> I now see maybe 10MB/sec across 4 drives on scrub. Turning dedupe
> off seemingly didn't help.


Disabling dedupe doesn't change the state of existing data.  Unless
you've disabled dedupe and then re-copied all your data, I believe your
existing data is all still in the deduped state.
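
One quick sanity check is the DEDUP column in zpool list; if the ratio is still 
well above 1.00x, previously written blocks are still being tracked in the 
dedup table (pool name again just an example):

    # the DEDUP column shows the pool-wide dedup ratio;
    # anything above 1.00x means deduplicated blocks remain
    zpool list tank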


> Are we seeing an issue of dedupe here, or something less complex
> entirely?


Sounds like dedupe to me... My non-dedupe zpools are scrubbing at the
same rate as ever in b130 on multiple servers.

--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org



Re: [zfs-discuss] b131 OpenSol x86, 4 disk RAIDZ-1 scrub performance way down.

2010-02-01 Thread Lutz Schumann
When you send the data (as the mate did), all data is rewritten and the settings 
you made (dedupe etc.) are effectively applied. 

If you change a parameter (dedupe, compression), this only holds true for NEWLY 
written data. If you do not change the data, all existing data is still deduped. 

Also, when you send, all data is recreated sequentially. This means that with 
send/receive your data ends up laid out more sequentially on disk. That could be 
another reason. 
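
A rough sketch of how such a rewrite could look, with the pool and dataset 
names only as examples:

    # snapshot the dataset, then send/receive it locally so every
    # block is rewritten with the current dedupe/compression settings
    zfs snapshot tank/data@rewrite
    zfs send tank/data@rewrite | zfs receive tank/data_new

    # verify the new copy, then destroy or rename the old dataset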

Regards, 
Robert