As a follow-up:

I ran the same dd test as earlier, but this time I stopped the scrub first:

pool0       14.1T  25.4T     88  4.81K   709K   262M
pool0       14.1T  25.4T    104  3.99K   836K   248M
pool0       14.1T  25.4T    360  5.01K  2.81M   230M
pool0       14.1T  25.4T    305  5.69K  2.38M   231M
pool0       14.1T  25.4T    389  5.85K  3.05M   293M
pool0       14.1T  25.4T    376  5.38K  2.94M   328M
pool0       14.1T  25.4T    295  3.29K  2.31M   286M

~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 6.50394 s, 322 MB/s
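
For anyone wanting to reproduce this, the test boils down to something
like the following (the 1-second iostat interval is arbitrary):

~# zpool scrub -s pool0                                    # stop the running scrub
~# zpool iostat pool0 1                                    # watch throughput in another terminal
~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000   # same sequential write test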

Stopping the scrub seems to have increased my write performance by
another ~60% over the highest number I saw from the metaslab change
alone earlier (that peak was 201 MB/s).

This is the performance I was seeing out of this array when newly built.

I have two follow-up questions:

1. We changed the metaslab size from 10M to 4K, which is a pretty
drastic change. Is there some intermediate value that should be used
instead, and/or is there a downside to using such a small value? (See
the sketch below these questions for how the change is applied.)

2. I'm still confused by the poor scrub performance and its impact on
write performance. I'm not seeing a lot of I/Os or processor load, so
I'm wondering what else I might be missing. (The kind of monitoring I
mean is sketched below.)
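
For context on question 1, a minimal sketch of how that kind of change
is applied (assuming the tunable involved is metaslab_min_alloc_size,
whose default is 0xa00000 = 10M; 0x1000 = 4K):

~# echo "metaslab_min_alloc_size/Z 0x1000" | mdb -kw   # change the live kernel value to 4K

and, to persist it across reboots, a line in /etc/system:

set zfs:metaslab_min_alloc_size = 0x1000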

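For question 2, the kind of monitoring I mean is nothing exotic,
roughly:

~# zpool status pool0   # scrub progress and scan rate
~# iostat -xn 1         # per-disk %b, actv, asvc_t
~# mpstat 1             # per-CPU utilization
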
-Don
