On May 16, 2011, at 7:32 PM, Donald Stahl wrote:

> As a followup:
> 
> I ran the same DD test as earlier- but this time I stopped the scrub:
> 
> pool0       14.1T  25.4T     88  4.81K   709K   262M
> pool0       14.1T  25.4T    104  3.99K   836K   248M
> pool0       14.1T  25.4T    360  5.01K  2.81M   230M
> pool0       14.1T  25.4T    305  5.69K  2.38M   231M
> pool0       14.1T  25.4T    389  5.85K  3.05M   293M
> pool0       14.1T  25.4T    376  5.38K  2.94M   328M
> pool0       14.1T  25.4T    295  3.29K  2.31M   286M
> 
> ~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
> 2000+0 records in
> 2000+0 records out
> 2097152000 bytes (2.1 GB) copied, 6.50394 s, 322 MB/s
> 
> Stopping the scrub seemed to increase my performance by another 60%
> over the highest numbers I saw just from the metaslab change earlier
> (that peak was 201 MB/s).
> 
> This is the performance I was seeing out of this array when newly built.
> 
> I have two follow-up questions:
> 
> 1. We changed the metaslab size from 10M to 4k - that's a pretty
> drastic change. Is there some intermediate value that should be used
> instead, and/or is there a downside to using such a small metaslab size?

metaslab_min_alloc_size is not the metaslab size. From the source:

http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/metaslab.c#57

/*
 * A metaslab is considered "free" if it contains a contiguous
 * segment which is greater than metaslab_min_alloc_size.
 */

Reducing this value makes it easier for the allocator to identify a metaslab
for allocation as the file system becomes full.
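
For reference, a tunable like this is typically adjusted on a live illumos-based
system with mdb, roughly as sketched below (illustrative only; 0x1000 is 4 KB and
the exact syntax may vary by release):

~# echo metaslab_min_alloc_size/Z | mdb -k         # print the current value (hex)
~# echo metaslab_min_alloc_size/Z 1000 | mdb -kw   # set it to 0x1000 (4 KB) on the live kernel

To keep the setting across reboots, the usual approach is an /etc/system entry such as:

set zfs:metaslab_min_alloc_size = 4096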

> 
> 2. I'm still confused by the poor scrub performance and its impact on
> the write performance. I'm not seeing a lot of IOs or processor load,
> so I'm wondering what else I might be missing.

For slow disks with the default zfs_vdev_max_pending, the IO scheduler becomes
ineffective. Consider reducing zfs_vdev_max_pending to see if performance improves.
Based on recent testing I've done on a variety of disks, a value of 1 or 2 can be
better for 7,200 rpm disks or slower. The tradeoff is a few IOPS for much better
average latency.
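
If you want to experiment, the change can be made on the live kernel with mdb and
persisted in /etc/system, along these lines (a sketch assuming the stock illumos
tunable; verify the syntax on your release):

~# echo zfs_vdev_max_pending/D | mdb -k        # show the current per-vdev queue depth
~# echo zfs_vdev_max_pending/W0t2 | mdb -kw    # drop it to 2 on the live kernel

and, to persist across reboots, add to /etc/system:

set zfs:zfs_vdev_max_pending = 2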
 -- richard


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
