Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Cindy Swearingen

Hi Charles,

Yes, a faulty or failing disk can kill performance.

I would see if FMA has generated any faults:

# fmadm faulty

Or, if any of the devices are collecting errors:

# fmdump -eV | more
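If the verbose output is long, tallying ereport classes makes a single misbehaving device stand out. A rough sketch, using fabricated sample lines in place of live `fmdump -e` output (the format shown, with the class as the last field, is an assumption; on a real system pipe `fmdump -e` in directly):

```shell
# Tally FMA ereport classes to spot a device accumulating errors.
# The sample lines below stand in for real `fmdump -e` output and are
# fabricated for illustration; the class is the last field on each line.
sample='Oct 04 10:01:02.1234 ereport.io.scsi.cmd.disk.dev.rqs.derr
Oct 04 10:01:05.5678 ereport.io.scsi.cmd.disk.tran
Oct 04 10:02:11.9012 ereport.io.scsi.cmd.disk.tran'
printf '%s\n' "$sample" | awk '{ print $NF }' | sort | uniq -c | sort -rn
```

The most frequent error class sorts to the top; a climbing count on one class usually points at the disk to pull.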

Thanks,

Cindy

On 10/04/12 11:22, Knipe, Charles wrote:

Hey guys,

I’ve run into another ZFS performance disaster that I was hoping someone
might be able to give me some pointers on resolving. Without any
significant change in workload, write performance has dropped off
dramatically. Based on previous experience we tried deleting some files
to free space, even though we’re not near 60% full yet. Deleting files
seemed to help for a little while, but now we’re back in the weeds.

We already have our metaslab_min_alloc_size set to 0x500, so I’m
reluctant to go lower than that. One thing we noticed, which is new to
us, is that zio_state shows a large number of threads in
CHECKSUM_VERIFY. I’m wondering if that’s generally indicative of
anything in particular. I’ve got no errors on any disks, either in zpool
status or iostat -e. Any ideas as to where else I might want to dig in
to figure out where my performance has gone?
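For what it’s worth, a crude way to quantify that pile-up is to tally stages from saved `echo ::zio_state | mdb -k` output. The sample lines and the stage-in-the-third-column layout below are assumptions for illustration; adjust the field number to match the actual mdb output:

```shell
# Count in-flight zios by pipeline stage from saved ::zio_state output.
# Sample lines are fabricated; the ADDRESS/TYPE/STAGE column order is
# an assumption about the dcmd's layout.
zios='ffffff0123456780 zio_read  CHECKSUM_VERIFY
ffffff0123456900 zio_read  CHECKSUM_VERIFY
ffffff0123456a80 zio_write VDEV_IO_START'
printf '%s\n' "$zios" | awk '{ print $3 }' | sort | uniq -c | sort -rn
```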

Thanks

-Charles



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Schweiss, Chip
Sounds similar to the problem discussed here:

http://blogs.everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/

Check 'iostat -xn' and see if one or more disks is stuck at 100%.
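A quick filter for that, assuming the usual `iostat -xn` layout where the device name is the last column and %b (percent busy) the one before it (the sample lines below are made up; on a live box pipe `iostat -xn` in instead):

```shell
# Flag disks whose %b is pegged. In `iostat -xn` output the device name
# is the last field and %b the next-to-last; these sample lines are
# fabricated for illustration.
sample='    0.0    0.1    0.0    0.4  0.0  0.0    0.0    0.3   0   0 c0t0d0
    2.1  180.3   10.2  900.7  0.0  9.8    0.0   53.8   0 100 c0t1d0'
printf '%s\n' "$sample" | awk '$(NF-1) >= 90 { print $NF " is " $(NF-1) "% busy" }'
```

A single disk stuck at or near 100% busy while its siblings idle is the signature of the stuck-drive problem in that blog post.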

-Chip

On Thu, Oct 4, 2012 at 3:42 PM, Cindy Swearingen cindy.swearin...@oracle.com wrote:

 Hi Charles,

 Yes, a faulty or failing disk can kill performance.

 I would see if FMA has generated any faults:

 # fmadm faulty

 Or, if any of the devices are collecting errors:

 # fmdump -eV | more

 Thanks,

 Cindy


