Sounds similar to the problem discussed here:
Check 'iostat -xn' and see if one or more disks are stuck at 100% busy.
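To pick out saturated devices quickly, an awk filter over the `iostat -xn` output works. This is a sketch that assumes the Solaris `iostat -xn` column layout, where `%b` is the 10th field and the device name is the 11th; the sample numbers are invented for illustration:

```shell
# Sketch: flag disks whose %b (percent busy) is at or above 90.
# Sample data stands in for live `iostat -xn` output here.
iostat_sample='    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.5   12.0    4.0  150.0  0.0  0.1    0.0    8.1   0   7 c0t0d0
    1.2  340.0    9.6 4100.0  0.9  9.8    2.6   28.7  14 100 c0t1d0'

printf '%s\n' "$iostat_sample" |
  awk 'NF == 11 && $10+0 >= 90 { print $11, $10"%" }'

# Against a live system you would run something like:
#   iostat -xn 5 3 | awk 'NF == 11 && $10+0 >= 90 { print $11, $10"%" }'
```

The `$10+0` coercion skips the header row (where `%b` evaluates to 0), so only data lines over the threshold are printed.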
On Thu, Oct 4, 2012 at 3:42 PM, Cindy Swearingen wrote:
> Hi Charles,
> Yes, a faulty or failing disk can kill performance.
> I would see if FMA has generated any faults:
> # fmadm faulty
> Or, if any of the devices are collecting errors:
> # fmdump -eV | more
> On 10/04/12 11:22, Knipe, Charles wrote:
>> Hey guys,
>> I’ve run into another ZFS performance disaster that I was hoping someone
>> might be able to give me some pointers on resolving. Without any
>> significant change in workload write performance has dropped off
>> dramatically. Based on previous experience we tried deleting some files
>> to free space, even though we’re not near 60% full yet. Deleting files
>> seemed to help for a little while, but now we’re back in the weeds.
>> We already have our metaslab_min_alloc_size set to 0x500, so I’m
>> reluctant to go lower than that. One thing we noticed, which is new to
>> us, is that zio_state shows a large number of threads in
>> CHECKSUM_VERIFY. I’m wondering if that’s generally indicative of
>> anything in particular. I’ve got no errors on any disks, either in zpool
>> status or iostat -e. Any ideas as to where else I might want to dig in
>> to figure out where my performance has gone?
zfs-discuss mailing list
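To make the fmdump suggestion above easier to act on, here is a sketch that tallies which ereport classes are accumulating. It assumes `fmdump -e` (the non-verbose form) prints one ereport per line with the class as the last field; the sample lines are invented for illustration:

```shell
# Sketch: count FMA ereports by class so a noisy device stands out.
# Sample data stands in for live `fmdump -e` output here.
fmdump_sample='TIME                 CLASS
Oct 04 09:14:02.1001 ereport.io.scsi.cmd.disk.dev.rqs.derr
Oct 04 09:14:02.4412 ereport.fs.zfs.checksum
Oct 04 09:15:11.0007 ereport.fs.zfs.checksum'

printf '%s\n' "$fmdump_sample" |
  awk 'NR > 1 { count[$NF]++ } END { for (c in count) print count[c], c }' |
  sort -rn

# Against a live system you would run something like:
#   fmdump -e | awk 'NR > 1 { count[$NF]++ } END { for (c in count) print count[c], c }' | sort -rn
```

A pile-up of `ereport.fs.zfs.checksum` events against one vdev would line up with the threads stuck in CHECKSUM_VERIFY that Charles reported.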