I'm trying to work out the cause of, and a remedy for, a very sick iSCSI pool on a Solaris 11 host.

The volume is exported from an Oracle storage appliance and there are no errors reported there. The host has no entries in its logs relating to the network connections.

Any zfs or zpool commands that change the state of the pool (such as zfs mount or zpool export) hang and can't be killed.
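In case it's useful, I've been inspecting the hung processes from the kernel side with mdb. This is only a sketch of what I ran (the exact dcmds and which process name to target are my guesses); the stacks all appear to be blocked in the ZFS I/O path:

```shell
# Find the hung zpool/zfs processes and dump their kernel thread stacks.
# ::pgrep locates the processes by name, ::walk thread iterates their
# threads, and ::findstack prints each thread's kernel stack.
echo "::pgrep zpool | ::walk thread | ::findstack" | mdb -k
echo "::pgrep zfs   | ::walk thread | ::findstack" | mdb -k
```

If the stacks show the threads sleeping in zio/txg sync routines, that would be consistent with the pool waiting forever on the iSCSI device.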

fmadm faulty reports:

Jun 27 14:04:24 536fb2ad-1fca-c8b2-fc7d-f5a4a94c165d  ZFS-8000-FD    Major

Host        : taitaklsc01
Platform    : SUN-FIRE-X4170-M2-SERVER  Chassis_id  : 1142FMM02N
Product_sn  : 1142FMM02N

Fault class : fault.fs.zfs.vdev.io
Affects     : zfs://pool=fileserver/vdev=68c1bdefa6f97db8
                  faulted but still in service
Problem in  : zfs://pool=fileserver/vdev=68c1bdefa6f97db8
                  faulted but still in service

Description : The number of I/O errors associated with a ZFS device exceeded
              acceptable levels. Refer to http://sun.com/msg/ZFS-8000-FD
              for more information.

The zpool status output paints a very gloomy picture:

  pool: fileserver
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jun 29 11:59:59 2012
    858K scanned out of 15.7T at 43/s, (scan is slow, no estimated time)
    567K resilvered, 0.00% done

        NAME                                     STATE     READ WRITE CKSUM
        fileserver                               ONLINE       0 1.16M     0
          c0t600144F096C94AC700004ECD96F20001d0  ONLINE     0 1.16M     0  (resilvering)

errors: 1557164 data errors, use '-v' for a list
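My next step is to check whether the iSCSI transport itself is healthy on the initiator side. I'm not yet sure what to look for, but I assume commands along these lines (standard Solaris 11 tools; the device name is the vdev from the pool above) are the right place to start:

```shell
# Show the iSCSI target, its connection state, and the logical unit
# backing the pool's vdev.
iscsiadm list target -v

# Per-device soft/hard/transport error counters since boot.
iostat -En c0t600144F096C94AC700004ECD96F20001d0

# Raw FMA error telemetry, which should show the individual I/O
# errors that drove the ZFS-8000-FD diagnosis.
fmdump -eV | tail -100
```

If iscsiadm reports the session as connected but iostat shows transport errors climbing, that would point at the network path rather than the appliance.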

Any ideas how to determine the cause of the problem and remedy it?


zfs-discuss mailing list