I've got a Thumper running snv_57 and a large ZFS pool.  I recently noticed a 
drive throwing some read errors, so I did the right thing and replaced it with 
a spare using 'zpool replace'.
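
For reference, the command was basically this (c4t1d0 being the flaky drive, 
c5t1d0 one of the hot spares, as shown in the status output below):

# zpool replace bigpool c4t1d0 c5t1d0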

Everything went well, but the resilvering process seems to be taking an 
eternity:

# zpool status
  pool: bigpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress, 4.66% done, 12h16m to go
config:

        NAME          STATE     READ WRITE CKSUM
        bigpool       ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c4t4d0    ONLINE       0     0     0
            c7t4d0    ONLINE       0     0     0
            c6t4d0    ONLINE       0     0     0
            c1t4d0    ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c4t0d0    ONLINE       0     0     0
            c7t0d0    ONLINE       0     0     0
            c6t0d0    ONLINE       0     0     0
            c1t0d0    ONLINE       0     0     0
            c0t0d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c5t5d0    ONLINE       0     0     0
            c4t5d0    ONLINE       0     0     0
            c7t5d0    ONLINE       0     0     0
            c6t5d0    ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0

** Here's the resilver: **
            spare     ONLINE       0     0     0
              c4t1d0  ONLINE      18     0     0
              c5t1d0  ONLINE       0     0     0
***

            c7t1d0    ONLINE       0     0     0
            c6t1d0    ONLINE       0     0     0
            c1t1d0    ONLINE       0     0     0
            c0t1d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c5t6d0    ONLINE       0     0     0
            c4t6d0    ONLINE       0     0     0
            c7t6d0    ONLINE       0     0     0
            c6t6d0    ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
            c0t6d0    ONLINE       0     0     0
            c4t2d0    ONLINE       0     0     0
            c7t2d0    ONLINE       0     0     0
            c6t2d0    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c5t7d0    ONLINE       0     0     0
            c4t7d0    ONLINE       0     0     0
            c7t7d0    ONLINE       0     0     0
            c6t7d0    ONLINE       0     0     0
            c1t7d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c5t3d0    ONLINE       0     0     0
            c4t3d0    ONLINE       0     0     0
            c7t3d0    ONLINE       0     0     0
            c6t3d0    ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
        spares
          c5t1d0      INUSE     currently in use
          c5t2d0      AVAIL   

Looks just fine, except it's been running for 3 days already!  These are 
500 GB drives.
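
The progress numbers don't add up, either.  At 4.66% done after roughly 72 
hours, a straight extrapolation gives

        72h / 0.0466 = ~1545h = ~64 days

for the whole resilver, yet the status output claims only 12h16m to go.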

Should I have pulled the bad drive and replaced it directly, rather than 
swapping in a spare?  Is there some sort of contention issue because the spare 
and the original drive are both still online?
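
Is watching the per-device activity a sensible way to check for that?  
Something like:

# zpool iostat -v bigpool 5

And once the resilver finally completes, am I right that the cleanup is just 
'zpool detach bigpool c4t1d0' to drop the failed drive and leave the spare in 
its place?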

Not sure what to think here...