On Mon, Mar 8, 2010 at 3:33 PM, Tim Cook <t...@cook.ms> wrote:

> Is there a way to manually trigger a hot spare to kick in? Mine doesn't
> appear to be doing so. What happened is I exported a pool to reinstall
> Solaris on this system. When I went to re-import it, one of the drives
> refused to come back online. So, the pool imported degraded, but it
> doesn't seem to want to use the hot spare... I've tried triggering a
> scrub to see if that would give it a kick, but no-go.
uts/common/fs/zfs/vdev.c says:

    /*
     * If we fail to open a vdev during an import, we mark it as
     * "not available", which signifies that it was never there to
     * begin with. Failure to open such a device is not considered
     * an error.
     */

If there is no error, then the fault management code probably doesn't
kick in and autoreplace isn't triggered.

> r...@fserv:~$ zpool status
>   pool: fserv
>  state: DEGRADED
> status: One or more devices could not be opened.  Sufficient replicas exist for
>         the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>    see: http://www.sun.com/msg/ZFS-8000-2Q
>  scrub: scrub completed after 3h19m with 0 errors on Mon Mar  8 02:28:08 2010
> config:
>
>         NAME                      STATE     READ WRITE CKSUM
>         fserv                     DEGRADED     0     0     0
>           raidz2-0                DEGRADED     0     0     0
>             c2t0d0                ONLINE       0     0     0
>             c2t1d0                ONLINE       0     0     0
>             c2t2d0                ONLINE       0     0     0
>             c2t3d0                ONLINE       0     0     0
>             c2t4d0                ONLINE       0     0     0
>             c2t5d0                ONLINE       0     0     0
>             c3t0d0                ONLINE       0     0     0
>             c3t1d0                ONLINE       0     0     0
>             c3t2d0                ONLINE       0     0     0
>             c3t3d0                ONLINE       0     0     0
>             c3t4d0                ONLINE       0     0     0
>             12589257915302950264  UNAVAIL      0     0     0  was /dev/dsk/c7t5d0s0
>         spares
>           c3t6d0                  AVAIL

That crazy device name is the guid (you can see it with e.g.
zdb -l /dev/rdsk/c3t1d0s0).

I was able to replicate your situation here:

# uname -a
SunOS osol-dev 5.11 snv_133 i86pc i386 i86pc Solaris

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
        cache
          c6t2d0    ONLINE       0     0     0
        spares
          c6t3d0    AVAIL

errors: No known data errors

# zpool export tank
<removed c6t1d0>
# zpool import tank
# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            6462738093222634405  UNAVAIL      0     0     0  was /dev/dsk/c6t0d0s0
            c6t1d0               ONLINE       0     0     0
        cache
          c6t2d0                 ONLINE       0     0     0
        spares
          c6t3d0                 AVAIL

errors: No known data errors

# zpool get autoreplace tank
NAME  PROPERTY     VALUE    SOURCE
tank  autoreplace  on       local

# fmdump -e -t 08Mar2010
TIME                 CLASS

As you can see, no error report was posted. You can try importing the pool
again and check whether `fmdump -e` lists any errors afterwards.

To press the spare into service manually, use `zpool replace`.

-- 
Giovanni Tirloni
sysdroid.com
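For the original pool, that would look roughly like this (a sketch, not
tested against your pool: the guid is copied from your `zpool status`
output, and it assumes spare c3t6d0 is still AVAIL):

```shell
# Manually substitute the hot spare for the unavailable vdev,
# identifying the missing device by its guid from 'zpool status':
zpool replace fserv 12589257915302950264 c3t6d0

# Later, once the original disk has been repaired or replaced and
# the pool has resilvered, return c3t6d0 to the spare list:
zpool detach fserv c3t6d0
```

If the import never posts a fault event, this manual replace is the usual
way to activate the spare.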
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss