Hi,

I have a machine running OpenSolaris 2009.06 with 8 SATA drives in a SCSI-connected enclosure.

I had a drive fail and accidentally replaced the wrong one, which 
unsurprisingly caused the resilver to fail. The status of the zpool then 
ended up as:

 pool: storage2
 state: FAULTED
status: An intent log record could not be read.
        Waiting for adminstrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
        or ignore the intent log records by running 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-K4
 scrub: none requested
config:

    NAME           STATE     READ WRITE CKSUM
    storage2       FAULTED      0     0     1  bad intent log
        raidz1       ONLINE       0     0     0
            c9t4d2     ONLINE       0     0     0
            c9t4d3     ONLINE       0     0     0
            c10t4d2    ONLINE       0     0     0
            c10t4d4    ONLINE       0     0     0
        raidz1       DEGRADED     0     0     6
            c10t4d0    UNAVAIL      0     0     0  cannot open
            replacing  ONLINE       0     0     0
                c9t4d0   ONLINE       0     0     0
                c10t4d3  ONLINE       0     0     0
            c10t4d1    ONLINE       0     0     0
            c9t4d1     ONLINE       0     0     0
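
For reference, the replace that kicked off the failed resilver was, as far as I can reconstruct from the status above, something like this (device names may not be exact):

    # replace what I thought was the failed disk with the newly inserted one
    # -- it turned out I had pulled the wrong physical drive
    zpool replace storage2 c9t4d0 c10t4d3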

Running "zpool clear storage2" caused the machine to panic (core dump) and reboot.
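
If the panic stack would help, I should be able to pull it out of the crash dump with mdb; assuming savecore is enabled and the dump went to the default location, something along these lines:

    cd /var/crash/`hostname`
    mdb -k unix.0 vmcore.0
    ::status
    ::stack
    ::msgbuf
    $q

(::status prints the panic string, ::stack the panic stack trace, and ::msgbuf the tail of the console messages.)
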
I've tried removing the replacement drive and putting the faulty one back, which gives:

  pool: storage2
 state: FAULTED
status: An intent log record could not be read.
        Waiting for adminstrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
        or ignore the intent log records by running 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-K4
 scrub: none requested
config:

    NAME           STATE     READ WRITE CKSUM
    storage2       FAULTED      0     0     1  bad intent log
        raidz1       ONLINE       0     0     0
            c9t4d2     ONLINE       0     0     0
            c9t4d3     ONLINE       0     0     0
            c10t4d2    ONLINE       0     0     0
            c10t4d4    ONLINE       0     0     0
        raidz1       DEGRADED     0     0     6
            c10t4d0    FAULTED      0     0     0  corrupted data
            replacing  DEGRADED     0     0     0
                c9t4d0   ONLINE       0     0     0
                c9t4d4   UNAVAIL      0     0     0  cannot open
            c10t4d1    ONLINE       0     0     0
            c9t4d1     ONLINE       0     0     0

Again, the machine core dumps when I run "zpool clear storage2".
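
One thing I haven't tried yet is exporting the pool and importing it again, roughly:

    zpool export storage2
    zpool import storage2

but I'm wary of doing that without advice, in case the import panics as well or the pool refuses to import at all.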

Does anyone have any suggestions on what the best course of action would be now?