I guess you could try 'zpool import -f'.  This is a pretty odd status,
though: the raidz1 vdev is only DEGRADED, which a raidz1 should survive
with a single disk down, yet the pool itself shows FAULTED with
corrupted data.
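
Something like this from the LiveCD (just a sketch; the '-R /mnt'
altroot is optional, it only keeps the mounts out of the LiveCD's own
filesystem tree):

    # force the import; "last accessed by another system" is why -f is needed
    zpool import -f -R /mnt tank

    # then see what ZFS actually reports as damaged
    zpool status -v tank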

Perhaps a more knowledgeable list member can explain.

On Sat, Jan 24, 2009 at 12:48 PM, Brad Hill <b...@thosehills.com> wrote:
>> I've seen reports of a recent Seagate firmware update
>> bricking drives again.
>>
>> What's the output of 'zpool import' from the LiveCD?
>> It sounds like more than 1 drive is dropping off.
>
>
> r...@opensolaris:~# zpool import
>  pool: tank
>    id: 16342816386332636568
>  state: FAULTED
> status: The pool was last accessed by another system.
> action: The pool cannot be imported due to damaged devices or data.
>        The pool may be active on another system, but can be imported using
>        the '-f' flag.
>   see: http://www.sun.com/msg/ZFS-8000-EY
> config:
>
>        tank        FAULTED  corrupted data
>          raidz1    DEGRADED
>            c6t0d0  ONLINE
>            c6t1d0  ONLINE
>            c6t2d0  ONLINE
>            c6t3d0  UNAVAIL  cannot open
>            c6t4d0  ONLINE
>
>  pool: rpool
>    id: 9891756864015178061
>  state: ONLINE
> status: The pool was last accessed by another system.
> action: The pool can be imported using its name or numeric identifier and
>        the '-f' flag.
>   see: http://www.sun.com/msg/ZFS-8000-EY
> config:
>
>        rpool       ONLINE
>          c3d0s0    ONLINE
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
