The "vms" pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a 'zpool replace' on it.
If not, you'll need to restore from backup, preferably to a pool with
raidz or mirroring so zfs can repair faults automatically.
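The recovery path suggested above might look like the sketch below. The pool and device names for the restore target (newpool, c0t6d0, c0t7d0) are hypothetical, not something from the thread:

```shell
# Sketch only -- device names for the new pool are illustrative.

# If the original c0t3d0 can be re-attached, clear the error state so
# the suspended pool can resume I/O:
zpool clear vms

# Otherwise, restore from backup -- ideally onto a redundant pool,
# e.g. a mirror, so ZFS can self-heal single-disk faults:
zpool create newpool mirror c0t6d0 c0t7d0
# then restore the datasets, e.g. with zfs send/receive from the backup
```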
On Mon, 15 Aug 2011, Doug Schwabauer wrote:
Help - I've got a bad disk in a zpool and need to replace it. I've got an
extra drive that's not being used, although it's still marked like it's in a pool ("xvm").
So I need to get the "xvm" pool destroyed, c0t5d0 marked as available, and
replace c0t3d0 with c0t5d0.
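The steps described here could be sketched as follows; the -f flags are assumptions, and 'zpool destroy' is irreversible, so device and pool names should be double-checked first:

```shell
# 1. Import the stale "xvm" pool (it was last accessed by another
#    system, so a forced import may be needed), then destroy it to
#    free up c0t5d0:
zpool import -f xvm
zpool destroy xvm

# 2. Replace the failed disk in "vms" with the freed drive. As the
#    reply above explains, this can only succeed once the pool's
#    suspended I/O has been resumed (e.g. after 'zpool clear vms').
zpool replace vms c0t3d0 c0t5d0
```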
root@kc-x4450a # zpool status -xv
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
scrub: none requested
        NAME        STATE     READ WRITE CKSUM
        vms         UNAVAIL      0     3     0  insufficient replicas
          c0t2d0    ONLINE       0     0     0
          c0t3d0    UNAVAIL      0     6     0  experienced I/O failures
          c0t4d0    ONLINE       0     0     0
errors: Permanent errors have been detected in the following files:
root@kc-x4450a # zpool replace -f vms c0t3d0 c0t5d0
cannot replace c0t3d0 with c0t5d0: pool I/O is currently suspended
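The replace is refused because the pool has suspended all I/O after the device failure. Assuming the faulted disk becomes reachable again, a possible way forward is the 'zpool clear' that the status output itself recommends:

```shell
# While pool I/O is suspended, ZFS refuses administrative writes such
# as replace. Clearing the error state resumes I/O if the affected
# device is reachable again:
zpool clear vms

# Once the pool is back online, the replace can be retried:
zpool replace -f vms c0t3d0 c0t5d0
```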
root@kc-x4450a # zpool import
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
          c0t4d0  FAULTED  corrupted data
zfs-discuss mailing list