I have a 10-drive raidz. Recently one of the disks appeared to be generating
errors (this later turned out to be a cable problem), so I removed the disk
from the array and ran vendor diagnostics on it, which zeroed the drive.
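
Roughly, the removal went like this (the offline step below is my best
recollection of the exact invocation, so treat it as approximate):

# zpool offline data c0t2d0
  (pulled the drive, ran the vendor diagnostics, then reinstalled it)
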
Upon reinstalling the disk, however, ZFS will not resilver it: the disk is now
referred to numerically instead of by device name, and when I try to replace
it, I get:
# zpool replace data 17096229131581286394 c0t2d0
cannot replace 17096229131581286394 with c0t2d0: cannot replace a replacing device
If I try to detach it, I get:
# zpool detach data 17096229131581286394
cannot detach 17096229131581286394: no valid replicas
The current zpool status output looks like this:
# zpool status -v
  pool: data
 state: DEGRADED
 scrub: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        data                        DEGRADED     0     0     0
          raidz1                    DEGRADED     0     0     0
            c0t0d0                  ONLINE       0     0     0
            c0t1d0                  ONLINE       0     0     0
            replacing               UNAVAIL      0   543     0  insufficient replicas
              17096229131581286394  FAULTED      0   581     0  was /dev/dsk/c0t2d0s0/old
              11342560969745958696  FAULTED      0   582     0  was /dev/dsk/c0t2d0s0
            c0t3d0                  ONLINE       0     0     0
            c0t4d0                  ONLINE       0     0     0
            c0t5d0                  ONLINE       0     0     0
            c0t6d0                  ONLINE       0     0     0
            c0t7d0                  ONLINE       0     0     0
            c2t2d0                  ONLINE       0     0     0
            c2t3d0                  ONLINE       0     0     0

errors: No known data errors
I have also tried exporting and reimporting the pool (the exact commands are
below), with no change. Any help would be greatly appreciated.
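
For completeness, the export/import cycle was nothing fancier than the
following (from memory, so the exact invocation is approximate):

# zpool export data
# zpool import data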