zpool status shows:
        NAME         STATE     READ WRITE CKSUM
        external     DEGRADED     0     0     0
          raidz1     DEGRADED     0     0     0
            c18t0d0  ONLINE       0     0     0
            c18t0d0  FAULTED      0     0     0  corrupted data

I used to have a normal raidz1 with devices c18t0d0 and c19t0d0, but 
c19t0d0 broke, so I plugged a new drive into its slot.  However, both 
replace and attach give errors:

# zpool replace external c19t0d0
cannot replace c19t0d0 with c19t0d0: no such device in pool

# zpool attach external c18t0d0 c19t0d0
cannot attach c19t0d0 to c18t0d0: can only attach to mirrors and top-level disks

Why does zpool status show the same drive (c18t0d0) twice?  And how can I 
clear the fault and attach a new, good drive in the c19t0d0 slot?
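For reference, here is what I was planning to try next, guessing at the 
syntax from the zpool(1M) man page, so some of these may be wrong for a 
raidz vdev:

```shell
# Explicit old-device/new-device form of replace. The one-argument form
# assumes the new disk sits in the same slot (which is my situation),
# but maybe the pool has lost track of c19t0d0 entirely.
zpool replace external c19t0d0 c19t0d0

# Clear the error state after a resilver completes.
zpool clear external

# Check progress and any remaining errors.
zpool status -v external
```

(I have not run these yet; I did not want to make things worse before 
asking here.)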

Any help appreciated,
- Jeff
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss