While replacing a raidz1 of four 500GB drives with four 1.5TB drives, one 
at a time, I ran into an interesting issue on the third drive.  The process 
was to remove the old drive, put the new drive in, and let it resilver.
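The per-drive swap can be sketched with standard zpool commands (the pool name `tank` is an assumption; the actual device name comes from `zpool status`):

```shell
# Check pool health before touching anything
zpool status tank

# After physically swapping the disk in the same slot, a one-argument
# replace tells ZFS to resilver onto the new device at that path
zpool replace tank c4t2d0

# Watch the resilver progress
zpool status -v tank
```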

The problem was that the third new drive I put in had a hardware fault.  That 
caused both the old and the new drive at that position (c4t2d0) to show as 
FAULTED.  I couldn't put another 1.5TB drive in as a replacement - it would 
still show as a faulted drive.  I couldn't detach the faulted drive, since 
you can't remove a device without enough replicas, and you can't do anything 
else to a pool while a replace is in progress.

The remedy was to put the original drive back in and let it resilver.  Once 
that completed, I put in a new 1.5TB drive and the replacement finished 
normally.
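Roughly, the recovery looked like this (again, the pool name `tank` is an assumption):

```shell
# Reinsert the original 500GB drive in the same slot and resilver it back
zpool replace tank c4t2d0
zpool status tank        # wait until the resilver completes

# Then swap in a known-good 1.5TB drive and run the replace again
zpool replace tank c4t2d0
```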

If I didn't have the original drive (or it was broken) I think I would have 
been in a tough spot.

Has anyone else experienced this - and if so, is there a way to force the 
replacement of a drive that fails during resilvering?
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
