I appreciate everybody's feedback, but I am still unclear on how to proceed. 
Here is a little more information about my setup. Specifically, here is my 
zpool status: 

        NAME           STATE     READ WRITE CKSUM
        tank           DEGRADED     0     0     0
          mirror-0     ONLINE       0     0     0
            c10t0d0    ONLINE       0     0     0
            c11t0d0    ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            c10t1d0    ONLINE       0     0     0
            c11t1d0    ONLINE       0     0     0
          mirror-2     ONLINE       0     0     0
            c10t2d0    ONLINE       0     0     0
            c11t2d0    ONLINE       0     0     0
          mirror-3     ONLINE       0     0     0
            c10t3d0    ONLINE       0     0     0
            c11t3d0    ONLINE       0     0     0
          mirror-4     DEGRADED     0     0     0
            c10t4d0    ONLINE       0     0     0
            spare-1    DEGRADED     0     0     0
              c11t4d0  REMOVED      0     0     0
              c10t6d0  ONLINE       0     0     0
          mirror-5     ONLINE       0     0     0
            c10t5d0    ONLINE       0     0     0
            c11t5d0    ONLINE       0     0     0
          c8d1         ONLINE       0     0     0
        spares
          c10t6d0      INUSE     currently in use

In the past I have physically replaced the failed drive and then run these commands:

# zpool replace tank c11t4d0 
# zpool clear tank 

In this situation I would like to know if I can hold off on physically 
replacing the drive. Is there a safe way to test it, or to put it back into 
service and see whether it fails again? 
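
To be concrete, here is the sequence I had in mind for putting the removed disk back into service without swapping hardware. This is only a sketch based on my reading of the zpool man page, so please correct me if any step is wrong or unsafe: 

# zpool online tank c11t4d0    (ask ZFS to reattach the REMOVED device) 
# zpool clear tank             (reset the pool's error counters) 
# zpool scrub tank             (verify every block; watch for new errors) 
# zpool status -v tank         (check READ/WRITE/CKSUM counts after the scrub) 

My assumption is that if the resilver and scrub complete cleanly, the hot spare c10t6d0 could then be detached with zpool detach and returned to the spares list, but I am not certain that is the right order of operations. 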

Thank you, 

----- Original Message -----

From: "Chris Dunbar - Earthside, LLC" <cdun...@earthside.net> 
To: zfs-discuss@opensolaris.org 
Sent: Tuesday, November 27, 2012 8:56:35 PM 
Subject: [zfs-discuss] Question about degraded drive 


I have a degraded mirror set, and this has happened a few times (not always 
the same drive) over the last two years. In the past I replaced the drive, 
ran zpool replace, and all was well. I am wondering, however, if it is safe 
to run zpool replace without replacing the drive, to see whether it has in fact 
failed. On traditional RAID systems I have had drives drop out of an array but 
be perfectly fine; adding them back to the array returned the drive to service 
and all was well. Does that approach work with ZFS? If not, is there another 
way to test the drive before making the decision to yank and replace? 

Thank you! 
zfs-discuss mailing list 
