Sorry, I was skipping bits to get to the main point. I did use 'zpool replace'
(as previously instructed on the list). I think that worked because my spare
had taken over for the failed drive. That's the same situation now - a spare
is in service for the failed drive.

Sent from my iPhone

On Nov 27, 2012, at 9:08 PM, Freddie Cash <fjwc...@gmail.com> wrote:

> You don't use 'zpool replace' on mirror vdevs.
> 
> 'zpool detach' the failed drive. Then 'zpool attach' the new drive.
> 
> On Nov 27, 2012 6:00 PM, "Chris Dunbar - Earthside, LLC" 
> <cdun...@earthside.net> wrote:
>> Hello,
>> 
>> I have a degraded mirror set, and this has happened a few times (not 
>> always the same drive) over the last two years. In the past I replaced the 
>> drive and ran 'zpool replace' and all was well. I am wondering, however, 
>> if it is safe to run 'zpool replace' without replacing the drive, to see if 
>> it has in fact failed. On traditional RAID systems I have had drives drop 
>> out of an array but be perfectly fine. Adding them back to the array 
>> returned the drive to service and all was well. Does that approach work 
>> with ZFS? If not, is there another way to test the drive before making the 
>> decision to yank and replace?
>> 
>> Thank you!
>> Chris
>> 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss