Gino wrote:
>>>> "cfgadm -al" or "devfsadm -C" didn't solve the problem.
>>>>
>>>> After a reboot, ZFS recognized the drive as failed and all worked well.
>>>>
>>>> Do we need to restart Solaris after a drive failure??
>>
>> It depends...
>> ... on which version of Solaris you are running.  ZFS FMA phase 2 was
>> integrated into SXCE build 68.  Prior to that release, ZFS had a limited
>> view of the (many) disk failure modes -- it would say a disk was failed
>> if it could not be opened.  In phase 2, the ZFS diagnosis engine was
>> enhanced to look for per-vdev soft error rate discriminator (SERD) engines.
>
> Richard, thank you for your detailed reply.
> Unfortunately, another reason to stay with UFS in production ...
>
IMHO, maturity is the primary reason to stick with UFS.  To look at
this through the maturity lens, UFS is the great grandfather living on
life support (prune juice and oxygen) while ZFS is the late adolescent,
soon to bloom into a young adult. The torch will pass when ZFS
becomes the preferred root file system.
 -- richard
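
For reference, the manual check-and-replace sequence being discussed looks roughly like the sketch below. The pool name `tank` and device `c1t2d0` are hypothetical placeholders, not taken from the thread; adjust them for your system.

```shell
# Ask ZFS what it currently thinks of the pool and its vdevs
# (-x prints only pools with problems):
zpool status -x

# On builds with FMA integration (SXCE b68 and later), inspect the
# fault reports the diagnosis engine has logged:
fmdump -v
fmadm faulty

# After physically swapping the bad disk, tell ZFS to resilver
# onto the replacement (same slot, so old and new device match):
zpool replace tank c1t2d0

# Watch resilver progress:
zpool status tank
```

Before build 68, the diagnosis engine only declared a disk failed when it could not be opened, which is why a reboot (forcing a fresh open) was what finally made ZFS notice.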

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
