> >> "cfgadm -al" or "devfsadm -C" didn't solve the
> problem.
> >> After a reboot  ZFS recognized the drive as failed
> and all worked well.
> >>
> >> Do we need to restart Solaris after a drive
> failure??
> 
> It depends...
> ... on which version of Solaris you are running.  ZFS FMA phase 2 was
> integrated into SXCE build 68.  Prior to that release, ZFS had a limited
> view of the (many) disk failure modes -- it would say a disk was failed
> if it could not be opened.  In phase 2, the ZFS diagnosis engine was
> enhanced to look for per-vdev soft error rate discriminator (SERD) engines.
 
Richard, thank you for your detailed reply.
Unfortunately, another reason to stay with UFS in production ...
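For anyone else hitting this on a build older than 68: next time, instead of
rebooting, we will probably first check what FMA and ZFS have actually seen
and then take the disk out of service by hand.  A rough sketch (pool and
device names are just examples, not our real config):

  # does ZFS itself think anything is wrong?
  zpool status -x

  # what has FMA diagnosed, and what raw error reports are behind it?
  fmadm faulty
  fmdump -eV

  # if the dead disk is still shown ONLINE, stop the pool from using it
  zpool offline tank c1t2d0

No guarantee this avoids the reboot in every case, but on a redundant pool
it should at least keep I/O from hanging on the dead drive.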

Gino
 
 