Yes.  What happened is that you had a transient error which resulted in
EIO being returned to the application.  We dutifully recorded this fact
in the persistent error log.  When you ran a scrub, it verified that the
blocks were in fact still readable, and hence removed them from the
error log.  Methinks the recommended action should request a scrub
first.  However, it's bizarre that your drives all showed zero errors.
Are you running build 36 or later?  Can you send me the contents of
/var/fm/fmd/{err,flt}log and /var/adm/messages?
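
Purely as a sketch of what I'm after (assuming the pool name "raidpool"
from your output below, and /tmp as an arbitrary place to stash copies;
fmdump output details can vary between builds):

    # re-check the pool; a clean scrub drops stale entries from the error log
    zpool scrub raidpool
    zpool status -v raidpool

    # dump the FMA error and fault logs in verbose form
    fmdump -eV > /tmp/errlog.txt
    fmdump -V  > /tmp/fltlog.txt

    # raw logs plus the system log
    cp /var/fm/fmd/errlog /var/fm/fmd/fltlog /var/adm/messages /tmp/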

Thanks,

- Eric
 
On Tue, May 09, 2006 at 01:14:31PM -0700, Alan Romeril wrote:
> Eh maybe it's not a problem after all, the scrub has completed well...
> 
> --a
> 
> bash-3.00# zpool status -v
>   pool: raidpool
>  state: ONLINE
>  scrub: scrub completed with 0 errors on Tue May  9 21:10:55 2006
> config:
> 
>         NAME        STATE     READ WRITE CKSUM
>         raidpool    ONLINE       0     0     0
>           raidz     ONLINE       0     0     0
>             c2d0    ONLINE       0     0     0
>             c3d0    ONLINE       0     0     0
>             c4d0    ONLINE       0     0     0
>             c5d0    ONLINE       0     0     0
>             c6d0    ONLINE       0     0     0
>             c6d1    ONLINE       0     0     0
>             c7d0    ONLINE       0     0     0
>             c7d1    ONLINE       0     0     0
> 
> errors: No known data errors
>  
>  

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock