> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
> 
> today I issued a scrub on one of my zpools, and after some time I noticed that
> one of the vdevs became degraded due to a drive having cksum errors.
> The spare kicked in and the drive got resilvered, but why does the spare
> drive now also show almost the same number of cksum errors as the
> degraded drive?
> 
> What would be the best way to proceed? The drive c9t2100001378AC02BFd1
> is the spare, which is tagged as ONLINE, but it shows 23 cksum errors,
> while the drive that became degraded shows only 22 cksum errors.
> 
> What would be the best procedure from here? Would one first run
> another scrub and detach the degraded drive afterwards, or detach the
> degraded drive immediately and run a scrub afterwards?

Either you have two bad disks, or you have (or had) a problem somewhere that 
can span disks, such as the bus, the host bus adapter, or RAM.  Remember, the 
problem doesn't necessarily need to exist now: if there was a problem in the 
past and corrupted data got written to disk, then later (meaning now) a scrub 
would turn up cksum errors because of that previously written corrupted data.
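
As a first diagnostic pass, it can help to see whether the cksum errors map to 
specific files and whether anything below ZFS has logged faults.  A minimal 
sketch, assuming the pool is named "tank" (substitute your own pool name):

  # per-device error counters plus any files with permanent errors
  zpool status -v tank

  # FMA error log: look for underlying I/O, transport, or memory faults
  fmdump -eV | less

  # soft/hard/transport error counters as reported by the sd driver
  iostat -En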

Your first step should be to look for obvious hardware problems, like 
overheating or thick piles of dust.  Assuming you find none, get a new disk and 
zpool replace the degraded one.  If the problem persists, you have to assume 
you have (or had) a problem with something that spans multiple disks.
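
In rough terms, and assuming the pool is called "tank" while c9tOLDd0 / c9tNEWd0 
are placeholder names for the degraded disk and its replacement (the spare's name 
is taken from your output), the replacement could look like this:

  # replace the degraded disk with the new one; the resilver starts automatically
  zpool replace tank c9tOLDd0 c9tNEWd0

  # once the resilver completes the spare should detach on its own;
  # if it doesn't, detach it manually
  zpool detach tank c9t2100001378AC02BFd1

  # reset the error counters, then scrub again to verify the pool is clean
  zpool clear tank
  zpool scrub tank
  zpool status -v tank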
