On 12/18/2011 4:23 PM, Jan-Aage Frydenbø-Bruvoll wrote:
> Hi,
>
> On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert <nat...@tuneunix.com> wrote:
>> I know some others may already have pointed this out - but I can't see it
>> and not say something...
>>
>> Do you realise that losing a single disk in that pool could pretty much
>> render the whole thing busted?
>>
>> At least for me - at the rate at which _I_ seem to lose disks, it would be
>> worth considering something different ;)
>
> Yeah, I have thought that thought myself. I am pretty sure I have a
> broken disk, but I cannot for the life of me find out which one.
> zpool status gives me nothing to work on, MegaCli reports that all
> virtual and physical drives are fine, and iostat gives me nothing
> either.
>
> What other tools are there out there that could help me pinpoint
> what's going on?
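
For reference, a few other places worth looking on an OpenSolaris-era system. The pool name "tank" below is just a placeholder:

    # per-device soft/hard/transport error counters
    iostat -En

    # raw FMA error reports, and any faults already diagnosed by the OS
    fmdump -eV
    fmadm faulty

    # per-vdev read/write/checksum error counters
    zpool status -v tank

The FMA layer will often have logged ereports for a flaky disk well before zpool status shows any errors.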


One choice would be to take a single drive that you believe is in good working condition and attach it as a mirror to each single-disk vdev in turn. The resilver has to read every block on the original disk, so if that disk is bad you will see read errors surface while the mirror is being built. A scrub, though, should really tell you everything you need to know about a failing disk, once the media has degraded to the point that re-reading no longer recovers the data.
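
A minimal sketch of that attach-and-test cycle, assuming a pool named tank, a suspect disk c0t1d0, and a known-good spare c0t9d0 (all placeholder names):

    # attach the spare as a mirror of the suspect disk;
    # the resilver forces a full read of c0t1d0
    zpool attach tank c0t1d0 c0t9d0

    # watch for read/checksum errors against c0t1d0 during the resilver
    zpool status -v tank

    # if the disk checks out, detach the spare and move on to the next one
    zpool detach tank c0t9d0

If a disk is going to fail on a read, the resilver is about as thorough a test of that one vdev as a scrub would be.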

It looks like you've already started mirroring some of the drives. That's really what you should be doing for the remaining single-disk vdevs as well.
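
Converting the remaining single-disk vdevs is the same one-liner per disk, again with placeholder names:

    # turn the single-disk vdev c0t2d0 into a two-way mirror
    zpool attach tank c0t2d0 c0t10d0

ZFS resilvers the new side automatically, and the vdev is redundant once the resilver completes.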

Gregg Wonderly
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
