2012-05-12 4:26, Jim Klimov wrote:
I wonder if things would get better or worse if I kicked one of
the drives (i.e. the hot spare c5t6d0) out of the equation:

        NAME          STATE     READ WRITE CKSUM
        raidz1        ONLINE       0     0     0
          c0t1d0      ONLINE       0     0     0
          spare       ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0  6.72G resilvered
            c5t6d0    ONLINE       0     0     0
          c4t3d0      ONLINE       0     0     0
          c6t5d0      ONLINE       0     0     0
          c7t6d0      ONLINE       0     0     0

While googling around for similar reports, I found a rumour
that in a situation like mine the original disk and the
(partially resilvered) hot spare can conflict, causing the
resilver restarts I am seeing; a poster on some FreeBSD list
reported that kicking the spare out of the pool had helped.

Can some real ZFS gurus please confirm or deny this rumour
before I do a possibly stupid thing and kick the hot-spare
drive out of the equation (and the pool)?
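
(For reference, I guess the command to kick the spare would be
something like the following, with "mypool" just a placeholder,
since my actual pool name is not shown in the status output above:

    # Detach the partially-resilvered hot spare from the raidz1 vdev;
    # an in-use spare returns to the pool's spares list after detach
    zpool detach mypool c5t6d0

...but I'd rather hear from someone who has done this before I try.)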

Also, would booting from an OI LiveCD help resilver the pool in
one go, as it is (without upgrading the pool on-disk), and
would the snv_117 server be able to work with it afterwards? ;)
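
If that is viable, I imagine the LiveCD session would go roughly
like this ("mypool" again a placeholder; the point being to never
run 'zpool upgrade', so snv_117 can still import the pool):

    # Import the pool under the LiveCD without touching its on-disk version
    zpool import -f mypool
    # Watch the resilver run to completion
    zpool status -v mypool
    # Cleanly hand the pool back to the snv_117 host
    zpool export mypool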

Thanks,
//Jim