Additionally, when you DO replace a drive, a resilver occurs (which has
much the same effect as a scrub). Latent errors in rarely used data might
be surfaced during that process. If so, a raidz1 might not be able to
recover them (since one drive is already pulled for the replacement),
while a raidz2 likely would.
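A rough sketch of that cycle (pool and device names here are made up for
illustration, not taken from this thread):

zpool replace tank c0t4d0 c0t9d0   # swap in the new disk; the resilver starts automatically
zpool status -v tank               # watch resilver progress and any checksum errors it turns up
zpool scrub tank                   # a periodic scrub catches latent errors before a drive ever fails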

Andrew Hettinger
http://Prominic.NET  ||  [EMAIL PROTECTED]
Tel:  866.339.3169 (toll free) -or- +1.217.356.2888 x.110 (int'l)
Fax: 866.372.3356 (toll free) -or- +1.217.356.3356 (int'l)
Mobile direct: 1.217.621.2540
CompTIA A+, CompTIA Network+, MCP

[EMAIL PROTECTED] wrote on 07/28/2008 03:39:32 PM:

> On Mon, Jul 28, 2008 at 16:32, Ceri Davies <[EMAIL PROTECTED]> wrote:
> > I'd be interested in any recommendations you might have on raidz
> > configurations in the x4500.
> > for i in 1 2 3 5 6 7; do zpool create poolt$i raidz c0t${i}d0 c1t${i}d0 c5t${i}d0 c6t${i}d0 c7t${i}d0 c8t${i}d0; done
> AIUI, you'd be better off with one large pool, and split it into
> volumes that are shared to the clients.  Then the volumes can share
> bandwidth, which helps when one client is busy and the others are
> idle.  Worth a try, anyway.  Increasing the level of redundancy
> (raidz2, perhaps?) might also be worth looking into if you do go that
> way, since a vdev failure means an entire pool failure in that case.
>
> Will
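For what it's worth, a sketch of the single-pool layout Will describes,
carved into per-client volumes (the pool name, volume sizes, and the
shareiscsi step are assumptions for illustration, not from the thread):

# one pool built from raidz2 vdevs spread across the six controllers
zpool create tank \
  raidz2 c0t1d0 c1t1d0 c5t1d0 c6t1d0 c7t1d0 c8t1d0 \
  raidz2 c0t2d0 c1t2d0 c5t2d0 c6t2d0 c7t2d0 c8t2d0

# carve per-client volumes (zvols) out of the one pool so they share its bandwidth
zfs create -V 500G tank/client1
zfs create -V 500G tank/client2

# export a volume over iSCSI (the OpenSolaris-era property for this)
zfs set shareiscsi=on tank/client1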
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss