On Wed, Sep 30, 2009 at 7:06 PM, Brandon High <bh...@freaks.com> wrote:

> This might have been mentioned on the list already and I can't find it
> now, or I might have misread something and come up with this ...
>
> Right now, using hot spares is a typical method to increase storage
> pool resiliency, since it minimizes the time that an array is
> degraded. The downside is that drives assigned as hot spares are
> essentially wasted. They take up space & power but don't provide
> usable storage.
>
> Depending on the number of spares you've assigned, you could have 7%
> of your purchased capacity idle, assuming 1 spare per 14-disk shelf.
> This is on top of the RAID6 / raidz[1-3] overhead.
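
(Checking that figure: with one spare per 14-disk shelf and the rest in a
single raidz2 vdev, the rough breakdown looks like the sketch below. The
one-vdev-per-shelf layout is my assumption, not something stated above.)

    # Back-of-the-envelope overhead for one 14-disk shelf (illustrative only).
    # Assumed layout: 1 hot spare + one 13-disk raidz2 vdev (2 parity disks).
    disks_per_shelf = 14
    hot_spares = 1
    parity_disks = 2                      # raidz2

    data_disks = disks_per_shelf - hot_spares - parity_disks
    print("spare overhead:  %.1f%%" % (100.0 * hot_spares / disks_per_shelf))    # ~7.1%
    print("parity overhead: %.1f%%" % (100.0 * parity_disks / disks_per_shelf))  # ~14.3%
    print("usable capacity: %.1f%%" % (100.0 * data_disks / disks_per_shelf))    # ~78.6%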
>
> What about using the free space in the pool to cover for the failed drive?
>
> With bp rewrite, would it be possible to rebuild the vdev from parity
> and simultaneously rewrite those blocks to a healthy device? In other
> words, when there is free space, remove the failed device from the
> zpool, resizing (shrinking) it on the fly and restoring full parity
> protection for your data. If online shrinking doesn't work, create a
> phantom file that accounts for all the space lost by the removal of
> the device until an export / import.
>
> It's not something I'd want to do with less than raidz2 protection,
> and I imagine that replacing the failed device and expanding the
> stripe width back to the original would have some negative performance
> implications that would not occur otherwise. I also imagine it would
> take a lot longer to rebuild / resilver at both device failure and
> device replacement. You wouldn't be able to share a spare among many
> vdevs either, but you wouldn't always need to if you leave some space
> free on the zpool.
>
> Provided that bp rewrite is committed, and vdev & zpool shrinks are
> functional, could this work? It seems like a feature most applicable
> to SOHO users, but I'm sure some enterprise users could find an
> application for nearline storage where available space trumps
> performance.
>
> -B
>
> --
> Brandon High : bh...@freaks.com
> Always try to do things in chronological order; it's less confusing that
> way.
>


What are you hoping to accomplish?  You're still going to need a drive's
worth of free space, and if you're so performance-strapped that one drive
makes the difference, you've got bigger problems on your hands.
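
To make that constraint concrete, here's a toy model (plain Python, nothing
to do with actual ZFS internals, and it assumes the bp rewrite / device
evacuation machinery exists):

    # Toy model: a failed device can only be evacuated into the pool's free
    # space if, after losing that device's raw capacity, the remaining free
    # space still covers the data that lived on it.
    def can_shrink_after_failure(device_size, n_devices, used):
        total = device_size * n_devices
        free = total - used
        # Dropping one device costs device_size of raw space, so we need at
        # least that much free before the failure.
        return free >= device_size

    # 14 x 1 TB devices: 10 TB used leaves room to shrink, 13.5 TB does not.
    print(can_shrink_after_failure(1.0, 14, 10.0))   # True
    print(can_shrink_after_failure(1.0, 14, 13.5))   # False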

To me it sounds like complexity for complexity's sake, and it leaves you
with a far less flexible set of options when a drive actually fails.

BTW, you shouldn't need one spare disk per tray of 14 disks.  Unless you've
got some known-bad disks or environmental issues, one spare for every 2-3
trays should be fine.  Quite frankly, if you're doing raidz3, I'd feel
comfortable with one per Thumper.
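
For what it's worth, the spare overhead at those ratios works out like this
(the disk counts per chassis are my assumptions: 14-bay trays and a 48-bay
Thumper-class box):

    # Spare overhead at different spare-to-disk ratios (illustrative).
    ratios = [
        ("1 spare per 14-disk tray",       1, 14),
        ("1 spare per 2 trays (28 disks)", 1, 28),
        ("1 spare per 3 trays (42 disks)", 1, 42),
        ("1 spare per 48-bay chassis",     1, 48),
    ]
    for label, spares, disks in ratios:
        print("%s: %.1f%%" % (label, 100.0 * spares / disks))
    # ~7.1%, ~3.6%, ~2.4%, ~2.1%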

--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
