On Tue, Sep 27, 2011 at 1:21 PM, Matt Banks <mattba...@gmail.com> wrote:

> Also, maybe I read it wrong, but why is it that (in the previous thread about
> hw raid and zpools) zpools with large numbers of physical drives (eg 20+)
> were frowned upon? I know that ZFS!=WAFL but it's so common in the
> NetApp world that I was surprised to read that. A 20 drive RAID-Z2 pool
> really wouldn't/couldn't recover (resilver) from a drive failure? That seems
> to fly in the face of the x4500 boxes from a few years ago.

    There is a world of difference between a zpool with 20+ drives and
a single vdev with 20+ drives. What has been frowned upon is a single
vdev with more than about 8 drives. I have a zpool with 120 drives: 22
raidz2 vdevs of 5 drives each, plus 10 hot spares. The only failures I
had to resilver were before it went into production (and I had little
data in it at the time), but I expect resilver times to be reasonable
based on experience with other configurations I have had.
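
    A layout like that can be built in one zpool create invocation. The
sketch below just assembles the command string for a 120-disk pool (22
five-disk raidz2 vdevs plus 10 spares) so you can eyeball it before
running it; the c0tNd0 device names are hypothetical placeholders, not
from my actual system.

```shell
# Build (but do not run) a zpool create command for:
# 22 raidz2 vdevs x 5 disks = 110 disks, plus 10 hot spares = 120 disks.
# Device names c0t0d0 ... c0t119d0 are placeholders.
disks=$(i=0; while [ $i -lt 120 ]; do echo "c0t${i}d0"; i=$((i + 1)); done)
set -- $disks

cmd="zpool create tank"
v=0
while [ $v -lt 22 ]; do            # one raidz2 vdev per iteration
    cmd="$cmd raidz2 $1 $2 $3 $4 $5"
    shift 5
    v=$((v + 1))
done
cmd="$cmd spare $*"                # the 10 leftover disks become hot spares

echo "$cmd"
```

Review the echoed command, then paste it back to the shell (or wrap the
echo in an eval) once you are happy with the device list.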

    Keep in mind that random read I/O scales with the number of
vdevs, NOT the number of drives. See
for the results of some of my testing.
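
    The scaling point above is easy to put rough numbers on: each raidz
vdev delivers on the order of one disk's worth of random-read IOPS, so
carving the same 110 data disks into more, smaller vdevs buys you more
random read throughput. The 150 IOPS/disk figure below is an assumed
ballpark for a 7200 rpm drive, not a measurement.

```shell
# Back-of-the-envelope: random read IOPS ~ (vdev count) x (per-disk IOPS).
# 150 IOPS/disk is an assumed figure for a 7200 rpm drive.
per_disk_iops=150
for vdevs in 22 11 2; do
    disks_per_vdev=$((110 / vdevs))
    echo "$vdevs raidz2 vdevs of $disks_per_vdev disks: ~$((vdevs * per_disk_iops)) random read IOPS"
done
```

By this estimate the 22-vdev layout gets roughly eleven times the random
read IOPS of two 55-disk vdevs, from the same spindles.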

Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer: Frankenstein, A New Musical
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
zfs-discuss mailing list
