Thanks to all who have responded.  I spent two weekends working through
the best practices that Jerome recommended -- it's quite a lot to digest.

On 8/17/06, Roch <[EMAIL PROTECTED]> wrote:
My general principles are:

        If you can, to improve your 'Availability' metrics,
        let ZFS handle one level of redundancy;

Cool.  This is a good way to take advantage of the error-detection and
correction features in ZFS.  We will definitely take this suggestion!
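
For the record, here is roughly what I am picturing (just a sketch -- the
cXtYd0 names below are placeholders for whatever LUNs the 6920 ends up
presenting to the host):

    # build the pool from whole LUNs and let ZFS mirror them
    zpool create tank mirror c4t0d0 c4t1d0

    # periodic scrubs let ZFS find checksum errors and repair them
    # from the other copy in the mirror
    zpool scrub tank
    zpool status -v tank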

        For random-read performance, prefer mirrors over
        raid-z.  If you use raid-z, group together a smallish
        number of volumes.

        Set up volumes that correspond to a small number of
        drives (the smallest you can bear) with a volume
        interlace in the [1M-4M] range.

I have a hard time picturing this wrt the 6920 storage pool.  The
internal disks in the 6920 present up to 2 VDs per array (6-7 disks
each?).  The storage pool will be built from a bunch of these VDs and
may be further partitioned into several volumes, with each volume
presented to a ZFS host.  What should the storage profile look like?
I can probably do a stripe profile since I can leave the redundancy to
ZFS -- something like the sketches below.
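
For instance, with a plain stripe profile on the array and placeholder LUN
names, the two layouts Roch describes would look roughly like this (sketches
only, not tested on our hardware yet):

    # mirrored pairs of 6920 volumes -- preferred for random reads
    zpool create tank \
        mirror c4t0d0 c4t1d0 \
        mirror c4t2d0 c4t3d0 \
        mirror c4t4d0 c4t5d0

    # or several small raid-z groups, rather than one wide group,
    # if we need the extra usable capacity
    zpool create tank \
        raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 \
        raidz c4t4d0 c4t5d0 c4t6d0 c4t7d0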

To complicate matters, we are likely going to attach all our 3510s to
the 6920 and use some of them for the ZFS volumes, so further
restrictions may apply.  Are we better off doing a direct attach?

And next, a very, very important thing that we will have to
pursue with storage manufacturers, including ourselves:

        In cases where the storage cache is to be considered
        "stable storage" in the face of power failure, we
        have to be able to configure the storage to ignore
        the "flush write cache" commands that ZFS issues.

        Some storage arrays ignore the flush out of the box,
        others don't.  It should be easy to verify the latency
        of a small O_DSYNC write.  On a quiet system, I expect
        sub-millisecond response; 5 ms to a battery-protected
        cache should be red-flagged.

        This was just filed to track the issue:
        6460889 zil shouldn't send write-cache-flush command to <some> devices

Noted.
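
To check where our cache stands, I will probably just time a batch of small
synchronous writes along these lines (assuming a dd that supports oflag=dsync,
e.g. GNU dd, and a scratch pool mounted at /tank):

    # 100 x 8K writes, each with O_DSYNC semantics, timed with ptime(1)
    ptime dd if=/dev/zero of=/tank/dsync-test bs=8k count=100 oflag=dsync
    rm /tank/dsync-test

Dividing the elapsed time by 100 should give a rough per-write latency: well
under 1 ms would suggest the array cache is absorbing the flushes, while
something around 5 ms per write would be the red flag Roch describes.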

Note also that S10U2 has already been greatly improved
performance-wise; tracking releases is very important.

-r


--
Just me,
Wire ...
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
