Darren J Moffat wrote:

Peter Rival wrote:

storage arrays with the same arguments over and over without providing an answer to the customer problem doesn't do anyone any good. So. I'll restate the question. I have a 10TB database that's spread over 20 storage arrays that I'd like to migrate to ZFS. How should I configure the storage array? Let's at least get that conversation moving...


I'll answer your question with more questions:

What do you use just now: UFS, UFS+SVM, VxFS+VxVM, UFS+VxVM, other?

What about that doesn't work for you?

What functionality of ZFS is it that you want to leverage?

It seems that the big thing we all want (relative to the discussion of moving HW RAID to ZFS) from ZFS is the block checksumming (i.e. how to reliably detect that a given block is bad, and have ZFS compensate). Now, how do we get that when using HW arrays without just treating them like JBODs (which is impractical for large SAN and similar arrays that are already configured)?
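As a quick way to see the checksumming at work (just a sketch; "tank" stands in for whatever pool you created), you can force ZFS to read and verify every block in a redundant pool:

   # zpool scrub tank
   # zpool status -v tank

The scrub checks every block against its checksum and repairs anything bad from the redundant copy; zpool status shows the per-device CKSUM error counts so you can see which side of the redundancy was rotting.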

Since the best way to get this is to use a mirror or RAIDZ vdev, I'm assuming the proper ways to get the benefits of both ZFS and HW RAID are the following:

(1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror hwStripe1 hwStripe2"
(2) ZFS RAIDZ of HW mirrors, i.e. "zpool create tank raidz hwMirror1 hwMirror2"
(3) ZFS RAIDZ of HW stripes, i.e. "zpool create tank raidz hwStripe1 hwStripe2"
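To make that concrete (a sketch; the c#t#d# device names are hypothetical LUNs, each one a HW stripe exported by a different array), case (1) would look something like:

   # zpool create tank mirror c2t0d0 c3t0d0

ZFS then mirrors across the two arrays and can repair a bad block on one side from the copy on the other, while each array's controller handles the striping underneath.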

Mirrors of HW mirrors and RAIDZ of HW RAID5 are also possible, but I'm pretty sure they're considerably less useful than the three above.

Personally, I can't think of a good reason to use ZFS with HW RAID5; case (3) above seems to me to provide better performance with roughly the same amount of redundancy (not quite true, but close).

I'd vote for (1) if you need high performance at the cost of disk space, (2) for maximum redundancy, and (3) for maximum space with reasonable performance.
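For (3) you'd generally want more than two stripes in the RAIDZ set, e.g. (again, hypothetical LUN names):

   # zpool create tank raidz c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0

which yields roughly 4/5 of the raw capacity while keeping ZFS's self-healing on top of the arrays' striping.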


I'm making a couple of assumptions here:

(a) you have the spare cycles on your hosts to allow for using ZFS RAIDZ, which is a non-trivial cost (though not that big, folks).
(b) your HW RAID controller uses NVRAM (or battery-backed cache), which you'd like to be able to use to speed up writes.
(c) your HW RAID's NVRAM speeds up ALL writes, regardless of the configuration of arrays in the HW.
(d) having your HW controller present individual disks to the machines is a royal pain (way too many; the HW does other nice things with arrays, etc.).



Erik Trimble
Java System Support
Mailstop:  usca14-102
Phone:  x17195
Santa Clara, CA

