Robert Milkowski wrote:
Hello Peter,

Wednesday, June 28, 2006, 1:11:29 AM, you wrote:

PT> On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:

PT> You really need some level of redundancy if you're using HW raid.
PT> Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT> that. Seems to me that the simplest way to go is to use zfs to mirror
PT> HW raid5, preferably with the HW raid5 LUNs being completely
PT> independent disks attached to completely independent controllers
PT> with no components or datapaths in common.

well, that will give you less than half of your raw storage.
Given the cost, I believe in most cases it won't be acceptable.
People use raid-5 mostly to save money, and you are proposing
something worse (in terms of usable logical storage) than
plain mirroring.
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:

ZFS mirror / HW RAID5:   capacity  =  (N / 2) - 1
                         speed     << (N / 2) - 1
    minimum # disks to lose before loss of data: 4
    maximum # disks to lose before loss of data: (N / 2) + 2

ZFS mirror / HW stripe:  capacity  =  N / 2
                         speed     >= N / 2
    minimum # disks to lose before loss of data: 2
    maximum # disks to lose before loss of data: (N / 2) + 1

Given a reasonable number of hot spares, I simply can't see the (very) marginal increase in safety given by using HW RAID5 as outweighing the considerable speed hit that RAID5 takes.
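For anyone who wants to check the arithmetic behind the table above, here is a minimal sketch (the function and field names are mine, not from this thread) that computes the capacity and failure-tolerance figures for a ZFS mirror over two N/2-disk HW LUNs. "min_loss" is the smallest number of failures that can cause data loss in the worst case; "guaranteed_loss" is the number of failures after which loss is unavoidable regardless of placement.

```python
# Sketch of the failure-tolerance arithmetic from the table above.
# Assumes N disks split into two HW LUNs of N/2 disks each,
# mirrored at the ZFS layer.

def mirror_of_raid5(n):
    """ZFS mirror over two HW RAID5 LUNs of n/2 disks each."""
    half = n // 2
    return {
        "capacity": half - 1,         # each RAID5 side yields n/2 - 1 disks of space
        "min_loss": 4,                # worst case: 2 failed disks on each side
        "guaranteed_loss": half + 2,  # any n/2 + 2 failures put >= 2 on both sides
    }

def mirror_of_stripe(n):
    """ZFS mirror over two HW stripes of n/2 disks each."""
    half = n // 2
    return {
        "capacity": half,             # stripes have no parity overhead
        "min_loss": 2,                # worst case: 1 failed disk on each side
        "guaranteed_loss": half + 1,  # any n/2 + 1 failures hit both stripes
    }

for n in (8, 12):
    print(n, mirror_of_raid5(n), mirror_of_stripe(n))
```

For N = 8, the RAID5 mirror yields 3 disks of usable space while the stripe mirror yields 4, which is the capacity gap the post is arguing about.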

Robert -

I would definitely like to see the difference between read on HW RAID5 vs read on RAIDZ. Naturally, one of the big concerns I would have is how much RAM is needed to avoid any cache starvation on the ZFS machine. I'd discount the NVRAM on the RAID controller, since I'd assume that it would be dedicated to write acceleration, and not for read. My big problem right now is that I only have an old A3500FC to do testing on, as all my other HW RAID controllers are IBM ServerRAIDs, for which the Solaris driver isn't really the best.


-Erik



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
