I don't like to top-post, but there's no better way right now.  This issue has 
recurred several times and there have been no answers to it that cover the 
bases.  The question is, say I as a customer have a database, let's say it's 
around 8 TB, all built on a series of high-end storage arrays that _don't_ 
support the JBOD everyone seems to want - what is the preferred configuration 
for my storage arrays to present LUNs to the OS for ZFS to consume?

Let's say our choices are RAID0, RAID1, RAID0+1 (or 1+0), and RAID5 - that covers 
about the full range of what these arrays offer.  What should I, as a customer, do?  
Should I create RAID0 sets and let ZFS self-heal via its own mirroring or RAIDZ 
when a disk blows in the set?  Should I use RAID1 and eat the disk-space overhead?  
Or RAID5, be thankful I have a large write cache - and then which type of ZFS 
pool should I create on top of it?
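
To make that concrete, here's roughly what I'm weighing - the pool and LUN names 
below are made up, it's the layouts that are the real question:

    # Arrays present unprotected (RAID0) LUNs; ZFS supplies the redundancy:
    zpool create dbpool mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0
    # ...or RAIDZ across LUNs from different arrays:
    zpool create dbpool raidz c2t0d0 c3t0d0 c4t0d0 c5t0d0

    # Arrays do RAID1 or RAID5 internally; ZFS just stripes the protected LUNs:
    zpool create dbpool c2t0d0 c3t0d0 c4t0d0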

See, telling folks "you should just use JBOD" when they don't have JBOD, and have 
invested millions to get to the state they're in - efficiently utilizing their 
storage via a SAN infrastructure - is just a waste of everyone's time.  Shouting 
down the advantages of storage arrays with the same arguments over and over, 
without providing an answer to the customer's problem, doesn't do anyone any good.  
So I'll restate the question: I have a 10 TB database spread over 20 storage arrays 
that I'd like to migrate to ZFS.  How should I configure the storage arrays?  Let's 
at least get that conversation moving...

- Pete

Gregory Shaw wrote:
Yes, but the idea of using software RAID on a large server doesn't make sense in modern systems. If you've got a large database server running a large Oracle instance, using CPU cycles for RAID is counterproductive. Add to that the need to manage the hardware directly (drive microcode, drive brownouts/restarts, etc.), and the idea of using JBOD in modern systems starts to lose value in a big way.

You will still detect any corruption when doing a scrub. It's not end-to-end, but it's no worse than what you get today with VxVM.
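
Roughly speaking (the pool name here is just an example):

    zpool scrub dbpool          # read every block and verify its checksum
    zpool status -v dbpool      # report any checksum errors the scrub found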

On Jun 26, 2006, at 6:09 PM, Nathanael Burton wrote:

If you've got hardware raid-5, why not just run regular (non-raid) pools on top of the raid-5?

I wouldn't go back to JBOD.  Hardware arrays offer a number of advantages over JBOD:
    - disk microcode management
    - optimized access to storage
    - large write caches
    - RAID computation can be done in specialized hardware
    - SAN-based hardware products allow sharing of storage among multiple hosts.  This allows storage to be utilized more effectively.


I'm a little confused by the first poster's message as well, but you lose some of the benefits of ZFS if you don't create your pools with either mirroring (RAID1) or RAIDZ - in particular, the ability to repair the data corruption ZFS detects. The array isn't going to catch that corruption, because all it knows about are blocks.
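
For instance (device and pool names made up), a pool built as a ZFS mirror of two array LUNs gives ZFS a second copy to repair from, whereas a single-LUN pool on top of hardware RAID5 does not:

    # ZFS-level redundancy on top of the array: bad blocks can be rewritten from the other side
    zpool create dbpool mirror c2t0d0 c3t0d0

    # No ZFS-level redundancy: a checksum error is reported but can't be repaired by ZFS
    zpool create dbpool c2t0d0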

-Nate



-----
Gregory Shaw, IT Architect
Phone: (303) 673-8273        Fax: (303) 673-8273
ITCTO Group, Sun Microsystems Inc.
1 StorageTek Drive MS 4382              [EMAIL PROTECTED] (work)
Louisville, CO 80028-4382                 [EMAIL PROTECTED] (home)
"When Microsoft writes an application for Linux, I've Won." - Linus Torvalds



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
