On 9/4/07 4:34 PM, "Richard Elling" <[EMAIL PROTECTED]> wrote:

> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss@opensolaris.org in the CC for the
> original...
> 
> Andy Lubel wrote:
>> Hi All,
>> 
>> I have been asked to implement a ZFS-based solution using a StorEdge 6130, and
>> I'm chasing my own tail trying to decide how best to architect this.  The
>> storage space is going to be used for database dumps/backups (nearline
>> storage).  What is killing me is that I must mix hardware RAID and ZFS..
> 
> Why should that be killing you?  ZFS works fine with RAID arrays.

What kills me is that I have a choice, and it was hard to decide which one
should sit at the top of the totem pole.  From now on I only want JBOD!

It works even better when I export each of the 14 disks in the array as its own
single-disk RAID-0 volume and then create the zpool :)

# zpool create -f vol0 c2t1d12 c2t1d11 c2t1d10 c2t1d9 c2t1d8 c2t1d7 c2t1d6 \
    c2t1d5 c2t1d4 c2t1d3 c2t1d2 c2t1d1 c2t1d0 spare c2t1d13
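
To sanity-check the layout, a quick look at the pool is enough:

# zpool status vol0
# zpool list vol0

zpool status should show the 13 data disks as a plain dynamic stripe, with
c2t1d13 listed under "spares".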
> 
>> The StorEdge shelf has 14 FC 72 GB disks attached to a Solaris snv_68 host.
>> 
>> I was thinking that since I can't export all the disks un-RAIDed out to the
>> Solaris system, I would instead:
>> 
>> (on the 6130)
>> Create 3 RAID-5 volumes of 200 GB each using the "Sun_ZFS" pool (128k segment
>> size, read-ahead enabled, 4 disks each).
>> 
>> (On the snv_68)
>> Create a RAID-0 stripe with ZFS across the 3 volumes from the 6130, using
>> the same 128k stripe size.
> 
> OK
> 
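Just to make the layering concrete, the ZFS side of that plan is nothing more
than a dynamic stripe over the three 6130 LUNs, along these lines (the LUN
device names here are made up):

# zpool create vol0 c3t0d0 c3t0d1 c3t0d2
# zfs create vol0/dbdumps

ZFS writes 128k records by default, so the recordsize already lines up with the
128k segment size on the array without any extra tuning.
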
>> It seemed to me that if I was going to go for redundancy with a mixture of
>> ZFS and hardware RAID, I would put the redundancy in the hardware RAID and
>> use striping at the ZFS level.  Is that the best way to think about it?
> 
> The way to think about this is that ZFS can only correct errors when it has
> redundancy.  By default, for dynamic stripes, only metadata is redundant.
> You can set the copies parameter to add redundancy on a per-file system basis,
> so you could set a different policy for data you really care about.
> 
Makes perfect sense.  Since this is a nearline backup solution, I think we
will be OK with a dynamic stripe.  Once I get approval for a Thumper I'm
definitely going to go raidz2.  Since we are a huge Sun partner, it should
be easier than it has been :(
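
For the handful of datasets we actually do care about, the copies knob Richard
mentioned is a one-liner (dataset name below is just an example):

# zfs set copies=2 vol0/dbdumps
# zfs get copies vol0/dbdumps

Everything else can stay at copies=1 and ride on the dynamic stripe.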

>> The only requirement I've gotten so far is that it can be written to and read
>> from at a minimum of 72 MB/s locally and 1 GB per 35 seconds via NFS.  I
>> suspect I would need at least 600 GB of storage.
> 
> I hope you have a test case for this.  It is difficult for us to predict
> that sort of thing because there are a large number of variables.  But in
> general, to get high bandwidth, you need large I/Os.  That implies the
> application is responsible for its use of the system, since the application
> is the source of I/Os.
> 
It's all going to be accessed via NFS and eventually iSCSI, as soon as we
figure out how to back up iSCSI targets from the SAN itself.
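
A crude test case for the 72 MB/s local and 1 GB-per-35-seconds NFS numbers is
just a timed dd with large blocks, run locally and then again from an NFS
client (paths and sizes are only examples):

# time dd if=/dev/zero of=/vol0/dbdumps/testfile bs=128k count=8192
# time dd if=/vol0/dbdumps/testfile of=/dev/null bs=128k

That is 1 GB each way; it is not a real benchmark, but it will catch a badly
mis-sized configuration before the databases ever touch it.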

>> Anyone have any recommendations?  The last time I tried to create one 13-disk
>> RAID-5 with a ZFS file system on it, the performance was terrible via NFS.
>> But when I shared an NFS file system backed by a raidz or a mirror, things
>> were much better, so I'm nervous about doing this with only one volume in the
>> ZFS pool.
> 
> A 13-disk RAID-5 will suck.  Try to stick with fewer devices in the set.
> 
> See also
> http://mail.opensolaris.org/pipermail/zfs-discuss/2006-December/024194.html
> http://blogs.digitar.com/jjww/?itemid=44
> 

I can't find a SANtricity download that will work with a 6130, but that's
OK.. I just created 14 volumes per shelf :)  Hardware RAID is so yesterday.
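
If we ever do want the redundancy back at the ZFS layer instead, Richard's
"fewer devices in the set" advice would translate into something like two
smaller raidz groups plus the hot spare (same device names as the pool above,
with c2t1d12 left over):

# zpool create -f vol0 raidz c2t1d0 c2t1d1 c2t1d2 c2t1d3 c2t1d4 c2t1d5 \
    raidz c2t1d6 c2t1d7 c2t1d8 c2t1d9 c2t1d10 c2t1d11 spare c2t1d13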

> That data is somewhat dated, as we now have the ability to put the ZIL
> on a separate log device (Nevada b70 or later).  This will be more obvious
> if the workload creates a lot of small files, and is less of a performance
> problem for large files.
>   -- richard
> 

Got my hands on a 64 GB RamSan SSD and I'm using that for the ZIL.  It's
crazy fast now.
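
For anyone trying the same thing on b70 or later, hanging an SSD off an
existing pool as a separate log device is a single command (the SSD's device
name here is hypothetical):

# zpool add vol0 log c4t0d0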

-Andy