I also found a very thorough review of the 3Ware 9500S controller that corroborates the 40-100% performance improvement of write-back over write-through, depending on workload:

http://www.xbitlabs.com/articles/storage/display/3ware-9500s8.html

So it seems that md RAID with the write-back cache patches, plus a robust UPS-notification mechanism feeding the md driver, would make moot the question of the performance difference between UPS-protected software RAID and battery-backed cache on hardware RAID.

I for one look forward to more movement on the write-back patches to md and how they might be easily incorporated into OF appliances.

-=dave

----- Original Message ----- From: "Dave Johnson" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Monday, July 23, 2007 10:34 AM
Subject: Re: [OF-users] Getting I2O RAID to work


You may want to read this thread and try the respective patches. The best improvement claimed by testing is for writes where cache pressure is low (cache size is much larger than the write-back fill rate): approximately 1.8x, i.e. 80% faster, on Linux md:

http://thread.gmane.org/gmane.linux.raid/14997

The aforementioned thread concerns the write-back cache patches for md.

I for one fail to see how write-back caching on a UPS-protected system using the Linux md driver is any different from a hardware battery-backed write-back cache, except that providing the same amount of protection time for the whole Linux system costs considerably more than a battery for the hardware cache alone. Furthermore, the UPS can be monitored with NUT (Network UPS Tools) and md told to disable its write-back cache upon the transition to battery power when utility power fails. Comments?
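
As a rough sketch of what that hook could look like: the script below
assumes NUT's upsmon NOTIFYCMD interface (upsmon exports NOTIFYTYPE in
the environment when an event has the EXEC flag set). The script name,
the member-disk list, and the use of hdparm -W to toggle the on-drive
write caches are all my own stand-ins; a patched md would presumably
expose its own write-back knob, which I am not guessing at here.

#!/usr/bin/perl
# md-cache-toggle.pl -- hypothetical NUT NOTIFYCMD hook (sketch only).
# upsmon.conf would need something along the lines of:
#   NOTIFYCMD  /usr/local/sbin/md-cache-toggle.pl
#   NOTIFYFLAG ONBATT SYSLOG+EXEC
#   NOTIFYFLAG ONLINE SYSLOG+EXEC
use strict;
use warnings;

my @members = glob('/dev/sd[a-d]');   # adjust to your array's member disks

sub set_write_cache {
    my ($enable) = @_;
    for my $dev (@members) {
        # hdparm -W1 enables the drive write cache, -W0 disables it
        system('hdparm', $enable ? '-W1' : '-W0', $dev) == 0
            or warn "hdparm on $dev failed\n";
    }
}

my $event = $ENV{NOTIFYTYPE} || '';
set_write_cache(0) if $event eq 'ONBATT';   # utility power lost: play safe
set_write_cache(1) if $event eq 'ONLINE';   # utility power back: go fast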

-=dave

----- Original Message ----- From: "Rafiu Fakunle" <[EMAIL PROTECTED]>
To: "Dave Watkins" <[EMAIL PROTECTED]>
Cc: <[email protected]>
Sent: Monday, July 23, 2007 2:38 AM
Subject: Re: [OF-users] Getting I2O RAID to work


Dave Watkins wrote:
Excuse the top posting, I'm stuck on OWA :)

There are still some additional benefits if you really need speed: the
onboard memory that hardware RAID cards have, and the ability to add a
BBU so you can turn on write-back caching and get a nice speed boost
for doing so.

It'd be interesting to see some numbers for SAS RAID with cache and
write-back enabled vs softraid...
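
Getting rough numbers doesn't take much. A crude fsync-per-write loop
like the sketch below (file name, block size and counts are arbitrary)
shows the write-back vs write-through gap clearly on synchronous
workloads, since with a battery-backed write-back cache the fsync
returns as soon as the data hits controller RAM:

#!/usr/bin/perl
# Crude synchronous-write micro-benchmark (illustrative only).
# Writes N 4KiB blocks, fsyncs after each one, reports writes/sec.
# Point it at a file on the array under test.
use strict;
use warnings;
use IO::Handle;
use Time::HiRes qw(time);

my $path  = shift or die "usage: $0 <file-on-array> [count]\n";
my $count = shift || 1000;
my $block = 'x' x 4096;

open my $fh, '>', $path or die "open $path: $!\n";
$fh->autoflush(1);                    # push each block straight to the OS

my $t0 = time();
for (1 .. $count) {
    print {$fh} $block or die "write: $!\n";
    $fh->sync          or die "fsync: $!\n";   # force it down to the device
}
my $elapsed = time() - $t0;

close $fh;
unlink $path;
printf "%d 4KiB synchronous writes in %.2fs (%.0f writes/s)\n",
       $count, $elapsed, $count / $elapsed;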


R.

Dave

------------------------------------------------------------------------
*From:* [EMAIL PROTECTED] on behalf of Rafiu
Fakunle
*Sent:* Sat 7/21/2007 3:13 p.m.
*To:* Dave Johnson
*Cc:* [email protected]
*Subject:* Re: [OF-users] Getting I2O RAID to work

Dave Johnson wrote:
> I for one would like to start the trend of ceasing to call these
> "RAID" cards, since they aren't,

Hear hear.

> and instead calling them exactly what they are,

Lemons! ;)

> storage device controller cards with an integrated -but for all
> intents and purposes- outboard XOR Offload Engine (XOE).
>
> XOE (zo-wee) cards are notorious for failing on you at the most
> inopportune time.  I 2nd Rafiu's recommendation to go with a complete
> RAID subsystem which includes XOE, storage device controller, and data
> transport IO processor all in one complete card.  Or simply don't use
> the XOE on the card and use only the storage controller portion,
> relying instead on the RAID support within the LVM2 component, which
> has been designed and vetted for safety by considerably more testing
> than Promise's 5 beta testers in Taiwan.
>
> That XOE can be a biotch ! =P
>

You know, with the average CPU having at minimum 2 cores, the only
advantage to using a RAID controller these days - when you weigh
softraid against the management overhead of hardraid - is the
hot-swap capability. Otherwise a decent SAS controller + MD RAID and
you're good to go. External RAID is a different matter of course,
especially when you get into the realm of shared storage.
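
For what it's worth, the day-to-day management side of md really is
light. Just as an illustrative sketch (mdadm --monitor does this job
properly and can mail you itself), a cron-able health check fits in a
few lines of perl:

#!/usr/bin/perl
# Quick-and-dirty md health check (sketch only; prefer mdadm --monitor).
# Flags any array whose member-status string in /proc/mdstat contains
# an underscore, i.e. a failed or missing member.
use strict;
use warnings;

open my $mdstat, '<', '/proc/mdstat' or die "cannot read /proc/mdstat: $!\n";
my ($array, $degraded) = ('', 0);
while (my $line = <$mdstat>) {
    $array = $1 if $line =~ /^(md\d+)\s*:/;
    if ($line =~ /\[([U_]+)\]/ && $1 =~ /_/) {
        warn "$array is degraded: [$1]\n";
        $degraded = 1;
    }
}
close $mdstat;
exit $degraded;   # non-zero exit if anything is degraded (cron-friendly)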


R.

> -=dave
>
> ----- Original Message ----- From: "Rafiu Fakunle" <[EMAIL PROTECTED]>
> To: "Jim Kusznir" <[EMAIL PROTECTED]>
> Cc: <[email protected]>
> Sent: Friday, July 20, 2007 3:48 PM
> Subject: Re: [OF-users] Getting I2O RAID to work
>
>
>> Jim Kusznir wrote:
>>> Hi all:
>>>
>>> After over a week of messing with OpenFiler, I think I'm finally close
>>> to getting my hardware I2O RAID card working (Promise SX6000). I had
>>> to upgrade the kernel, as the version shipping with OF has a bug in it
>>> that breaks all I2O RAID cards. I don't need iSCSI target for now, so
>>> I thought this was acceptable.
>>
>> Don't use i2o cards ;)
>>
>> Stay away from anything named "Promise" or "Highpoint" if you want OF to
>> play nice with the RAID controller.
>> Cards we tend to play better with are:
>>
>> 1) 3Ware / AMCC
>> 2) Areca
>> 3) LSI Logic / Intel
>> 4) Adaptec
>> 5) ICP Vortex
>>
>>>
>>> Now that I have it showing up in my system, things are close.  The
>>> problem is I2O raid devices are created in /dev/i2o/hd* (my case:
>>> /dev/i2o/hda -- this is NOT the same disk as my system disk:
>>> /dev/hda). So, under "Physical Volumes", it does not see it, and thus
>>> I can't partition or use it.  I have verified that the volume itself
>>> works by partitioning and formatting it directly.  Both operations
>>> completed successfully, verifying the RAID driver's functionality.
>>>
>>> So my question at this point is: how do I get OF to see the disk and
>>> be able to create volumes and such. Or, if I partition and set up LVM
>>> by hand, will it pick it up at that point?  If so, what are its
>>> requirements to make it seen?
>> Here's a patch to /opt/openfiler/sbin/list-disks.pl
>>
>> --- list-disks.pl.orig  2007-07-20 15:18:22.000000000 -0700
>> +++ list-disks.pl       2007-07-20 15:26:24.000000000 -0700
>> @@ -123,6 +123,7 @@
>>                                close(MEDIA);
>>                                if ($media =~ /^disk/ && !$_[0]) {
>>                                        push(@devs, "/dev/hd$n");
>> +                                       push(@devs, "/dev/i2o/hd$n");
>>                                        }
>>                                }
>>                        }
>>
>>
>>
>> R.


_______________________________________________
Openfiler-users mailing list
[email protected]
https://lists.openfiler.com/mailman/listinfo/openfiler-users
