On Sun, Apr 26, 2015 at 10:16:44PM +1000, Russell Coker wrote:
> > stupid crap like this is one of the reasons why HW raid cards should
> > be avoided. this is an anti-feature that serves only HP by locking
> > customers in to their products.
>
> I think that HP RAID supports a purported industry standard for such
> things, so it's not just them.  Also if you have the RAID metadata
> at the front of the disk then a RAID volume can't be accidentally
> mounted as non-RAID.  In the early days of Linux Software RAID it was
> a feature that you could mount half of a RAID-1 array as a non-RAID,
> but that had serious potential for data loss if you made a mistake.
> Now Linux Software RAID usually defaults to the version 1.2 format
> which stores the metadata near the start of the disk.
>
> So your criticism of HP RAID can be applied to Linux Software RAID.

i know for a fact, because i've done it many times, that i can take
software raid drives from one system and put them in another without any
hassle at all. 
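
for example, on the new machine (a minimal sketch - device names and
the "oldhost" array name are placeholders for whatever your system
actually shows):

    # scan the attached drives for md superblocks and report what's there
    mdadm --examine --scan
    #   ARRAY /dev/md/0 metadata=1.2 UUID=... name=oldhost:0

    # assemble every array found on the attached drives
    mdadm --assemble --scan

    # make the assembly persistent across reboots (debian-style systems)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u

that's the whole job - the superblocks on the drives carry everything
mdadm needs, no matter what controller or box they're plugged into.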

have you, or anyone else, actually done that with, say, a raid array
from HP being moved to an adaptec controller? or from any proprietary
HW RAID card to another brand? in my experience it's usually not even
possible when moving to a newer model of the same brand.


see also my last message on the flexibility advantages of SW RAID over
HW RAID.

> If you buy a HP server to run something important that needs little
> down-time then you probably have just that.  If your HP server doesn't
> need such support guarantees then you can probably deal with a delay
> in getting a new RAID card.

if you don't need such support guarantees, then why even use a
brand-name server?

you get better performance and much better value for money with
non-branded server hardware that you either build yourself or pay one of
the specialist server companies to build for you.


> > that still doesn't make hardware raid a better or even good
> > solution, just a tolerable one.
> >
> > for raid-1 or 10, software raid beats the hell out of HW raid,
>
> For RAID-5 and RAID-6 a HP hardware RAID with battery backed
> write-back cache vastly outperforms any pure software RAID
> implementation.

i used to have exactly the same opinion - battery-backed or flash-based
write caches meant that HW RAID was not only much better but absolutely
essential for RAID-5 or RAID-6, because write performance on RAID-5/6
really sucks without write caching.

but now ZFS can use an SSD (or other fast block device) as a dedicated
ZIL (log) device, and kernel modules like bcache[1] and facebook's
flashcache[2] can provide the same kind of caching using any fast block
device for any filesystem.

so, that one advantage is gone, and has been for several years now.

[1] http://en.wikipedia.org/wiki/Bcache
[2] http://en.wikipedia.org/wiki/Flashcache
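
for instance (a minimal sketch - the pool name "tank" and the device
names are hypothetical; get the real cache-set uuid from the
make-bcache output or from bcache-super-show):

    # ZFS: add a fast device as a dedicated log (ZIL) device
    zpool add tank log /dev/nvme0n1

    # bcache: put an SSD cache in front of a slow disk
    make-bcache -B /dev/sdb        # format the slow backing device
    make-bcache -C /dev/nvme0n1    # format the fast cache device
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # then mkfs and mount /dev/bcache0 as usual

with cache_mode set to writeback, bcache absorbs the small random
writes that make RAID-5/6 so slow - which is exactly what the
battery-backed cache on a HW RAID card was buying you.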

at the moment, the fastest available block devices are PCI-e SSDs (or
PCI-e battery-backed RAMdisks). in the not-too-distant future, they'll
be persistent-RAM devices running at roughly the same speed as current
RAM.  Linux Weekly News[3] has had several articles on Linux support for
them over the last few years. ultimately, i expect even bulk storage
will be persistent-RAM devices, but initially it will be cheaper to have
persistent-RAM caching in front of magnetic disks or SSDs.


[3] search for 'NVM' at https://lwn.net/Search/

craig

-- 
craig sanders <[email protected]>