Re: [Linux-users] IBM x3400 Server, RAID Decision

2015-04-27 Thread Jim Cheetham
I agree with Volker, but for slightly different reasons.

Software RAID-1 is the only way to go :-)

Unless you already understand the trade-offs, RAID-1 has the most
stable and understandable failure mode. (Those trade-offs include
knowing that the I in RAID stands for Inexpensive, so using RAID-5
just to get more disk space is doomed to messy failure: the MTBF of
disks these days is out of balance with their storage capacity, i.e.
it takes too long to rebuild RAID-5, and almost too long even to
rebuild RAID-6.)
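
To put rough numbers on that rebuild problem (a back-of-envelope
sketch; the disk size, throughput and error rate are assumptions,
adjust for your hardware):

    4 TB disk / ~150 MB/s sustained  =>  ~26,700 s, about 7.4 hours
                                         (best case, on an idle array)
    URE rate of 1 in 10^14 bits      =>  one expected unrecoverable
                                         read error per ~12.5 TB read

A degraded RAID-5 rebuild has to read every surviving disk in full
with no redundancy left, so on big arrays those two numbers collide.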

Hardware RAID is only stable if you have replacement opportunities for
the controller. Once that goes, the whole array is useless unless you
can get an identical replacement (same FW version too, often).

I've replaced a lot of disks over the years, and once I stopped
messing around and went SW RAID-1 everywhere, disk failures and
replacements became much less stressful. The last one to go was the
boot disk for my new workstation (SSD lifetimes are terribly low) ...
no problems, nothing stopped working, disk replaced (warranty), array
rebuilt, everything happy again in almost no time ...
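
For the record, the whole replacement dance with Linux md is only a
few commands (a sketch, assuming an array /dev/md0 with a dying
member /dev/sdb1; the names are made up):

    mdadm --manage /dev/md0 --fail /dev/sdb1    # mark it failed, if md hasn't already
    mdadm --manage /dev/md0 --remove /dev/sdb1  # drop it from the array
    # ...swap the physical disk, partition it to match, then:
    mdadm --manage /dev/md0 --add /dev/sdb1     # rebuild starts automatically
    cat /proc/mdstat                            # watch the resync tick over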

-jim


On Mon, Apr 27, 2015 at 4:26 PM, Volker Kuhlmann
list0...@paradise.net.nz wrote:
 On Mon 27 Apr 2015 15:07:37 NZST +1200, Peter Simmonds wrote:

 My own logic with software vs hardware RAID is that the Linux SW
 RAID drivers are presumably written in C, whereas the firmware on a
 RAID card is likely written in i386 assembly language and packed
 into flash ROM. C, even with the optimizations the compiler applies,
 I would guess produces much fatter code than i386 assembly would.

 Forget about your language thoughts here, you're in the wrong forest.
 You need to distinguish 3 types of RAID implementation (and discard
 everything the RAID manufacturers' marketroidal spin spurts forth on
 their websites):

 1) Hardware RAID. The data goes over the PCI bus only once, and is
 distributed to the actual disks by a beefy computer on the RAID card
 with plenty of RAM and often battery backup (for that RAM). To the OS
 the card appears as a single drive. You can buy desktop computers for
 less than the cost of one of those cards. If the card dies you buy a
 new one to get your data back.

 2) Software RAID. Data is duplicated by the main CPU and goes over
 the PCI bus once for each disk (when writing; when reading too, if
 your chosen RAID level needs checksumming). There is high flexibility
 in configuring the disks: you don't have to use whole disks for RAID,
 you can use just one partition, and it doesn't even have to be the
 same partition on each disk. If the SATA controller dies you plug the
 disks into another box and are back up and running. Depending on the
 RAID level you can, in a crunch, access the data bypassing the RAID.
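
 To illustrate that flexibility, a minimal sketch (device names made
 up -- two spare partitions become a mirror):

     mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
     mkfs.ext4 /dev/md0     # the array is just another block device
     cat /proc/mdstat       # array state and sync progress

 As for the "in a crunch" part: with 0.90 or 1.0 metadata (superblock
 at the end of the member, unlike the 1.2 default), one half of a
 RAID-1 can be mounted read-only on its own:

     mount -o ro /dev/sdb2 /mnt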

 3) Fake RAID. This combines the worst of 1) and 2). It's what you get
 for 10 bucks extra on a SATA card, or free with your mobo. Promise
 FastCrap(TM) springs to mind prominently (I don't think they've ever
 made anything decent, but they have infinite ways of weasel-wording
 their rubbish into hardware RAID). You need a proprietary driver(!!)
 to be able to use it. Your data flows the same way as in 2). If the
 card dies you are close to f..inished. These are consumer products.

 Either spend the money or make do with Linux SW RAID. Don't fake it, or
 don't whinge if you do.

 The RAID checksumming consists of integer operations on modern CPUs.
 It will be highly optimised (there is assembler in some parts of the
 Linux kernel). You can of course just do a quick install and test
 whether it's fast enough for you.
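
 (The kernel in fact benchmarks its parity/XOR routines at boot and
 picks the fastest for your CPU. A quick way to peek, assuming the md
 RAID code is built in or loaded:

     dmesg | grep -iE 'raid6|xor'   # per-routine throughput in MB/s

 and /proc/mdstat shows the live resync speed once an array exists.)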

 It also depends on what RAID level you're going for. You could also
 upgrade from SW to HW RAID later. I use SW RAID-1 in all my desktops
 because it means I don't have downtime, and it's possible to mirror
 a smaller disk with a larger one (mirroring only part of the larger
 disk, which is intentional).
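
 A sketch of that mixed-size trick (sizes and names assumed): with a
 1 TB /dev/sda and a 2 TB /dev/sdb, make a 1 TB partition /dev/sdb1
 on the big disk, mirror it against /dev/sda1, and the remainder of
 /dev/sdb stays free for scratch space:

     mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1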

 Volker

 --
 Volker Kuhlmann
 http://volker.top.geek.nz/  Please do not CC list postings to me.
___
Linux-users mailing list
Linux-users@lists.canterbury.ac.nz
http://lists.canterbury.ac.nz/mailman/listinfo/linux-users


Re: [Linux-users] IBM x3400 Server, RAID Decision

2015-04-27 Thread Volker Kuhlmann
On Mon 27 Apr 2015 22:47:23 NZST +1200, Jim Cheetham wrote:

 Hardware RAID is only stable if you have replacement opportunities for
 the controller. Once that goes, the whole array is useless unless you
 can get an identical replacement (same FW version too, often).

Oh, even a $4-digit controller is crappy enough to create arrays that
can't be accessed any more with a similar controller from the same
company 2 years later? That removes much of the reason for buying it
in the first place.

 I've replaced a lot of disks over the years, and once I stopped
 messing around and went SW RAID-1 everywhere, disk failures and
 replacements became much less stressful. The last one to go was the
 boot disk for my new workstation (SSD lifetimes are terribly low) ...
 no problems, nothing stopped working, disk replaced (warranty), array
 rebuilt, everything happy again in almost no time ...

Yep, putting any desktop together without SW RAID-1 is not well thought
through. Well, unless $RELATIVE only turns it on twice a week...

Volker

-- 
Volker Kuhlmann
http://volker.top.geek.nz/  Please do not CC list postings to me.
___
Linux-users mailing list
Linux-users@lists.canterbury.ac.nz
http://lists.canterbury.ac.nz/mailman/listinfo/linux-users


[Linux-users] IBM x3400 Server, RAID Decision

2015-04-26 Thread Peter Simmonds

Hi All,

I read the posts about the RAID devices on this particular server.
Pardon my inaction, but I am still at the stage of deciding whether I
should upgrade it or not (and seeing whether any particular
distributions will support it out of the box). Beginner though I am, I
would prefer to get the hardware right first!


My own logic with software vs hardware RAID is that the Linux SW RAID
drivers are presumably written in C, whereas the firmware on a RAID
card is likely written in i386 assembly language and packed into flash
ROM. C, even with the optimizations the compiler applies, I would guess
produces much fatter code than i386 assembly would. And whatever code
is running the RAID array is fundamental to the performance of the
whole system.


With this particular server the RAID device is on a SIMM-like module.
It would be preferable to upgrade this rather than use another precious
expansion slot. What I would look for in a particular module is a CPU
of some sort and memory of some sort, which would be a good indication
that it does the work in hardware(?). Regarding the use of SW RAID,
could I suggest that the SW (probably C) would introduce a huge latency
relative to the speed of the SSDs? Pardon me if I am wrong. Then it
goes back to the idea of using SATA SSDs with a dedicated controller
card.
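
One way to settle the latency question by measurement rather than
guesswork (a rough sketch; the device names are assumed -- compare a
raw SSD against an md array built on it):

    hdparm -t /dev/sda    # buffered sequential read, raw device
    hdparm -t /dev/md0    # same test through the md layer

hdparm -t is crude, but it is a quick first-order check.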


It would be great to get some directions on this from experienced Linux
users. I will of course do my research using the data sheet for this
particular server, though I thought it more prudent to get some advice
first rather than chasing rainbows.


Any advice welcome.

Cheers,

Peter Simmonds

PS Will get onto the SELinux situation later. It is at least basically
working now.




___
Linux-users mailing list
Linux-users@lists.canterbury.ac.nz
http://lists.canterbury.ac.nz/mailman/listinfo/linux-users