Re: Software raid - controller options

2007-11-07 Thread Goswin von Brederlow
Lyle Schlueter [EMAIL PROTECTED] writes:

 Hello,

 I just started looking into software raid with linux a few weeks ago. I
 am outgrowing the commercial NAS product that I bought a while back.
 I've been learning as much as I can, subscribing to this mailing list,
 reading man pages, and experimenting with loopback devices, setting up and
 expanding test arrays.

 I have a few questions now that I'm sure someone here will be able to
 enlighten me about.
 First, I want to run a 12 drive raid 6. Honestly, would I be better off
 going with true hardware raid like the Areca ARC-1231ML vs software
 raid? I would prefer software raid just for the sheer cost savings. But
 what kind of processing power would it take to match or exceed a mid to
 high-level hardware controller?

We are setting up a lustre storage cluster at the moment.

- 60 external dual channel scsi raid boxes
- 16 750G SATA disks per box.
- Total raw capacity 720 TiB
- 20 8-core servers with 3 dual-channel scsi controllers each

We run raid6 on the hardware raid and export it to both channels. Each
scsi channel gives 150-200MiB/s per raid box.

We partitioned each raid box into 6 luns. As each raid box is
connected to 2 servers there are 3 luns for each. Then each server
runs 3 raid6s over the 6 raid boxes (one raid6 per lun). [There are 2
reasons for this: 1) we need 8TiB per filesystem, 2) we need multiple
raids so multiple cores are used in parallel for the raid work.]

Now accessing the 3 raid6s on each server in parallel we get ~500MiB/s
writing and ~800MiB/s reading. On writes the raid boxes are not fully
utilized; the ram throughput and/or cpu speed is the limit there,
meaning that 3 cores will be 100% busy just for the raid.
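
To give an idea of what that looks like, each of the 3 arrays per
server is created roughly like this (just a sketch, the device names
are made up and will differ on your system):

  # one software raid6 across 6 luns, one lun from each external raid box
  mdadm --create /dev/md0 --level=6 --raid-devices=6 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
  # repeat for /dev/md1 and /dev/md2 with the remaining luns
  cat /proc/mdstat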


In conclusion: Software raid can compete just fine and outperform
pretty much any hardware raid, but you pay for it with cpu time.

But this is raid6, which is rather expensive. A raid0 or raid1 costs
hardly any cpu at all, just bandwidth to the controller. I also tested
an external 16-disk SATA JBOD box [meaning real cheap] with a 4 lane
SAS connector [also quite cheap] with software raid0 and measured
980MiB/s throughput on some test system we had lying around. That is
about the limit of the PCIe (or was it PCI-X?) slot the controller
used.

^^^ JBOD - Just a Bunch Of Disks
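
Something along these lines reproduces that kind of measurement (just
a sketch, not the exact commands, device names made up, and the dd
write destroys whatever is on the disks):

  # raid0 across the 16 jbod disks
  mdadm --create /dev/md0 --level=0 --raid-devices=16 /dev/sd[b-q]
  # crude sequential write and read throughput test
  dd if=/dev/zero of=/dev/md0 bs=1M count=20000
  dd if=/dev/md0 of=/dev/null bs=1M count=20000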

 I haven't seen much, if any, discussion of this, but how many drives are
 people putting into software arrays? And how are you going about it?

The MTBF (mean time between failures) goes down exponentially with the
number of disks in a raid. So at some point the chance of 3 disks
failing in your raid6 (and losing the data) becomes bigger than that
of a (specific) single disk failing. I've never actually done the
math, just gut feeling, but I wouldn't do a raid5 over 8 disks, nor a
raid6 over 16 disks. But then again I have never had 24 disks in a
system yet, so I was never tempted.
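
(For a very rough illustration, with a made up per-disk probability p
of a disk dying within one repair window and assuming independent
failures: the chance of at least 3 out of 12 disks dying together is
roughly C(12,3) * p^3 = 220 * p^3, which overtakes the chance p of one
specific disk dying once p gets above about 7%. The real numbers
depend on rebuild times and on failures not being independent.)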

But the risk is the same for software and hardware raid. Would you run
a 24-disk hardware raid6?

 Motherboards seem to max out around 6-8 SATA ports. Do you just add SATA
 controllers? Looking around on newegg (and some googling) 2-port SATA
 controllers are pretty easy to find, but once you get to 4 ports the
 cards all seem to include some sort of built in *raid* functionality.
 Are there any 4+ port PCI-e SATA controller cards?

Pretty much all the cheap SATA cards include raid support nowadays,
but that is all software raid done via the bios, not actual hardware
raid. Under linux you will just see the 2/4/8 disks. You can run all
of them in JBOD mode, and that is what you want: don't use the
pseudo-raid from the cheap cards, use Linux software raid instead, and
just ignore all the raid stuff in the bios.
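
For your 12 drive case that would boil down to something like this
(only a sketch, your device names will differ):

  # all 12 disks visible as plain disks, raid6 done by the md driver
  mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]
  mkfs.ext3 /dev/md0
  # watch the initial resync
  cat /proc/mdstat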

 Are there any specific chipsets/brands of motherboards or controller
 cards that you software raid veterans prefer?

I like my Promise TX4 at home (PCI) and at work I prefer the Marvell
chips. We also have a lot of sil (Silicon Image) chips, but they have
a bug with Seagate disks (it is called the mod 15 bug). Some chip and
disk combinations have to switch to turtle mode (wow, see how fast
that turtle runs :) to avoid data corruption. So do some research
before buying a sil chip.
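
If you want to check which chip and driver a given board or card
actually ends up using, something like

  lspci | grep -i -e sata -e raid
  dmesg | grep -i -e sata_sil -e sata_mv -e sata_promise

will tell you (the driver names in the grep are just examples).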

 Thank you for your time and any info you are able to give me!

 Lyle

MfG
Goswin


Re: Software raid - controller options

2007-11-07 Thread Goswin von Brederlow
Lyle Schlueter [EMAIL PROTECTED] writes:

 Do you know of any concerns of using all the ports on a motherboard?
 Slowdowns or anything like that?

More likely the opposite, but it depends on how the chips are
connected.

On desktop boards the onboard chip is in the north and/or southbridge
and has a better connection to the ram than the add-on controllers can
ever hope for. On server boards the situation is likely the same.
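
If you want to check for slowdowns on a particular box, a quick raw
read test per port is usually enough, e.g.:

  # raw sequential read speed, one disk on the onboard ports and one
  # on the addon card (device names made up)
  hdparm -t /dev/sda
  hdparm -t /dev/sde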

MfG
Goswin


Re: Software raid - controller options

2007-11-06 Thread Lyle Schlueter
Yes, I must have missed that. I've only been on the mailing list for a
week or so. I did go through some of the archives though. I keep my
kernel up to date, usually within a few days of a release. 

The 3ware and Areca cards sound nice, but I could buy quite a few drives
for the price of those cards (for a 12-port card), which is what made me
start seriously considering software raid. Plus, from what I understand,
with software raid it is easier to change out server parts than it is
with hardware raid, i.e. swapping controllers or the motherboard, etc.

After reading a few responses that I have gotten, it sounds like a
budget-based *raid* card from a good vendor with good linux support
might be the best option to get a good number of ports on a PCIe
interface and have it work well with linux, all while being cheaper than
a full-blown hardware raid solution.

Thanks for the info and I will have a look at the cards you mentioned.

Lyle


On Tue, 2007-11-06 at 00:41 -0600, Alberto Alonso wrote:
 You've probably missed a discussion on issues I've been having with
 SATA, software RAID and bad drivers. A clear thing from the responses 
 I got is that you really need to use a recent kernel, as they may have
 fixed those problems.
 
 I didn't get clear responses indicating specific cards that are 
 known to work well when hard drives fail. But if you can deal with
 a server crashing and then rebooting manually then software RAID
 is the way to go. I've always been able to get the servers back
 online even with the problematic drivers.
 
 I am happy with the 3ware cards and do use their hardware RAID to
 avoid the problems that I've had. With those I've fully tested
 16 drive systems with 2 arrays using 2 8-port cards. Others have
 recommended the Areca line.
 
 As for cheap dumb interfaces I am now using the RocketRAID 2220,
 which gives you 8 ports on a PCI-X card. I believe the built-in RAID
 on those is just firmware based, so you may as well use them to
 show the drives in normal/legacy mode and use software RAID on
 top. Keep in mind I haven't fully tested this solution, nor have I
 tested for proper functioning when a drive fails.
 
 Another inexpensive card I've used with good results is the Q-stor
 PCI-X card, but I think this is now obsolete.
 
 Hope this helps,
 
 Alberto
 
 
 On Tue, 2007-11-06 at 05:20 +0300, Lyle Schlueter wrote:
  Hello,
  
  I just started looking into software raid with linux a few weeks ago. I
  am outgrowing the commercial NAS product that I bought a while back.
  I've been learning as much as I can, subscribing to this mailing list,
  reading man pages, and experimenting with loopback devices, setting up and
  expanding test arrays.
  
  I have a few questions now that I'm sure someone here will be able to
  enlighten me about.
  First, I want to run a 12 drive raid 6. Honestly, would I be better off
  going with true hardware raid like the Areca ARC-1231ML vs software
  raid? I would prefer software raid just for the sheer cost savings. But
  what kind of processing power would it take to match or exceed a mid to
  high-level hardware controller?
  
  I haven't seen much, if any, discussion of this, but how many drives are
  people putting into software arrays? And how are you going about it?
  Motherboards seem to max out around 6-8 SATA ports. Do you just add SATA
  controllers? Looking around on newegg (and some googling) 2-port SATA
  controllers are pretty easy to find, but once you get to 4 ports the
  cards all seem to include some sort of built in *raid* functionality.
  Are there any 4+ port PCI-e SATA controller cards?
  
  Are there any specific chipsets/brands of motherboards or controller
  cards that you software raid veterans prefer?
  
  Thank you for your time and any info you are able to give me!
  
  Lyle
  


Software raid - controller options

2007-11-05 Thread Lyle Schlueter
Hello,

I just started looking into software raid with linux a few weeks ago. I
am outgrowing the commercial NAS product that I bought a while back.
I've been learning as much as I can, subscribing to this mailing list,
reading man pages, and experimenting with loopback devices, setting up and
expanding test arrays.

I have a few questions now that I'm sure someone here will be able to
enlighten me about.
First, I want to run a 12 drive raid 6. Honestly, would I be better off
going with true hardware raid like the Areca ARC-1231ML vs software
raid? I would prefer software raid just for the sheer cost savings. But
what kind of processing power would it take to match or exceed a mid to
high-level hardware controller?

I haven't seen much, if any, discussion of this, but how many drives are
people putting into software arrays? And how are you going about it?
Motherboards seem to max out around 6-8 SATA ports. Do you just add SATA
controllers? Looking around on newegg (and some googling) 2-port SATA
controllers are pretty easy to find, but once you get to 4 ports the
cards all seem to include some sort of built in *raid* functionality.
Are there any 4+ port PCI-e SATA controller cards?

Are there any specific chipsets/brands of motherboards or controller
cards that you software raid veterans prefer?

Thank you for your time and any info you are able to give me!

Lyle



Re: Software raid - controller options

2007-11-05 Thread Lyle Schlueter

On Tue, 2007-11-06 at 15:51 +1300, Richard Scobie wrote:
 Lyle Schlueter wrote:
 
 
  Are there any 4+ port PCI-e SATA controller cards?
 
 Hi Lyle,
 
 I've been doing a similar exercise here and have been looking at 
 portmultiplier options using the Silicon Image 3124.
Is a port multiplier a decent option? I looked at the 3124 after you
mentioned it and a few of the other controllers offered by Silicon
Image.

I had been looking at the Adaptec 2240900-R PCI Express and the HighPoint
RocketRAID 2300 PCI Express. These are both *raid* cards, but if they
can be used as regular controller cards, they both provide 4 SATA ports
and are PCI-e. It sounds like the RocketRAID doesn't work with the
2.6.22+ kernel, though (according to newegg reviewers). The Adaptec
sounds like it works quite nicely.
 
 The other possibility is the Marvell based 8 port dumb SATA controller 
 from Supermicro.
 
 http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
 
 It is PCI-X though, but there are plenty of boards around still with 
 these slots.
 
 My only concern here is the opening comment in the driver:
 
  1) Needs a full errata audit for all chipsets.  I implemented most
of the errata workarounds found in the Marvell vendor driver, but
I distinctly remember a couple workarounds (one related to PCI-X)
are still needed.
Sounds pretty iffy there. That Adaptec card I mentioned is going for
about 100 USD, which seems like a lot for 4 ports. But it sounds like it
works nicely with linux, and I would only need 1 or 2 of them (plus the
6 or 8 ports on the motherboard) to be able to use all 12 drives.

Do you know of any concerns of using all the ports on a motherboard?
Slowdowns or anything like that?
 
 Enquiries here previously have not found anyone using this card.
 
 Regards,
 
 Richard

Thanks,

Lyle



Re: Software raid - controller options

2007-11-05 Thread Alberto Alonso

You've probably missed a discussion on issues I've been having with
SATA, software RAID and bad drivers. A clear thing from the responses 
I got is that you really need to use a recent kernel, as they may have
fixed those problems.

I didn't get clear responses indicating specific cards that are 
known to work well when hard drives fail. But if you can deal with
a server crashing and then rebooting manually then software RAID
is the way to go. I've always been able to get the servers back
online even with the problematic drivers.

I am happy with the 3ware cards and do use their hardware RAID to
avoid the problems that I've had. With those I've fully tested
16 drive systems with 2 arrays using 2 8-port cards. Others have
recommended the Areca line.

As for cheap dumb interfaces I am now using the RocketRAID 2220,
which gives you 8 ports on a PCI-X card. I believe the built-in RAID
on those is just firmware based, so you may as well use them to
show the drives in normal/legacy mode and use software RAID on
top. Keep in mind I haven't fully tested this solution, nor have I
tested for proper functioning when a drive fails.
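
If you want to test the failure handling yourself without waiting for
a real failure, md lets you fake one (rough sketch, device names made
up):

  # kick a disk out of the array, then re-add it and watch the rebuild
  mdadm /dev/md0 --fail /dev/sdc
  mdadm /dev/md0 --remove /dev/sdc
  mdadm /dev/md0 --add /dev/sdc
  cat /proc/mdstat

Of course that only exercises md itself, not how the driver behaves
when a drive really dies, which is where my problems came from.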

Another inexpensive card I've used with good results is the Q-stor
PCI-X card, but I think this is now obsolete.

Hope this helps,

Alberto


On Tue, 2007-11-06 at 05:20 +0300, Lyle Schlueter wrote:
 Hello,
 
 I just started looking into software raid with linux a few weeks ago. I
 am outgrowing the commercial NAS product that I bought a while back.
 I've been learning as much as I can, subscribing to this mailing list,
 reading man pages, and experimenting with loopback devices, setting up and
 expanding test arrays.
 
 I have a few questions now that I'm sure someone here will be able to
 enlighten me about.
 First, I want to run a 12 drive raid 6. Honestly, would I be better off
 going with true hardware raid like the Areca ARC-1231ML vs software
 raid? I would prefer software raid just for the sheer cost savings. But
 what kind of processing power would it take to match or exceed a mid to
 high-level hardware controller?
 
 I haven't seen much, if any, discussion of this, but how many drives are
 people putting into software arrays? And how are you going about it?
 Motherboards seem to max out around 6-8 SATA ports. Do you just add SATA
 controllers? Looking around on newegg (and some googling) 2-port SATA
 controllers are pretty easy to find, but once you get to 4 ports the
 cards all seem to include some sort of built in *raid* functionality.
 Are there any 4+ port PCI-e SATA controller cards?
 
 Are there any specific chipsets/brands of motherboards or controller
 cards that you software raid veterans prefer?
 
 Thank you for your time and any info you are able to give me!
 
 Lyle
 
-- 
Alberto Alonso          Global Gate Systems LLC.
(512) 351-7233          http://www.ggsys.net
Hardware, consulting, sysadmin, monitoring and remote backups
