Re: RAID suggestions?
On Fri, 28 Mar 2008, Stephan Seitz wrote:
> On Thu, Mar 20, 2008 at 10:41:27PM -0300, Henrique de Moraes Holschuh wrote:
>> Then DROP the idea of hw-raid. Get a damn good SATA/SCSI/SAS HBA, and use
>> software raid. BTW, damn good means no VIA, SiS, nVidia, or other
>> el-cheap-o half-broken SATA
>
> Can you give some examples for a good SATA HBA?

No, sorry. Usually the ones with the latest SIL devices, or those with hybrid SAS/SATA bridges, are good.

> While I'm quite convinced that software raid is more flexible than
> hardware raid (at least for RAID 1), I know that I can do hotplug stuff
> with my 3ware (or the PERC 5/i in our Dell servers).

A 3ware board is probably a damn good SATA HBA when in JBOD mode...

> And the last time I checked with the kernel SATA support, hotplugging
> disks was not very well supported.

Hmm? It works perfectly, it just complains a damn big lot if you hot-remove a disk *without* issuing a command to detach it from the logical SCSI bus first.

What is damn bad is that any late interrupts from the SATA HBA, regardless of the reason, may cause the kernel to kill an IRQ line and send the entire system into a spiral of ugly death. This is a general Linux issue re. interrupts, though. Maybe MSI-capable HBAs avoid this Linux shortcoming... Note that *any* PCI board using normal PCI IRQs is affected; this includes any HW RAID card. Only, HW RAID cards have something else between the SATA bridges and the host, which will usually eat up stray interrupts :-)

> With my 3ware controller I can use tw_cli or the GUI to rescan for a new
> disk or to remove it, and I use this feature for backup. How would I do
> this with a "normal" SATA controller?

Using the Linux SCSI layer, and mdadm. Look for the scsiadd and mdadm manpages, and also read the documentation on SCSI sysfs (which can do what scsiadd does using IOCTLs). Udev can be used for hotplug notification (insertion).
The hot-UN-plug is the problem: the system doesn't differentiate it from a disk gone bad yet, IME, so you have to scsiadd -r the disk before you pull it out.

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh

-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
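[Editor's note] The detach-before-pull procedure described above can be sketched as a command sequence. All device names (/dev/md0, /dev/sdc1, host2) are placeholders for illustration, not taken from the thread:

```shell
# Sketch only: device and host names are examples. Run as root on a
# system using libata + md software RAID.

# 1. Drop the member from the md array before touching the hardware.
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

# 2. Detach the disk from the SCSI layer so the hot-unplug is clean
#    (this is what "scsiadd -r" does via ioctls; sysfs works too).
echo 1 > /sys/block/sdc/device/delete

# 3. After plugging in the replacement, rescan the SCSI host...
echo "- - -" > /sys/class/scsi_host/host2/scan

# 4. ...and re-add the new partition; md resyncs it automatically.
mdadm /dev/md0 --add /dev/sdc1
```

This is hardware-dependent and cannot be run blindly; the point is only the ordering: md first, SCSI layer second, cable last.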
Re: RAID suggestions?
On Thu, Mar 20, 2008 at 10:41:27PM -0300, Henrique de Moraes Holschuh wrote:
> Then DROP the idea of hw-raid. Get a damn good SATA/SCSI/SAS HBA, and use
> software raid. BTW, damn good means no VIA, SiS, nVidia, or other
> el-cheap-o half-broken SATA

Can you give some examples for a good SATA HBA?

While I'm quite convinced that software raid is more flexible than hardware raid (at least for RAID 1), I know that I can do hotplug stuff with my 3ware (or the PERC 5/i in our Dell servers). And the last time I checked with the kernel SATA support, hotplugging disks was not very well supported.

With my 3ware controller I can use tw_cli or the GUI to rescan for a new disk or to remove it, and I use this feature for backup. How would I do this with a „normal” SATA controller?

Shade and sweet water!

Stephan

-- 
| Stephan Seitz          E-Mail: [EMAIL PROTECTED] |
| PGP Public Keys: http://fsing.rootsland.net/~stse/pgp.html |

signature.asc Description: Digital signature
Re: RAID suggestions?
On 2008-03-19, Michael S. Peek penned:
> This would be fine, I don't really care if it's a hardware or software
> RAID, although it seems like a waste of money to buy a hardware RAID
> card just to use as a dense SATA controller. Is there such a thing as a
> SATA controller just for lots of drives? One that supports, say, 8 or
> more drives and is supported by the linux kernel out of the box? All I
> really want is to be able to have big-time data density in a single
> machine.

Commentary from my husband, who is a storage geek:

[quote]
Mine is relatively small... only a 4-port setup, 2 internal/2 external. For native linux support, the LSI MegaRAID controllers are open source and part of the kernel; there's no 3rd-party driver to install. Something like this has 8 internal SAS/SATA ports and is about $300. It appears to be out of stock; not sure if it's been replaced by something else or is just a big seller: http://www.newegg.com/Product/Product.aspx?Item=N82E16816118092

You'll want at least one PCIe lane for every two drives as a minimum for decent performance. A 4-port PCI-e x1 controller card is cheap, but can't get anywhere close to saturating 4 disks.
[/quote]

-- 
monique

Help us help you: http://www.catb.org/~esr/faqs/smart-questions.html
Re: RAID suggestions?
On Tue, 18 Mar 2008, Luke S Crawford wrote:
> What we are looking for here is a good enough raid solution... something
> that costs significantly less than completely duplicating the $800
> server or workstation in question, (meaning most good raid solutions you

Then DROP the idea of hw-raid. Get a damn good SATA/SCSI/SAS HBA, and use software raid. BTW, damn good means no VIA, SiS, nVidia, or other el-cheap-o half-broken SATA.

There is no middle ground in hardware raid. Either get the really good stuff, or don't use it for RAID. You can probably get a middle-level hw-raid card and use it as JBOD for Linux software-raid. This is useful especially for SATA.

When doing software RAID, *DO* use mdadm array checks daily, or at the very least a SMART long test daily. This is all the protection you have against bit rot causing a non-recoverable mess in your array when you lose a disk: you have to find bad sectors early, and refresh them.
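[Editor's note] A minimal way to schedule the checks recommended above is a crontab fragment like the following. Device names and times are illustrative (Debian's mdadm package ships its own checkarray cron job that does the md part properly):

```shell
# Illustrative /etc/crontab fragment -- md device and disk names are
# examples, not from the thread.

# Kick off an md consistency check every night at 03:00.  Progress is
# visible in /proc/mdstat; afterwards, mismatches show up in
# /sys/block/md0/md/mismatch_cnt.
0 3 * * * root echo check > /sys/block/md0/md/sync_action

# Alternatively (or additionally), run a SMART long self-test daily,
# so bad sectors are found and remapped before a rebuild needs them.
0 4 * * * root smartctl -t long /dev/sda
```

The point of either job is the same: touch every sector regularly so latent bad blocks are refreshed while redundancy still exists.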
Re: RAID suggestions?
On Tue, 18 Mar 2008, Gregory Seidman wrote:
> See, here's the thing. That I in RAID is for inexpensive. The idea is to
> increase reliability on the cheap. You could engineer an amazing HD with a

Err, the I is for inexpensive *DISKS*, not an inexpensive ARRAY CONTROLLER :-)

> be hideously expensive. Unless you are using RAID to improve I/O rather
> than for redundancy, putting expensive hardware into the equation defeats
> the purpose of a RAID in the first place.

If you need redundancy, you need some level of failure tolerance, AKA resilience. Inexpensive RAID controllers will not give you enhanced resilience at all; they cause data loss really easily. Some are so crappy they completely destroy the write ordering and ignore any cache-flush barriers issued by the OS, and thus are much worse for your filesystem integrity than using a single disk would be, in a scenario where you don't have a full disk failure.
Re: RAID suggestions?
On Tue, Mar 18, 2008 at 03:56:00PM -0400, Michael S. Peek wrote:
> But now I'm looking to build replacement servers and I thought I would
> ask what the community uses for its hardware RAID, and why?

I only use hardware raid where a battery-backed-up ram cache is available and the performance enhancement it confers is required (ie the mail storage for a mail server, a file server that sees a /lot/ of writes, etc). This is about the only time I would give up the flexibility of linux s/w raid. If you are doing only occasional writes and a lot of reads, then good controllers + sw raid + a buttload of ram will do you, IMO. You get the joyous flexibility of mdadm for managing your raid array, and the system ram acts as your fs cache for your many reads.

You should also be careful with hw raid. The cheap stuff may well be worse than going sw. On one server model that I use, I turn off the hw raid, because after a bit of testing it showed that sw raid was winning out in terms of performance.

As for reliability of sw raid, I've been using it on 30 servers for 3 years without a hitch. It's handled disk failures just fine (and in one case the crashing of a mb northbridge locking up the pc -- the array recovered without problems).

The other part of this is that you are not locked into a single vendor (or even model) for your array. If your raid card dies (it happens) it may well mean a complete rebuild unless you can find another like it. With software raid you can mix and match controllers, hd types and even network hds, and it'll just deal.

Hope this helps. :)

cat.
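[Editor's note] The mix-and-match flexibility described above looks roughly like this with mdadm. Device names are placeholders; the two members can sit on completely different controllers:

```shell
# Sketch: build a RAID-1 from two partitions hanging off *different*
# controllers -- md does not care what HBA a member lives behind.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1

# Moving the array to another box later is just an assemble step: md
# reads its own metadata off the members, so no matching RAID card
# (or any RAID card at all) is required on the new machine.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdc1
```

This is the vendor-independence point in concrete form: the array's identity lives in the md superblocks on the disks, not in any controller's NVRAM.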
Re: RAID suggestions?
On 03/20/08 20:54, Henrique de Moraes Holschuh wrote:
> On Tue, 18 Mar 2008, Gregory Seidman wrote:
>> See, here's the thing. That I in RAID is for inexpensive. The idea is to
>> increase reliability on the cheap. You could engineer an amazing HD with a
>
> Err, the I is for inexpensive *DISKS* not an inexpensive ARRAY CONTROLLER :-)

I like the way you think, Henrique!

-- 
Ron Johnson, Jr.
Jefferson LA USA
Supporting World Peace Through Nuclear Pacification
Re: RAID suggestions?
Ron Johnson wrote:
> And that detailed care makes all the difference in the world! Now limp
> along with a drive failure, add a controller that needs updating and
> perform the update. Suddenly you find the meta data is unstable and you
> can not recover from it. I have NOT seen data loss from a professional,
> on-the-ball data center.

I think what Damon wanted to say is that with MD you typically don't expect data loss *even* though you don't pay for expensive service and maintenance. Our Raid controller broke just weeks before it went out of warranty, and no, we didn't plan to spend the money on an expensive warranty extension.

Johannes
Re: RAID suggestions?
On Tue, Mar 18, 2008 at 10:09:26PM -0500, Ron Johnson wrote:
> On 03/18/08 10:18, Luke S Crawford wrote:
>> Ron Johnson [EMAIL PROTECTED] writes:
>>> Or... don't buy sucky h/w in the first place. If you *really* care
>>> about your data, you spend the extra bucks for quality h/w that has a
>>> competent support staff behind it. And you pay for an adequate backup
>>> solution!
>>
>> I think most people on this list are not looking to blow a Porsche (or
>> more) on a netapp or EMC storage appliance. sure, they're great if you've
>
> We just bought 2 Linux clusters with (I think) EVA 5000 SANs. 40 total
> TB of SCSI drives, I think.

strange, I thought eva 3000's and 5000's went eol a while ago

> Obviously, though, by "we", I don't mean "the wife & I". :)
>
>> got the scratch, and if your data is really valuable, they might even
>> make economic sense. But they don't make sense for your average debian
>> user, who could buy several thousand backup workstations or servers for
>> the price of one of the aforementioned 'good' raid boxes.
>>
>> What we are looking for here is a good enough raid solution... something
>
> For a given definition of "good enough". OP is at a Uni, and mentioned
> using 16-24 drives. Thus I get the impression that he needs capacity,
> speed & reliability. An $800 controller won't add that much on top of
> the cost of the drives, shelves & power supplies.
>
>> that costs significantly less than completely duplicating the $800
>> server or workstation in question, (meaning most good raid solutions you
>> speak of are right out.) and that gives a significantly better MTBF
>> (and/or performance) than just one disk. Personally, I run on a mix of
>> single disks and software mirrors... but if someone knows of a raid card
>> that (along with a redundant disk) doesn't double the cost of my server
>> and that significantly increases MTBF or performance over software
>> mirroring, I'm all ears.

-- 
"It's about past seven in the evening here so we're actually in
different time lines." - George W. Bush, 01/01/2001, congratulating
newly elected Philippine President Gloria Macapagal Arroyo,
Washington, D.C.
Re: RAID suggestions?
On Tue, Mar 18, 2008 at 05:41:20PM -0500, Ron Johnson wrote:
> On 03/18/08 17:21, Gregory Seidman wrote:
>> On Tue, Mar 18, 2008 at 04:33:19PM -0500, Ron Johnson wrote:
>>> On 03/18/08 16:03, Damon L. Chesser wrote:
> [snip]
>
> We (well, the company I work for) has much higher bandwidth needs than
> that. Which is why all new purchases now use SANs. RAID 10 and a lot of
> cache makes a database really scream. Then it's only the FC switch
> that's the potential bottleneck...

Q) have you investigated 10G over FC?
Re: RAID suggestions?
On Tue, Mar 18, 2008 at 04:37:30PM -0500, Ron Johnson wrote:
> On 03/18/08 15:44, Mike Bird wrote:
>> On Tue March 18 2008 12:56:00 Michael S. Peek wrote:
>>> But now I'm looking to build replacement servers and I thought I would
>>> ask what the community uses for its hardware RAID, and why?
>>
>> We use nothing for hardware RAID. Software RAID is much more flexible.
>> With hardware RAID you always need to have a spare controller on hand,
>> because without a matching replacement controller you can't retrieve
>> your data after a controller failure.
>
> That's what dual redundant controllers are for. Both transfer data for
> the same device, and if one fails, the other keeps on plugging away.
> Obviously, performance suffers, but at least the machine keeps on
> chugging until you can replace the dead controller. Does Linux have
> that capability?

I believe the kernel (+userland tools) can handle multipath (multipathd).

>> The downside of software RAID is that it is slower when rebuilding.
>> However rebuilding is so rare that this is not a significant issue for
>> us.
>
> However if you're doing RAID-5 you're seriously exposed to data loss
> from double drive failures, and a faster rebuild can help to reduce
> that window of vulnerability.
>
>> We've stopped using RAID-5. We use RAID-1 (3-way in some applications)
>> to make LVM physical volumes.
Re: RAID suggestions?
On 03/19/08 07:03, Alex Samad wrote:
> On Tue, Mar 18, 2008 at 05:41:20PM -0500, Ron Johnson wrote:
>> We (well, the company I work for) has much higher bandwidth needs than
>> that. Which is why all new purchases now use SANs. RAID 10 and a lot of
>> cache makes a database really scream. Then it's only the FC switch
>> that's the potential bottleneck...
>
> Q) have you investigated 10G over FC ?

You mean 10Gb FC switches? No. Extra 4Gb ports and HBAs give us the bandwidth we need.

-- 
Ron Johnson, Jr.
Jefferson LA USA
Re: RAID suggestions?
On 03/19/08 07:02, Alex Samad wrote:
> On Tue, Mar 18, 2008 at 10:09:26PM -0500, Ron Johnson wrote:
>> We just bought 2 Linux clusters with (I think) EVA 5000 SANs. 40 total
>> TB of SCSI drives, I think.
>
> strange I thought eva 3000's and 5000's went eol a while ago

Then they must not have. But I do know that recently we got one for our VMS cluster. Shame on me for assuming...

-- 
Ron Johnson, Jr.
Jefferson LA USA
Re: RAID suggestions?
Alex Samad wrote:
> On Tue, Mar 18, 2008 at 04:37:30PM -0500, Ron Johnson wrote:
>> That's what dual redundant controllers are for. Both transfer data for
>> the same device, and if one fails, the other keeps on plugging away.
>> Obviously, performance suffers, but at least the machine keeps on
>> chugging until you can replace the dead controller. Does Linux have
>> that capability?
>
> I believe the kernel (+userland tools) can handle multipath (multipathd)

[SNIP]

Yes, the kernel does (or is able to) handle multipath; however, AFAIK the major SAN/NAS manufacturers do not support it. I only know of one former customer who tried to use it, and it was failing. All the functionality you get from vendor HBAs is not yet working. If you use multipath, you need to use vendor HBAs and vendor applications (aka PowerPath from EMC, the only one I have experience with) AFAIK. If you know better, please inform me. I did extensive searching on behalf of that customer, and I only found that at best it is only partly working and buggy. This experience is about 6 months old.

In short, IF multipathd works for your SAN/NAS, you're home free; however, if you can't get it configured to see your LUNs, there's nothing you can do about it. So it comes down to: which do you have more of, time or money? If time, play with multipathd, and if you have kernel devs on the team, perhaps you can fix the issues. If you have more money, go with the vendor solution.

Disclaimer: we are leaving the area of Linux I know the most about and are on the outside of my knowledge base. All I know of this subject is from that one customer I could not effectively help other than to say "use EMC's application", even after extensive research by me. Even after going through all the howtos I could find, his SAN was not properly being displayed.

HTH

-- 
Damon L. Chesser
[EMAIL PROTECTED]
Re: RAID suggestions?
Johannes Wiedersich wrote:
> [snip]
> I think what Damon wanted to say is that with MD you typically don't
> expect data loss *even* though you don't pay for expensive service and
> maintenance. Our Raid controller broke just weeks before it went out of
> warranty and no, we didn't plan to spend the money on an expensive
> warranty extension.

Johannes,

Works for me! :) Had that call many, many times (broke just after leaving warranty). Hate it. Now you have data you can not get to, and the OEM is holding it hostage. The good news is the guys I worked for gave you a 30-day window in which you could ignore the out-of-warranty issue.

-- 
Damon L. Chesser
[EMAIL PROTECTED]
Re: RAID suggestions?
Damon L. Chesser wrote:
> Having done support for a tier1 OEM, I found many of our customers
> (running Linux) ignored the raid controllers and used them as disk
> controllers and then used software raid.

This would be fine, I don't really care if it's a hardware or software RAID, although it seems like a waste of money to buy a hardware RAID card just to use as a dense SATA controller. Is there such a thing as a SATA controller just for lots of drives? One that supports, say, 8 or more drives and is supported by the linux kernel out of the box? All I really want is to be able to have big-time data density in a single machine.

...That is, unless someone knows a good and cheap way to have big-time data density outside the machine. The other option I'm looking at is a NAS, but it seems to me that the cheaper solution is to build a storage server myself instead. My biggest hurdle here is that I have absolutely no experience with SANs or NASs, and I have a short period of time to get my proposal in, so I was planning on going with what I know will work: a big, fat case from rackmountpro.com with a hardware RAID card and 24 friggin' drives.

Michael
Re: RAID suggestions?
Michael S. Peek wrote:
> [snip]

Michael,

Alas! I just don't know about SATA controllers. Given your situation, it would appear that your plan is the best one. I would stick with what you know and what you know works. Time is short and your rep is on the line. Beyond that, I would have to let someone more experienced with NAS/custom storage than myself advise you. RAID I feel comfortable with; talking about big high-density SATA controllers vs NAS, I do not.

HTH

-- 
Damon L. Chesser
[EMAIL PROTECTED]
Re: RAID suggestions?
On 03/19/08 10:52, Michael S. Peek wrote:
> [snip]
> Is there such a thing as a SATA controller just for lots of drives? One
> that supports, say, 8 or more drives and is supported by the linux
> kernel out of the box?

I don't think there are any non-RAID high-density PCIe controllers.

-- 
Ron Johnson, Jr.
Jefferson LA USA
Re: RAID suggestions?
Damon L. Chesser wrote:
> Alas! I just don't know about SATA controllers. Given your situation, it
> would appear that your plan is the best one. I would stick with what you
> know and what you know works. Time is short and your rep is on the line.

So I'm not a complete loon? Excellent. At least that makes me feel better.

Like I said, in the past I've used 3ware, but on the last build I did I couldn't get the monitoring software to run. The command-line tool worked fine, but the monitor would segfault. So I wound up kludging it by having a cron job call a script that would run the command-line tool, feed it the commands necessary to check the status of the RAID, and then check the output for any string that looked like an error. It works great for a kludge, but that's not to say it's elegant, by a long shot. (For instance, it doesn't know the difference between "not OK" and "VERIFYING", so once a week I get 99 emails that say "An error was found: VERIFYING 1%, 2%, 3%, ...")

It looks as though the new player on the block is Areca, which seems to be highly recommended in the reviews I've read, and it has driver support in the linux kernel out of the box. But I can't find the program (or is it kernel module?) for the http interface -- arechttp I think it's called.

What else is out there for HW RAID, and how easy is it to use w/ a stock kernel?

Michael
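[Editor's note] The cron-driven kludge described above can be made less noisy by whitelisting known-benign states instead of alarming on anything that isn't literally "OK". A sketch follows; the unit-status strings only loosely resemble tw_cli output (the exact fields vary by controller and firmware), so treat the format as an assumption:

```shell
#!/bin/sh
# Sketch of a status-check cron script.  The here-document stands in
# for real controller output; in the real script you would use
# something like:  status_output=$(tw_cli /c0 show unitstatus)
status_output=$(cat <<'EOF'
u0  RAID-5  OK
u1  RAID-1  VERIFYING 42%
u2  RAID-5  DEGRADED
EOF
)

# Drop lines in benign or transitional states; whatever is left is a
# real problem worth an email.
problems=$(printf '%s\n' "$status_output" |
           grep -Ev 'OK|VERIFYING|INITIALIZING|REBUILDING')

if [ -n "$problems" ]; then
    printf 'RAID problem found:\n%s\n' "$problems"
fi
```

With the sample input, only the DEGRADED unit survives the filter, so the weekly verify pass no longer triggers mail.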
Re: RAID suggestions?
Michael S. Peek wrote:
> [SNIP]

Michael,

The only hardware raid controllers I have experience with are Dell PERC controllers, which are IIRC an Adaptec chipset. These PERCs do not interact with the kernel; rather, they interact with the built-in server monitor hardware. You might get messages in /var/log/messages about /dev/sdX if a HD fails, you might not. The actual hardware is masked from the OS by the server hardware, bios and PERC controllers. If an error happens, the Dell logo goes amber; then you run diags or Dell monitoring software (called OpenManage) to tell you what the fault is. So, I just don't know what monitoring software is out there to do your job. I believe HP and IBM also have similar hardware solutions built in to the servers, combined with custom software monitoring tools.

I have to bow to someone else's knowledge and learn with you about white-box hardware raid. And no, you are not a loon! :) I just did not understand the basis for your RAID question and thought I would pass on my former customers' experience and preferences.

However, if 3ware can be used as just a controller (or you just make single-HD volumes) you might still make mdadm work for you with the built-in mdadm monitoring tools (that is essentially what my customers did). IE: 24 scsi HDs set up as 24 RAID-0 volumes, seen by the OS as 24 sd's. mdadm is then used to make raid-X out of those, and mdadm can tell you if sda has failed or not. I don't know if this is feasible for you, but I offer it up as the only solution I do know about outside of Dell hardware.

HTH

-- 
Damon L. Chesser
[EMAIL PROTECTED]
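[Editor's note] The built-in mdadm monitoring mentioned above is mdadm's monitor mode. A minimal sketch; the mail address is a placeholder, and on Debian this is normally started from the mdadm init script with MAILADDR set in /etc/mdadm/mdadm.conf rather than run by hand:

```shell
# Illustrative: have mdadm watch all arrays it can find and send mail
# on events such as Fail, DegradedArray or RebuildFinished, polling
# every 300 seconds.
mdadm --monitor --scan --daemonise --mail=admin@example.org --delay=300
```

This replaces the hand-rolled "grep the controller output" cron job entirely for md arrays: the distinction between a failed member and a routine resync is made by md itself, not by string matching.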
Re: RAID suggestions?
On 03/19/08 12:59, Michael S. Peek wrote:
> [snip]
> (For instance, it doesn't know the difference between "not OK" and
> "VERIFYING", so once a week I get 99 emails that say "An error was
> found: VERIFYING 1%, 2%, 3%, ...")

grep error | grep -v VERIFYING

-- 
Ron Johnson, Jr.
Jefferson LA USA
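[Editor's note] The suggested filter, fed some invented status lines for illustration (the message format is made up; only the pipeline itself comes from the thread):

```shell
# Keep lines mentioning an error, then drop the benign VERIFYING ones.
printf '%s\n' \
  'An error was found: VERIFYING 1%' \
  'An error was found: DEGRADED' \
  'Status: OK' |
grep 'error' | grep -v 'VERIFYING'
```

Only the DEGRADED line survives: the VERIFYING line is dropped by the second grep, and the OK line never matches the first.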
Re: RAID suggestions?
Michael S. Peek [EMAIL PROTECTED] writes:
> ...That is, unless someone knows a good and cheap way to have big-time
> data density outside the machine. The other option I'm looking at is a
> NAS, but it seems to me that the cheaper solution is to build a storage
> server myself instead.

Price it out carefully; but remember, the more expensive netapp/emc will be a lot more reliable. However, if it works for your application, just building 2 yourself (and keeping one spare) is quite often a lot cheaper. Do a nightly rsync, and you are ready for most disasters with a half-day rollback worst-case.

Of course, if restoring from your last backup means millions of dollars of lost profits, you might want to go with the emc/netapp - but if restoring from your last backup is more like a couple thousand (or even a couple tens of thousands) of dollars, building one yourself with one in reserve and a good (tested!) backup setup may be the best solution.

The other thing to consider is just engineering your application so that it stores the data on local disks. In terms of hardware (rather than engineering time), the cheapest (and probably highest-performance) solution would be to just put one or two local disks internal to each computer, and have your application distribute the data in a redundant fashion... of course, depending on your application, this can be a lot of work - but if you have more Engineering power than dollars, you can get a good deal this way.
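[Editor's note] The nightly-rsync suggestion above, as a crontab sketch. Host name and paths are placeholders, and this assumes ssh key access from the primary to the standby:

```shell
# Illustrative /etc/crontab line: mirror the primary's data to the
# standby box every night at 02:00 over ssh.  --delete keeps the copy
# exact, so the worst case after a disaster is the half-day rollback
# mentioned above.
0 2 * * * root rsync -a --delete /srv/data/ standby.example.org:/srv/data/
```

Note that --delete also propagates mistakes, which is why the "good (tested!) backup setup" alongside the mirror still matters.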
Re: RAID suggestions?
On Wed, Mar 19, 2008 at 08:15:09AM -0500, Ron Johnson wrote:
On 03/19/08 07:03, Alex Samad wrote:
On Tue, Mar 18, 2008 at 05:41:20PM -0500, Ron Johnson wrote:
On 03/18/08 17:21, Gregory Seidman wrote:
On Tue, Mar 18, 2008 at 04:33:19PM -0500, Ron Johnson wrote:
On 03/18/08 16:03, Damon L. Chesser wrote:
Ron Johnson wrote:
On 03/18/08 15:41, Damon L. Chesser wrote: [snip] [snip]
We (well, the company I work for) have much higher bandwidth needs than that. Which is why all new purchases now use SANs. RAID 10 and a lot of cache makes a database really scream. Then it's only the FC switch that's the potential bottleneck...
Q) Have you investigated 10G over FC?
You mean 10Gb FC switches? No.
Yes, sorry; after reading it again I could see how you could interpret it either way.
Extra 4Gb ports and HBAs give us the bandwidth we need.

-- Ron Johnson, Jr. Jefferson LA USA Supporting World Peace Through Nuclear Pacification

-- There are three kinds of people: men, women, and unix.
Re: RAID suggestions?
On Wed, Mar 19, 2008 at 11:46:17AM -0400, Damon L. Chesser wrote:
Alex Samad wrote:
On Tue, Mar 18, 2008 at 04:37:30PM -0500, Ron Johnson wrote:
On 03/18/08 15:44, Mike Bird wrote:
On Tue March 18 2008 12:56:00 Michael S. Peek wrote: But now I'm looking to build replacement servers and I thought I would ask what the community uses for its hardware RAID, and why?
We use nothing for hardware RAID. Software RAID is much more flexible. With hardware RAID you always need to have a spare controller on hand, because without a matching replacement controller you can't retrieve your data after a controller failure.
That's what dual redundant controllers are for. Both transfer data for the same device, and if one fails, the other keeps on plugging away. Obviously, performance suffers, but at least the machine keeps on chugging until you can replace the dead controller. Does Linux have that capability?
I believe the kernel (+ userland tools) can handle multipath (multipathd). [SNIP]
Yes, the kernel does (or is able to) handle multipath; however, AFAIK, the major SAN/NAS manufacturers do not support it. I only know of one former customer who tried to use it, and it was failing.
(HP Storageworks support multipathd - with their HBA (QLogic) and their EVA (and I think XP) range - and they are moving towards using the standard drivers, not having to install their own.)
All the functionality you get from HBAs is not yet working. If you use multipath, you need to use vendor HBAs and vendor applications (aka PowerPath from EMC, the only one I have experience with), AFAIK. If you know better, please inform me.
(Storageworks have a whitepaper on doing multipath with Linux and their storage, using multipathd.)
I did extensive searching on behalf of that customer and I only found that at best it is only partly running and buggy. This experience is about 6 months old.
In short, IF multipathd works for your SAN/NAS you're home free; however, if you can't get it configured to see your LUNs, there's nothing you can do about it. So it comes down to which you have more of: time or money? If time, play with multipathd, and if you have kernel devs on the team, perhaps you can fix the issues. If you have more money, go with the vendor solution. Disclaimer: we are leaving the area of Linux I know the most about and are on the outside of my knowledge base. All I know of this subject is from that one customer I could not effectively help other than to say "use EMC's application", even after extensive research by me. Even after going through all the howtos I could find, his SAN was not properly being displayed. HTH

-- Damon L. Chesser [EMAIL PROTECTED]

-- His mind is like a steel trap: full of mice. -- Foghorn Leghorn
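For what it's worth, a minimal /etc/multipath.conf for the dm-multipath tooling discussed above might look like the sketch below. The device name and WWID are invented for illustration; whether multipathd sees your LUNs at all is exactly the vendor-support problem described in this thread.

```
# /etc/multipath.conf -- illustrative sketch, not a tested vendor config.
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
}
blacklist {
    devnode "^sda$"      # keep the local boot disk out of multipath
}
multipaths {
    multipath {
        wwid  3600508b4000156d700012000000b0000   # hypothetical LUN WWID
        alias san_data
    }
}
```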
RAID suggestions?
Hello gurus, I have built a couple of large storage servers using 16-24 HDDs connected to 3ware controllers, and so far it's worked pretty well. I chose 3ware because it was supported by the Linux kernel out of the box. Although I'm not terribly satisfied with the managing software, the RAIDs themselves have ticked over without a hitch for years. But now I'm looking to build replacement servers and I thought I would ask what the community uses for its hardware RAID, and why? Michael Peek
Re: RAID suggestions?
Michael S. Peek wrote: Hello gurus, I have built a couple of large storage servers using 16-24 HDDs connected to 3ware controllers, and so far it's worked pretty well. I chose 3ware because it was supported by the Linux kernel out of the box. Although I'm not terribly satisfied with the managing software, the RAIDs themselves have ticked over without a hitch for years. But now I'm looking to build replacement servers and I thought I would ask what the community uses for its hardware RAID, and why? Michael Peek

Michael, sorry I sent this to you instead of the list. Re-sending to the list:

Depends on what server you use. If you go with a tier-1 supplier, their server comes with hardware RAID. Having done support for a tier-1 OEM, I found many of our customers (running Linux) ignored the RAID controllers, used them as plain disk controllers, and then used software RAID. Mdadm will not be obsoleted anytime soon, but your hardware controller might well be gone in two years. I think that if I were to build a server, I would not use hardware RAID if I had a choice. The reasons are:

1. Portability: just take your HDs with you and plug them in, and it will not matter who makes the rest of the server.
2. No BIOS bugs or firmware updates to do.
3. It is easy to follow: no interface to learn and re-learn, no new terms you have to learn.
4. No clear-cut benefits from hardware RAID in MOST situations.
5. In three years when the hardware is old and buggy, your mdadm will still be working and you can just plug the disks into the new server (assuming no changes in HD tech).
6. I have seen dozens of catastrophic hardware controller failures with complete data loss, and not one mdadm failure.

I will be thrilled to listen to other viewpoints on the matter!

-- Damon L. Chesser [EMAIL PROTECTED]
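Point 1 of Damon's list is concrete: after moving the disks to a new box, a stock Debian install can usually reassemble the array from the on-disk metadata. A sketch (run as root; array and device names are examples, not from the post):

```shell
# On the replacement server, after plugging the old software-RAID disks in:
mdadm --examine --scan     # list arrays recorded in the disks' metadata
mdadm --assemble --scan    # assemble every array mdadm recognized
cat /proc/mdstat           # confirm the arrays came up and are clean
```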
Re: RAID suggestions?
On 03/18/08 14:56, Michael S. Peek wrote: Hello gurus, I have built a couple of large storage servers using 16-24 HDDs connected to 3ware controllers, and so far it's worked pretty well. I chose 3ware because it was supported by the Linux kernel out of the box. Although I'm not terribly satisfied with the managing software, the
Because it's a buggy CLI app, or because it's not GUI?
RAIDs themselves have ticked over without a hitch for years.
Isn't that the most important factor?
But now I'm looking to build replacement servers and I thought I would ask what the community uses for its hardware RAID, and why?
Writing only as someone who has been reading this list for many years, 3ware seems to be the most common h/w RAID vendor, just as NVIDIA is the video card vendor of choice for fast 3D OpenGL.

-- Ron Johnson, Jr. Jefferson LA USA "Working with women is a pain in the a**." -- My wife
Re: RAID suggestions?
On 03/18/08 15:41, Damon L. Chesser wrote: [snip] changes in HD tech). 6. I have seen dozens of catastrophic hardware controller failures with complete data loss and not one mdadm failure.
That just means you're using sucky hardware. We've been using h/w controllers for 15 years, and never had a problem. Of course, they are proprietary, and from a Tier 1 vendor, cost a lot of money, and maintenance fees are high. But we've never lost data from a controller failure. (And damned little loss from any other reason, either, since there's a 24x7 admin staff that pays attention to drive failure lights, and replaces them immediately.)

-- Ron Johnson, Jr. Jefferson LA USA "Working with women is a pain in the a**." -- My wife
Re: RAID suggestions?
On Tue March 18 2008 12:56:00 Michael S. Peek wrote: But now I'm looking to build replacement servers and I thought I would ask what the community uses for its hardware RAID, and why?
We use nothing for hardware RAID. Software RAID is much more flexible. With hardware RAID you always need to have a spare controller on hand, because without a matching replacement controller you can't retrieve your data after a controller failure. The downside of software RAID is that it is slower when rebuilding. However, rebuilding is so rare that this is not a significant issue for us. However, if you're doing RAID-5 you're seriously exposed to data loss from double drive failures, and a faster rebuild can help to reduce that window of vulnerability. We've stopped using RAID-5. We use RAID-1 (3-way in some applications) to make LVM physical volumes. --Mike Bird
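Mike's 3-way RAID-1-under-LVM layout can be sketched as follows (run as root; device, volume-group and logical-volume names are placeholders, not from the post):

```shell
# Three-way mirror: the array survives two simultaneous drive failures.
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1

# Use the mirror as an LVM physical volume, as described above.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0
```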
Re: RAID suggestions?
Ron Johnson wrote:
On 03/18/08 15:41, Damon L. Chesser wrote: [snip] changes in HD tech). 6. I have seen dozens of catastrophic hardware controller failures with complete data loss and not one mdadm failure.
That just means you're using sucky hardware. We've been using h/w controllers for 15 years, and never had a problem. Of course, they are proprietary, and from a Tier 1 vendor, cost a lot of money, and maintenance fees are high. But we've never lost data from a controller failure. (And damned little loss from any other reason, either, since there's a 24x7 admin staff that pays attention to drive failure lights, and replaces them immediately.)
-- Ron Johnson, Jr. Jefferson LA USA
And that detailed care makes all the difference in the world! Now limp along with a drive failure, add a controller that needs updating, and perform the update. Suddenly you find the metadata is unstable and you cannot recover from it. I have NOT seen data loss from a professional, on-the-ball data center.

-- Damon L. Chesser [EMAIL PROTECTED]
Re: RAID suggestions?
On 03/18/08 16:03, Damon L. Chesser wrote:
Ron Johnson wrote:
On 03/18/08 15:41, Damon L. Chesser wrote: [snip] changes in HD tech). 6. I have seen dozens of catastrophic hardware controller failures with complete data loss and not one mdadm failure.
That just means you're using sucky hardware. We've been using h/w controllers for 15 years, and never had a problem. Of course, they are proprietary, and from a Tier 1 vendor, cost a lot of money, and maintenance fees are high. But we've never lost data from a controller failure. (And damned little loss from any other reason, either, since there's a 24x7 admin staff that pays attention to drive failure lights, and replaces them immediately.)
And that detailed care makes all the difference in the world! Now limp along with a drive failure, add a controller that needs updating, and perform the update. Suddenly you find the metadata is unstable and you cannot recover from it. I have NOT seen data loss from a professional, on-the-ball data center.
Well heck, no one who cares about his data would do that... You replace the drive, let it rebuild, and *then* do the update. Or... don't buy sucky h/w in the first place. If you *really* care about your data, you spend the extra bucks for quality h/w that has a competent support staff behind it. And you pay for an adequate backup solution! Otherwise, you are blaming on the h/w the sins of the humans who bought the crummy h/w.

-- Ron Johnson, Jr. Jefferson LA USA "Working with women is a pain in the a**." -- My wife
Re: RAID suggestions?
On 03/18/08 15:44, Mike Bird wrote:
On Tue March 18 2008 12:56:00 Michael S. Peek wrote: But now I'm looking to build replacement servers and I thought I would ask what the community uses for its hardware RAID, and why?
We use nothing for hardware RAID. Software RAID is much more flexible. With hardware RAID you always need to have a spare controller on hand, because without a matching replacement controller you can't retrieve your data after a controller failure.
That's what dual redundant controllers are for. Both transfer data for the same device, and if one fails, the other keeps on plugging away. Obviously, performance suffers, but at least the machine keeps on chugging until you can replace the dead controller. Does Linux have that capability?
The downside of software RAID is that it is slower when rebuilding. However, rebuilding is so rare that this is not a significant issue for us. However, if you're doing RAID-5 you're seriously exposed to data loss from double drive failures, and a faster rebuild can help to reduce that window of vulnerability. We've stopped using RAID-5. We use RAID-1 (3-way in some applications) to make LVM physical volumes.

-- Ron Johnson, Jr. Jefferson LA USA "Working with women is a pain in the a**." -- My wife
Re: RAID suggestions?
On Tue, Mar 18, 2008 at 04:33:19PM -0500, Ron Johnson wrote:
On 03/18/08 16:03, Damon L. Chesser wrote:
Ron Johnson wrote:
On 03/18/08 15:41, Damon L. Chesser wrote: [snip] changes in HD tech). 6. I have seen dozens of catastrophic hardware controller failures with complete data loss and not one mdadm failure.
That just means you're using sucky hardware. We've been using h/w controllers for 15 years, and never had a problem. Of course, they are proprietary, and from a Tier 1 vendor, cost a lot of money, and maintenance fees are high. But we've never lost data from a controller failure. (And damned little loss from any other reason, either, since there's a 24x7 admin staff that pays attention to drive failure lights, and replaces them immediately.)
And that detailed care makes all the difference in the world! Now limp along with a drive failure, add a controller that needs updating, and perform the update. Suddenly you find the metadata is unstable and you cannot recover from it. I have NOT seen data loss from a professional, on-the-ball data center.
Well heck, no one who cares about his data would do that... You replace the drive, let it rebuild, and *then* do the update. Or... don't buy sucky h/w in the first place. If you *really* care about your data, you spend the extra bucks for quality h/w that has a competent support staff behind it. And you pay for an adequate backup solution! Otherwise, you are blaming on the h/w the sins of the humans who bought the crummy h/w.
See, here's the thing. That "I" in RAID is for "inexpensive". The idea is to increase reliability on the cheap. You could engineer an amazing HD with an MTBF rating of 150 years (hyperbole, but you get the point), but it would be hideously expensive. Unless you are using RAID to improve I/O rather than for redundancy, putting expensive hardware into the equation defeats the purpose of a RAID in the first place.
Since I don't have major I/O performance requirements, just redundancy requirements, I use software RAID. I probably always will. I know that even if 3ware (for example -- replace with the name of your favorite HW RAID manufacturer) goes out of business, my computer catches fire, and one of my mirrored drives dies, I can buy an off-the-shelf system, install Debian, and rebuild my RAID.

--Greg
Re: RAID suggestions?
On 03/18/08 17:21, Gregory Seidman wrote:
On Tue, Mar 18, 2008 at 04:33:19PM -0500, Ron Johnson wrote:
On 03/18/08 16:03, Damon L. Chesser wrote:
Ron Johnson wrote:
On 03/18/08 15:41, Damon L. Chesser wrote: [snip] changes in HD tech). 6. I have seen dozens of catastrophic hardware controller failures with complete data loss and not one mdadm failure.
That just means you're using sucky hardware. We've been using h/w controllers for 15 years, and never had a problem. Of course, they are proprietary, and from a Tier 1 vendor, cost a lot of money, and maintenance fees are high. But we've never lost data from a controller failure. (And damned little loss from any other reason, either, since there's a 24x7 admin staff that pays attention to drive failure lights, and replaces them immediately.)
And that detailed care makes all the difference in the world! Now limp along with a drive failure, add a controller that needs updating, and perform the update. Suddenly you find the metadata is unstable and you cannot recover from it. I have NOT seen data loss from a professional, on-the-ball data center.
Well heck, no one who cares about his data would do that... You replace the drive, let it rebuild, and *then* do the update. Or... don't buy sucky h/w in the first place. If you *really* care about your data, you spend the extra bucks for quality h/w that has a competent support staff behind it. And you pay for an adequate backup solution! Otherwise, you are blaming on the h/w the sins of the humans who bought the crummy h/w.
See, here's the thing. That "I" in RAID is for "inexpensive". The idea is to increase reliability on the cheap.
No, it (was) to increase single image capacity. Small-capacity hard drives were expensive, but high-capacity drives were *REALLY* expensive. Much more expensive than simply the ratio of the capacities would indicate. I.e., a 300MB drive was much more than 10x the price of a 30MB drive. (Am I seriously dating myself?)
You could engineer an amazing HD with an MTBF rating of 150 years (hyperbole, but you get the point), but it would be hideously expensive. Unless you are using RAID to improve I/O rather than for redundancy, putting expensive hardware into the equation defeats the purpose of a RAID in the first place.
We used (and still use) RAID for its redundancy and higher bandwidth. We used it for its ability to create very large devices, back when 36GB & 18GB drives were the norm. (And many of those devices are still chugging along. DEC made damned fine hardware!)
Since I don't have major I/O performance requirements, just redundancy requirements, I use software RAID. I probably always will. I know that even if 3ware (for example -- replace with the name of your favorite HW RAID manufacturer) goes out of business, my computer catches fire, and one of my mirrored drives dies, I can buy an off-the-shelf system, install Debian, and rebuild my RAID.
We (well, the company I work for) have much higher bandwidth needs than that. Which is why all new purchases now use SANs. RAID 10 and a lot of cache makes a database really scream. Then it's only the FC switch that's the potential bottleneck...

-- Ron Johnson, Jr. Jefferson LA USA
Re: RAID suggestions?
Ron Johnson [EMAIL PROTECTED] writes: Or... don't buy sucky h/w in the first place. If you *really* care about your data, you spend the extra bucks for quality h/w that has a competent support staff behind it. And you pay for an adequate backup solution!
I think most people on this list are not looking to blow a Porsche (or more) on a NetApp or EMC storage appliance. Sure, they're great if you've got the scratch, and if your data is really valuable, they might even make economic sense. But they don't make sense for your average Debian user, who could buy several thousand backup workstations or servers for the price of one of the aforementioned "good" RAID boxes. What we are looking for here is a "good enough" RAID solution... something that costs significantly less than completely duplicating the $800 server or workstation in question (meaning most "good" RAID solutions you speak of are right out), and that gives a significantly better MTBF (and/or performance) than just one disk. Personally, I run on a mix of single disks and software mirrors... but if someone knows of a RAID card that (along with a redundant disk) doesn't double the cost of my server and that significantly increases MTBF or performance over software mirroring, I'm all ears.
Re: RAID suggestions?
On 03/18/08 10:18, Luke S Crawford wrote:
Ron Johnson [EMAIL PROTECTED] writes: Or... don't buy sucky h/w in the first place. If you *really* care about your data, you spend the extra bucks for quality h/w that has a competent support staff behind it. And you pay for an adequate backup solution!
I think most people on this list are not looking to blow a Porsche (or more) on a NetApp or EMC storage appliance. Sure, they're great if you've...
We just bought 2 Linux clusters with (I think) EVA 5000 SANs. 40 total TB of SCSI drives, I think. Obviously, though, by "we", I don't mean the wife & I. :)
...got the scratch, and if your data is really valuable, they might even make economic sense. But they don't make sense for your average Debian user, who could buy several thousand backup workstations or servers for the price of one of the aforementioned "good" RAID boxes.
What we are looking for here is a "good enough" RAID solution...
For a given definition of "good enough". OP is at a Uni, and mentioned using 16-24 drives. Thus I get the impression that he needs capacity, speed & reliability. An $800 controller won't add that much on top of the cost of the drives, shelves & power supplies.
...something that costs significantly less than completely duplicating the $800 server or workstation in question (meaning most "good" RAID solutions you speak of are right out), and that gives a significantly better MTBF (and/or performance) than just one disk. Personally, I run on a mix of single disks and software mirrors... but if someone knows of a RAID card that (along with a redundant disk) doesn't double the cost of my server and that significantly increases MTBF or performance over software mirroring, I'm all ears.

-- Ron Johnson, Jr. Jefferson LA USA Supporting World Peace Through Nuclear Pacification
Re: Inexpensive hardware SATA RAID suggestions?
On (30/07/05 12:13), jennyw wrote: Anyone care to recommend SATA RAID controllers? I'd like to set up a relatively inexpensive box with hardware RAID 1. The system will use Sarge. I was wondering if anyone had hardware suggestions. I found a SATA RAID FAQ at: http://linux.yyz.us/sata/sata-status.html Reading through the list, it looks like the best-supported hardware might be AHCI, which currently seems to be used only by Intel and ULi chipsets. However, it also seems that some RAID hardware uses other drivers, such as 3Ware (of course, I'm not sure that 3Ware counts as inexpensive; plus I'm having trouble with a 3Ware card right now and am not yet sure whether it's a driver problem or a problem with the card). If anyone cares to share their experience with AHCI chipsets, that'd be great! If you have good experiences with some other hardware RAID I'd of course love to hear about it, too. Please keep in mind I want to keep the system cost as low as possible, so I'm not too interested in high-end stuff right now (although it might be interesting for future reference). Another question: Is it safe to assume that SATA RAID controllers support hot-swapping of drives?
Can't help directly with your question other than to ask: why hardware RAID? There have been a number of posts recently on problems setting up RAID on so-called RAID controllers; if you search the archives, you'll find some useful info on the shortcomings of "cheap" RAID controllers; generally they don't provide full RAID functionality. FWIW, I've got software RAID1 on 6 servers (2 with SATA drives) and they've all worked fine since setup using mdadm. I followed guidance from the first 3 links below. Generally, IIRC, I disabled any so-called hardware RAID functionality in the BIOS or jumper switches.
http://juerd.nl/site.plp/debianraid
http://rootraiddoc.alioth.debian.org/
http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO.html
http://xtronics.com/reference/SATA-RAID-Debian.htm

You don't need fancy new H/W for this ;)

Regards Clive -- www.clivemenzies.co.uk ... ...strategies for business
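The guides above all boil down to roughly this sequence for a two-disk software RAID1 (a sketch run as root; partition names are examples, and any fake-RAID BIOS mode should be disabled first, as Clive notes):

```shell
# Both partitions should be marked type 'fd' (Linux raid autodetect) first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put a filesystem on the mirror (ext3 was the Sarge-era default).
mkfs.ext3 /dev/md0

# Record the array so it is assembled automatically at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```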
Re: Inexpensive hardware SATA RAID suggestions?
On 2005-07-30 12:13:13, jennyw wrote: Anyone care to recommend SATA RAID controllers?
3Ware 3w8000-2LP. In Germany, around 120 Euro.
However, it also seems that some RAID hardware uses other drivers, such as 3Ware (of course, I'm not sure that 3Ware counts as inexpensive,
3Ware drivers up to the 3w85xx are included in the kernel. The 3w9xxx is currently missing.
plus I'm having trouble with a 3Ware card right now and am not yet sure whether it's a driver problem or a problem with the card).
3Ware has worked since Debian Potato 2.2 with ALL kernels. Maybe it's your card...
Another question: Is it safe to assume that SATA RAID controllers support hot-swapping of drives?
No, but all 3Ware controllers support it. But you need special hot-swap SATA racks.
Thanks! Jen
Greetings Michelle -- Linux-User #280138 with the Linux Counter, http://counter.li.org/ Michelle Konzack Apt. 917 ICQ #328449886 50, rue de Soultz MSM LinuxMichi 0033/3/88452356 67100 Strasbourg/France IRC #Debian (irc.icq.com)
Inexpensive hardware SATA RAID suggestions?
Anyone care to recommend SATA RAID controllers? I'd like to set up a relatively inexpensive box with hardware RAID 1. The system will use Sarge. I was wondering if anyone had hardware suggestions. I found a SATA RAID FAQ at: http://linux.yyz.us/sata/sata-status.html Reading through the list, it looks like the best-supported hardware might be AHCI, which currently seems to be used only by Intel and ULi chipsets. However, it also seems that some RAID hardware uses other drivers, such as 3Ware (of course, I'm not sure that 3Ware counts as inexpensive; plus I'm having trouble with a 3Ware card right now and am not yet sure whether it's a driver problem or a problem with the card). If anyone cares to share their experience with AHCI chipsets, that'd be great! If you have good experiences with some other hardware RAID I'd of course love to hear about it, too. Please keep in mind I want to keep the system cost as low as possible, so I'm not too interested in high-end stuff right now (although it might be interesting for future reference). Another question: Is it safe to assume that SATA RAID controllers support hot-swapping of drives? Thanks! Jen