Re: Hardware RAID chips
James Manning wrote: [ Thursday, January 13, 2000 ]
> Gregory Leblanc wrote:
> > Since this list appears to be a good place for general RAID on Linux (or
> > Linux on RAID?) questions, I thought I'd ask. What do people think of the
> > StrongARM vs. the i960? Our i960-based cards scream, but we don't have
> > any StrongARM yet (although I could probably get some if the performance
> > is better).
>
> (warning, blatant personal opinion ranting :)

That's the best kind!

> Although I haven't dug into results, the only i960 that seems worth running
> for RAID5 is the one (-RN I think?) with the embedded XOR engine. Otherwise
> your XOR operations will do much better on the StrongARM running at
> typically 2x the speed.

Sorry, I should have said RAID5. RAID1 is almost entirely dependent on the
quality of the drivers for the SCSI interface, and the stripe computations
for RAID0 are pretty simple, all things considered.

> That said, Mylex cards work great, but I haven't had the time to get a good
> chance to test any i960-based boards extensively.

What do you mean by "test"? I've got a DPT card which seems to perform very
well, but it just "feels fast".
	Greg
Re: Hardware RAID chips
On 14 Jan 2000, Chris Good wrote:
> Much, much better - we've taken all the i960 cards out of our systems and
> replaced them with StrongARM-based ones. Needless to say we shan't be
> buying any more i960 cards, and our supplier is talking about stopping
> shipping them as they perform so much worse.

Are you comparing the new high-end to the old high-end (older i960s), or are
you saying that even in the new stuff, the ExtremeRAIDs just blow the
AcceleRAIDs away? I'm about to put together a RAID5 on an AcceleRAID 250.
I've got a few systems using 200s for mirroring and have been happy with
them...but mirroring isn't terribly CPU intensive, especially compared to
RAID5.

--
 Jon Lewis *[EMAIL PROTECTED]*  |  Spammers will be winnuked or
 System Administrator           |  nestea'd...whatever it takes
 Atlantic Net                   |  to get the job done.
 http://www.lewis.org/~jlewis/pgp for PGP public key
RAID, Oracle, and blocksize
I'm getting ready to roll out a server running Oracle on Linux. The system
has an HP NetRAID controller in it; while it is currently a 3-drive RAID-5
configuration, I'm considering switching to a 4-drive RAID-1 configuration.
My real question is: how do stripe size, Oracle block size, and ext2
filesystem block size interact, and what should they be to get the most
efficient database? (The database, while not large, is used for both
operations and "warehousing", so it is difficult to identify what the
read/write ratio is.) My inclination is to set the stripe size to 4K, leave
the Oracle block size at 2K (the default), and build the filesystem with
block size = 2K.

Thanks.
--
David Corbin
Mach Turtle Technologies, Inc.
http://www.machturtle.com
[EMAIL PROTECTED]
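For what it's worth, a rough sketch of the Linux side of that (the device
name below is only an example; on an HP NetRAID the stripe size itself is
set in the controller's configuration utility, not from Linux):

    # build the filesystem with a 2K block size to match the 2K Oracle blocks
    mke2fs -b 2048 /dev/sda1
    # confirm the block size that was actually used
    dumpe2fs -h /dev/sda1 | grep 'Block size'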
RAID fsck errors...
I don't follow exactly what this error message means, and it's scaring me
enough that I'm not putting anything on this stripe set. Any ideas?

[gleblanc@peecee gleblanc]# fsck /dev/md0
Parallelizing fsck version 1.15 (18-Jul-1999)
e2fsck 1.15, 18-Jul-1999 for EXT2 FS 0.5b, 95/08/09
/dev/md0 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Error reading block 502219 (Attempt to read block from filesystem resulted in short read).  Ignore error<y>? yes
Directory inode 244316, block 0, offset 0: directory corrupted
Salvage<y>? yes
Missing '.' in directory inode 244316.
Fix<y>? yes
Missing '..' in directory inode 244316.
Fix<y>? yes
Pass 3: Checking directory connectivity
'..' in /src/linux-2.2.14/net/802/transit (244316) is The NULL inode (0), should be /src/linux-2.2.14/net/802 (147806).
Fix<y>? yes
Pass 4: Checking reference counts
Inode 2 ref count is 20, should be 21.  Fix<y>? yes
Inode 147806 ref count is 5, should be 4.  Fix<y>? yes
Pass 5: Checking group summary information
/dev/md0: ***** FILE SYSTEM WAS MODIFIED *****
/dev/md0: 52576/257536 files (0.5% non-contiguous), 272039/515008 blocks
[gleblanc@peecee gleblanc]#

The one I underlined (the "Error reading block" message) is the one that I
don't get. The others aren't that weird.
	Greg
Re: RAID fsck errors...
On Sat, Jan 15, 2000 at 10:40:52PM -0800, Gregory Leblanc wrote:
> I don't follow exactly what this error message means, and it's scaring me
> enough that I'm not putting anything on this stripe set. Any ideas?
>
> Error reading block 502219 (Attempt to read block from filesystem resulted
> in short read).  Ignore error<y>? yes

Sounds like you never scanned your disks for bad blocks before you set up
your md device, or one has developed afterwards that didn't fix itself. I've
only seen this error before when the underlying block device had unremapped
bad blocks.

--
Elie Rosenblum                  That is not dead which can eternal lie,
http://www.cosanostra.net       And with strange aeons even death may die.
Admin / Mercenary / System Programmer           - _The Necronomicon_
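For example (device names here are just illustrations, not anything specific
to Greg's setup), a read-only scan of each member partition will show whether
the underlying disks have bad sectors:

    badblocks -sv /dev/sdb5
    badblocks -sv /dev/sdc5

or e2fsck can be told to run the badblocks test itself on the array:

    e2fsck -c /dev/md0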
raid with 2.2.13
Yohoo!

I tried to set up a RAID device on my Linux system:
SuSE 6.3 with kernel 2.2.13 (kernel directly from ftp.fi.kernel.org),
raidtools 0.90.0.

In the kernel setup I've included the RAID-4/5 support and compiled the
kernel (RAID support not as a module). Now I've written a raidtab file with
the following lines:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              128
        device                  /dev/sdb5
        raid-disk               0
        device                  /dev/sdc5
        raid-disk               1
        device                  /dev/sdd5
        raid-disk               2

A cat /proc/mdstat shows the following:

Personalities : [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive

When I now try to start my device with "mkraid /dev/md0", the following
occurs:

handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb5, 2096419kB, raid superblock at 2096320kB
disk 1: /dev/sdc5, 2096419kB, raid superblock at 2096320kB
disk 2: /dev/sdd5, 2096419kB, raid superblock at 2096320kB
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

Neither /proc/mdstat nor the syslog (/var/log/warn|messages) shows any more
information. How can I get RAID running?

BTW: I've tried to apply the kernel patch for the 2.2.11 kernel, but the
patch won't apply. Is the kernel patch needed at all, and where can I get it
for the 2.2.13 kernel?
loose cables on external raid device
Hello, I have an external RAID SCSI device with a loose cable. It would come
loose and things would freeze up on the machine, then I would push it back in
and the system would come back. Now, however, I get somewhat random operating
system crashes. At least I think they're OS crashes, but they aren't the
usual kernel crashes with messages sent to the console; the machine just
freezes up and won't accept any input. I checked all the cables and none are
loose. I am wondering if having cables taken off and put back on a running
system like this will corrupt random files or cause some general weirdness.
Re: raid with 2.2.13
On Sun, 16 Jan 2000, Standardaccount wrote:
> How can I get RAID running?
>
> BTW: I've tried to apply the kernel patch for the 2.2.11 kernel, but the
> patch won't apply. Is the kernel patch needed at all, and where can I get
> it for the 2.2.13 kernel?

You need to apply the 2.2.11 patch to the 2.2.13 kernel tree. There are a few
(I think two) errors reported, but they can be safely ignored. This is
necessary because the plain 2.2.13 kernel uses the 'old style' RAID code,
while raidtools-0.90 makes use of the 'new style' RAID code.

D.

PS: You can use a 2.2.14 kernel with a 2.2.14 patch now
(http://people.redhat.com/mingo/raid...).
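Roughly like this (the patch file name below is only a guess at whichever
raid0145 patch you downloaded, and depending on how that patch was rolled you
may need -p0 instead of -p1):

    cd /usr/src/linux
    # preview first; the couple of errors mentioned above are expected
    patch -p1 --dry-run < /path/to/raid0145-2.2.11.patch
    patch -p1 < /path/to/raid0145-2.2.11.patch

Then re-run the kernel config and rebuild so the new-style RAID code actually
gets compiled in.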
Off the mailing list.
How do I get off of this mailing list? It's intriguing, but I don't deal with
RAID.
--Derek
Re: [FAQ-answer] Re: soft RAID5 + journalled FS + power failure = problems?
Hi,

Chris Wedgwood writes:
> > This may affect data which was not being written at the time of the
> > crash. Only RAID5 is affected.
>
> Long term -- if you journal to something outside the RAID5 array (i.e. to
> RAID-1 protected log disks) then you should be safe against this type of
> failure?

Indeed. The jfs journaling layer in ext3 is a completely generic block device
journaling layer which could be used for such a purpose (and raid/LVM
journaling is one of the reasons it was designed this way).

--Stephen
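For anyone who wants to experiment with that layout, a raidtab entry for a
small mirrored log device might look something like the sketch below (the
device names are made up, and ext3's support for putting its journal on a
separate device was still work in progress at this point):

raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sde1
        raid-disk               0
        device                  /dev/sdf1
        raid-disk               1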