we have rh 4.2, 5.0 and 5.1 running various versions of raid code.
the 1005 was the easiest by far. sorry if this seems rudimentary, but
there are lurkers here who can use all of this.
Ultra-quick start Raid (too quick) Howto:
1. rm -rf /usr/src/linux cause we want clean sources. redhat rpms
i previously posted this. i recommend you look through the raid
archives at
http://www.linuxhq.com/lnxlists/
and you will find most of what you need. please note: the raid tools and
patches i list below have since been updated. also,
if you are using 5.2 you will find that redhat
ok- i have heard this enough times. doing what little i can, i have
created an ultra-short howto, specific to RH systems, though most of it is
applicable to others. i am placing it at:
http://www.pfeiffer.edu/~anoah/raid/
i will take some time this weekend to update it.
for now, it has gotten three
i am currently investigating a hardware-based solution to our problem.
it seems as if it would be possible to load the kernel into ram and
execute it from a rom rather than from disk. this is essentially how we
bootstrap our diskless machines, but the rom in that case is small and
read directly
so what you are saying is that in the case of drive failure, we will lose
that data, unless we have a hardware raid.
how does the kernel map those pages to disk if the file is on an md
device? since you say it does not access the fs, does it just
automatically stripe the data across the constituent
no. no. no.
there is no swapping to raid1 or higher, and there is no point in swapping
to a file on a raid 0 or linear partition as they only hurt performance if
anything. this is true for a swap partition, or a swap file on any other
filesystem.
no swap to software raid. if you are really
yes. remove those lines from 5.2 initscripts.
al
"so don't tell us it can't be done, putting down what you don't know.
money isn't our god, integrity will free our souls" - Max Cavalera
ok guys. need to update the damn docs.
in the meantime- i wrote this up a while back
http://www.pfeiffer.edu/~anoah/raid/
hope it helps.
and no, rh 5.2 is busted. you have to take the raid stuff OUT of your rc
scripts. you also have to rpm -e raidtools.
al
Paranoid: So who's in charge
ok guys.
i am currently building a pair of machines from the ground up.
they will be root raid and all the partitions except swap will be some
form of raid.
i intend to document the process i go through to get rh 5.1 or 5.2
installed on these boxes.
would anyone object to me attempting to turn
on these machines which i plan to document, i will be using a sacrificial
partition to do an "everything-in-root" install.
for the purposes of this howto- i don't want to assume the user has a big
network of machines they can run nfs on (though i do, and that is usually
how i install.)
i will
i have been trying to install software raid on two systems for the past
several days. i am planning to document the experience and submit a newer
version of the current howto (since there were no complaints at this
suggestion)
however, i have had extreme problems with the newer versions.
the
ok- the main point with booting is thus- with the new raid code, you can
start raid from the kernel without any utils on the disk. meaning, all you
have to do is get the kernel into ram, and jump to its starting address.
this is what lilo is for, along with loadlin, etc.
they are needed cause
I was trying to say that if lilo could do RAID( 1 ) then you could have a
5(say) way mirror for the 10meg /boot partition... then lilo would be on
all the disks... then you could use all 5 disks in a raid5 for /
On Fri, 5 Feb 1999, m. allan noah wrote:
what you mention is no
i consistently get active inode messages trying to mkraid under 2.0.36
i have solved this two ways.
1. i found that one of the december 2.0.35
patches worked fine under mkraid. i used 2.0.35 to install the raids, then
upgraded to .36 after (upgraded the raidtools as well)
2. once or twice, if
once by accident i got a lilo config to install a kernel on one half of a
raid 1 device. it was cause i had lilo configured, made the array, copied
using dd, and the kernel ended up in the same place on the disk, so lilo
still worked. there seems to be no way to get this to work intentionally.
do not use raid0 for part of your fs. i ONLY use raid0 for news spools.
raid0 DOUBLES (at least) the chances of total fs loss. buy a bigger disk.
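the doubling claim is just independent-failure arithmetic: a raid0 dies when *any* member dies. a quick sketch (p is an illustrative per-disk failure probability over some period, not a measured rate):

```shell
# if one disk fails with probability p, an n-disk raid0 loses the whole fs
# with probability 1 - (1-p)^n, roughly n*p for small p.
awk 'BEGIN {
    p = 0.03
    for (n = 1; n <= 4; n++)
        printf "disks=%d  p(fs loss)=%.4f\n", n, 1 - (1 - p)^n
}'
```

with p = 0.03, two disks gives 0.0591, just about double the single-disk figure.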
allan
On Sat, 17 Apr 1999, Dave Cinege wrote:
"m. allan noah" wrote:
do not use raid0 for part of your fs. i ONLY use raid0 for news spools.
raid0 DOUBLES (at least) the chances of total fs loss. buy a bigger disk.
Hmm. I've run a 5 drive hardware RAID0 on my mu
not sure if it's a good idea to do this with any disk interface that's
not designed for it. i have a DEC paper somewhere about SCSI, and
it spends a lot of time on hot swap issues. Hot swap can be extremely
bad for an interface physically and electrically. eg, SCA and other
proprietary
when you did the raidrun immediately before this, did you specify a value
for -p ?
the old raid stuff that ships with the default kernel has let me do that
incorrectly before. if you have a tape backup, i recommend that you patch
the kernel with the latest alpha patches, and use the docs
On Fri, 30 Apr 1999, John Walker wrote:
I have faithfully followed all the directions in the HOWTO at
ostenfeld.dk/~jakob/Software-RAID.HOWTO/... However,
On Sun, 2 May 1999, Anders Lindh wrote:
mkraid: aborted
look in /var/log/messages, also run dmesg
The contents of /proc/mdstat remains the same after
after i do mkraid i can cat /proc/mdstat and see the progress of the syncing of
the disks ... once synced i can stop and start the md device w/ no problems,
the trouble is on reboot; at that time the kernel doesn't see /dev/hdc? as
operational and continues in degraded mode. the actual syslog
you made raid support as a module. go back and re-make the kernel, but build
the stuff you actually need directly into it, not as a module.
allan
On Fri, 7 May 1999, Aaron
you should put the item you want to talk about in the body of the
document, not just in the subject. i almost missed it.
yes- we are running RH 5.9.10 with kernel 2.2.6, the latest raid patches,
and knfsd. works like a charm. quotas and all.
we only have had one problem, last night as it turns
no, i will explain, just for the benefit of those here:
needed to increase size of array, so i init 1. was going to destroy array
and remake in single user mode. system would not unmount the md device,
cause i had stopped knfsd in an odd manner (or so i thought).
i am not sure if cpu0 was
yes, you can reduce the size of blocks by one in ext2fs, as long as the
blocks are 4k or greater, then mkraid will copy disk one onto disk 2, etc.
and the raid SB will take up its space at the end of the disk.
this only works for raid1 though, cause under raid5 your fs will be larger
than a
alpha to the linux community != alpha to closed source community.
redhat made a smart move, given all the probs we have telling people to
upgrade their 5.2 installation.
recent announcements from mingo have informed us not to use the alpha raid
patches on the newer kernels just yet. 2.2.6 or
probably, you are not REALLY running a patched kernel.
if you
cat /proc/mdstat
and see stuff about /dev/md0 being 'inactive'
then you are running an unpatched kernel. you need to check lilo and make sure
the new kernel is there. also check your patch for rejects and your config to have
raid
PLEASE correct me if i am wrong...
allan
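the 'inactive' check above is easy to script; a minimal sketch (assumes the word appears in /proc/mdstat exactly as the old in-tree driver prints it):

```shell
# an 'inactive' md device in /proc/mdstat means the running kernel is
# still the old in-tree raid code, not mingo's patched version
if grep -q inactive /proc/mdstat 2>/dev/null; then
    echo "old raid code running - patch the kernel and re-check lilo"
fi
```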
David Cooley [EMAIL PROTECTED] said:
I got the same errors until I increased my chunk size to 32K
At 01:32 PM 3/23/2000, you wrote:
On Thu, 23 Mar 2000, m. allan noah wrote:
mkraid: aborted, see the syslog and /proc/mdstat for potential
looks like you ran mke2fs on your partitions, then you did mkraid on them.
guess what? raid code puts a little chunk of info about each disk and the raid
array it is part of onto the end of the partition, and then reports the size
of the device as being a little smaller than the number of blocks
jeff- i am using 2.2.14 with mingo patch, and it is great. i have a dozen or
so boxes, 512meg, SMP pIII 450, ncr scsi, etc in this config. all are fine.
it would be interesting to see if raid is the issue, or your adaptec (i am
inclined to think the latter).
1. swap scsi cards. i like
this is a networking bug in 2.2.11
upgrade kernel to 2.2.14, get new raid patch and raid tools from
www.redhat.com/~mingo/
allan
Bernd Burgstaller [EMAIL PROTECTED] said:
Dear all!
I am writing this mail due to hangups related to my raid devices. I am
seeking suggestions enabling me
uhh- maybe cause you are comparing two completely different systems? try
using the same size and number of disks, same motherboard, same cpu, same
everything but the disks and controller, before you try to compare scsi to
ide...
allan
octave klaba [EMAIL PROTECTED] said:
Hi,
I made the
how do you expect to boot raid0? think about that...
better do raid 1 or regular disk for the partition where your kernel lives, so
your boot loader can find your kernel, rather than have to find pieces of your
kernel spread across multiple disks...
i have found raid in 2.3.xx to so far be a
i have found that on raid 1, chunk size does not matter for performance. what
matters more seems to be the block size option to mke2fs. i make my mysql
stores with chunksize 16, and run mke2fs with -R stride=4 -b 4096
allan
Erich [EMAIL PROTECTED] said:
I looked through the documentation,
hmm. first change this:
device /dev/hdc6
raid-disk 1
failed-disk 1
to this:
device /dev/hdc6
failed-disk 1
then change the partition types of both chunks back to 83 (not fd). then
reboot. check
Robert, i bet you are using some platform other than intel? a 64bit machine
perhaps? if this is the case, the raid code does not work properly without a
patch. byte ordering issues, it seems.
check the archive at marc.theaimsgroup.com for this patch. it was for PPC.
allan
Robert Hélie [EMAIL
This is where I am encountering problems. mkraid reports:
DESTROYING the contents of /dev/md1 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sda6, 530113kB, raid superblock at 530048kB
disk 1: /dev/sdb6, 530113kB, raid superblock at
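the superblock offsets mkraid prints follow from the 0.90 on-disk layout, assuming the 64kB reserved area at the end of each member:

```shell
# the 0.90-format raid superblock sits in the last 64kB-aligned 64kB chunk
# of the member partition: offset = (size rounded down to 64kB) - 64kB
size_kb=530113                        # /dev/sda6, as reported above
sb_kb=$(( size_kb / 64 * 64 - 64 ))
echo "$sb_kb"                         # 530048, matching mkraid's output
```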
this is a raid 0 array correct? if so, redhat may have a different chunk size
than the original created array, which could explain why the data appears
screwed up.
the thing to do is go back and mkraid on the array with different chunk sizes
until you find the original one. then the first ext2
Harry Zink [EMAIL PROTECTED] said:
on 5/17/00 7:21 AM, James Manning at [EMAIL PROTECTED] wrote:
ls -l /dev/hd[gk]* ... you may need a later MAKEDEV (or edit yours)
to create all the necessary files
Not sure how this will help, except confirm again that these volumes aren't
there are linux-ha lists where the folks have more clue about this than some
of us, but linux software raid will allow a portion of what you are looking
for.
however, there have been revelations on this list as of late that indicate
making a software raid set composed of other software raid sets
IIRC, neither the fasttrak nor the fasttrak66 is REALLY hardware raid. did
you actually think you could get hardware raid from something that costs less
than 60 bucks US?
they do all the 'hardware' in software, onboard is only a rom, no cpu to do
any real work.
linux software raid over one of
i will point this out AGAIN. the promise fastTrak is NOT hardware raid. did
you really think 30 seconds with a soldering iron and a 7 cent resistor could
make a hardware raid device from a 20 dollar ide card?
there is no cpu on board the card, and all the 'raid' functionality is
provided in
I'd like to see it compared
against linux software raid.
Also, isn't RAID5 incredibly slow on anything but a well cached hardware
controller??? If yes, then why not use the FastTrak again?
Thanks,
--b
-----Original Message-----
From: m. allan noah [mailto:[EMAIL PROTECTED]]
Sent: Tue
did you patch the kernel 2.2.16 with the raid patch?
take a look at the file /proc/mdstat
if that file has the word 'inactive' in it, then you need to patch your
kernel.
look at www.redhat.com/~mingo/
for patches.
allan
Jordan Wilson [EMAIL PROTECTED] said:
I have a few problems regarding
why do you want to make a two drive raid5? that makes no sense. use raid1.
yes- if there is data already on the drive, running mkraid is a pretty sure
way to destroy the filesystem, since part of the file system will be
overwritten.
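the capacity arithmetic behind "use raid1": an n-disk raid5 gives (n-1) disks of usable space, so at n=2 you get exactly what raid1 gives, plus parity overhead for nothing (the member size below is an example figure):

```shell
# usable raid5 capacity is (n-1) * member size; at n=2 it equals raid1
disk_kb=530048    # example member size in kB
for n in 2 3 4; do
    echo "raid5 with $n disks: $(( (n - 1) * disk_kb )) kB usable"
done
```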
allan
Bob Sterner [EMAIL PROTECTED] said:
The howto says
James Manning [EMAIL PROTECTED] said:
[m.allan noah]
The howto says try mkraid --force. With a 2 drive (2/4) will I lose
everything.
why do you want to make a two drive raid5? that makes no sense. use raid1.
If you *read* his message you'll notice that he has 4 drives in the
...
--
ai
http://sefiroth.org
On Wed, 28 Jun 2000, m. allan noah wrote:
anton, run this command as root:
dd if=/dev/sdb of=/dev/null count=100 bs=1k
this will dump some data out of your scsi disk into /dev/null. check the
light :)
if you miss it, increase
guess what: I am trying to figure out if sdb is bad or not.
I will run the scsi utility on it after hours... but does anyone know
how 'time inconsistencies in the superblock' generally arise?
--A
On Wed, 28 Jun 2000, m. allan noah wrote:
well, if you
software raid will NOT save you from power failure. it will save you from
disk/controller/cable failure only! do NOT lull yourself into a false sense of
security.
if you have people who can't handle unix and powering down, then you need a
UPS and lock your box in a closet.
linux software raid
i have not used adaptec 160 cards, but i have found most everything else they
make to be very finicky about cabling and termination, and have had hard
drives give trouble on adaptec that worked fine on other cards.
my money stays with a lsi/symbios/ncr based card. tekram is a good vendor, and
horst- you cannot make a raid array from a mounted disk. mkraid will
potentially destroy the file system that is on your disk.
if you wish to include your current / in a raid set, then you need to look at
the 'failed-disk' directive in your raidtab.
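a sketch of what that raidtab could look like (device names are hypothetical; the idea is that the disk holding the live / is marked failed-disk so mkraid leaves it alone, the array comes up degraded on the new disk, you copy / over, boot from the array, then raidhotadd the old disk):

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    chunk-size              4
    persistent-superblock   1
    device                  /dev/hdc1
    raid-disk               0
    device                  /dev/hda1
    failed-disk             1
```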
more info can be found in jacob's wonderful
you must patch your kernel with the latest raidcode for this to work. you are
using a kernel with the old raidcode in it, but trying to use the new tools on
it.
get latest tools and raid patches from www.redhat.com/~mingo/
apply patch, build new kernel with the raid levels you need built in
[EMAIL PROTECTED] said:
G'day Folks,
My apologies if this is RTFM or previously discussed on list (I have
just joined).
We have implemented mirroring using raidtools 0.9 on RedHat 6.2.
It is working well, but we would like the ability to do the following to
backup a Lotus Domino
linux software raid never assumes you want to add a disk back to an array just
cause it is working. that would be a bad thing (tm).
instead, you must tell the kernel you want to add a disk back to the array
with the raidhotadd /dev/mdX /dev/sdXX command.
allan
Art [EMAIL PROTECTED] said:
Hi,
you could write a simple shell or perl script to do this using
/proc/mdstat as a reference, but it is a bad idea to put in a drive and have
the kernel _assume_ you want to put things back the way they were. i prefer
the control, rather than have the kernel assume.
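a minimal sketch of such a script: the degraded test just looks for the '_' that a failed member leaves in the [UU] status field of /proc/mdstat; the sample line and device names are made up, and raidhotadd is only suggested, never run automatically:

```shell
#!/bin/sh
# exit 0 if any array in mdstat-format input (on stdin) is degraded;
# a failed member shows up as '_' in the [UU...] field of the 0.90 driver
array_degraded() {
    grep 'blocks' | grep -q '_'
}

# demo on a sample line; a real check would read /proc/mdstat instead
sample='md0 : active raid1 sdb1[1] sda1[0] 530048 blocks [2/1] [U_]'
if printf '%s\n' "$sample" | array_degraded; then
    echo "md0 degraded - re-add by hand with: raidhotadd /dev/md0 /dev/sdb1"
fi
```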
allan
Emmanuel Galanos