[oops, sent to vgers.rutgers.edu - resending..]
Hello,
I was probably unclear in my previous post: I had not been able to create
any RAID devices (/dev/md0 was never actually created).
I modified the raidtab as suggested, removing the "raid-disk 1" line and
leaving just the "failed-disk 1" line.
I also reformatted and reinstalled the system. The partition layout currently looks like:
/dev/sda1 /boot 32M
/dev/sda2 / 96M
/dev/sda3 /var 512M
/dev/sda4 extended
/dev/sda5 /usr 1024M
/dev/sda6 /home 512M
/dev/sda7 /web (the rest of the disk)
This is the new raidtab:
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              8
    persistent-superblock   1
    device                  /dev/sdb2
    raid-disk               0
    device                  /dev/sda2
    failed-disk             1

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              8
    persistent-superblock   1
    device                  /dev/sdb3
    raid-disk               0
    device                  /dev/sda3
    failed-disk             1

raiddev /dev/md2
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              8
    persistent-superblock   1
    device                  /dev/sdb5
    raid-disk               0
    device                  /dev/sda5
    failed-disk             1

raiddev /dev/md3
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              8
    persistent-superblock   1
    device                  /dev/sdb7
    raid-disk               0
    device                  /dev/sda7
    failed-disk             1

raiddev /dev/md4
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              8
    persistent-superblock   1
    device                  /dev/sdb8
    raid-disk               0
    device                  /dev/sda8
    failed-disk             1
Now I'm at the point in the HOWTO where it says:
(method 2)
Now, set up the RAID with your current root-device as the failed-disk in
the raidtab file. Don't put the failed-disk as the first disk in the
raidtab, that will give you problems with starting the RAID. Create the
RAID, and put a filesystem on it.
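Taken literally, I assume the "create the RAID, and put a filesystem on it"
step just means something like this (raidtools' mkraid plus mke2fs; /dev/md3
is only an example):

   mkraid /dev/md3      # assembles the degraded mirror (sdb7 active, sda7 marked failed)
   mke2fs /dev/md3      # puts a brand-new ext2 filesystem on the md device

(I gather mkraid may refuse to run and need to be forced if it thinks a
partition still contains data.)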
My question is: how do I create a RAID device (/dev/md3, for example) and
then put a filesystem on it without losing the current data?
I tested this with /dev/md3, which would be /home. I unmounted it, ran
mkfs on /dev/md3, then remounted it, and the data was gone (not a big
deal, there was nothing important on it).
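In other words, what I ran was roughly (from memory):

   umount /home
   mkfs /dev/md3
   mount /dev/md3 /home     # the new filesystem is empty, so nothing is there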
This is my /proc/mdstat:
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 sdb2[0] 104320 blocks [2/1] [U_]
md1 : active raid1 sdb3[0] 530048 blocks [2/1] [U_]
md2 : active raid1 sdb5[0] 1052160 blocks [2/1] [U_]
md3 : active raid1 sdb7[0] 530048 blocks [2/1] [U_]
md4 : active raid1 sdb8[0] 6409792 blocks [2/1] [U_]
unused devices: <none>
So the md devices are all created and running in degraded mode ([U_]).
I want to mount the devices, but the system complains that it can't find a
valid filesystem on them. Yet I can't create a valid filesystem without
destroying the current data. I must be missing something...
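My guess is that the missing step is to mkfs the degraded md device, mount it
somewhere temporary, and copy the data over from the still-intact original
partition before pointing fstab at the md device. For /home that would be
something like the following (device names as in my setup above), but I'd
like a sanity check before doing it to /usr or /var:

   mke2fs /dev/md3
   mkdir -p /mnt/md3
   mount /dev/md3 /mnt/md3
   cp -a /home/. /mnt/md3/     # copy the data onto the new (degraded) mirror
   umount /mnt/md3
   # then change the /home line in /etc/fstab to use /dev/md3 and remount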
Thanks again to all.
Jon
--
Jonathan Nathan
Web Systems Engineer, CAIS Internet
[EMAIL PROTECTED]
On Fri, 17 Sep 1999, Bruno Prior wrote:
> > the biggest problem, i think, is that i can't get the system to boot off a
> > hard disk at all. during the (redhat 6.0) install i said to put lilo on
> > the mbr (/dev/sda). this didn't really work - when i bring the machine
> > up, it says something to the effect of "can't find operating system -
> > insert system disk," at which time i put the boot floppy that i created
> > during the install into the drive and boot fine.
>
> I don't understand. The disks work fine if you boot from floppy, but you can't
> install lilo to the MBR? Did the RedHat install give you an error when you chose
> to put lilo on the MBR? Check your BIOS settings. Some of them allow you to
> enable/prevent writing to the MBR. For example, the PhoenixBIOS on my dual PPro
> machine has an option "Fixed disk boot sector" under Security, which can be
> "Normal" or "Write Protect". You will want to enable writing (e.g. "Normal" in
> my case). If it's not that, this is a weird problem. Maybe it's related to the
> fact (as I think you are assuming) that you are using a "hardware"-RAID
> controller as a SCSI controller. But I thought you were supposed to be able to
> do that with the Adaptec raid cards, as they are so similar to the SCSI
> controllers.
>
> > fstab:
> > /dev/md1 / ext2 defaults 1 1
> > /dev/sda5 /usr ext2 defaults 1 2
> > /dev/sda2 /var ext2 defaults 1 2
> > /dev/sda7 /web ext2 defaults 1 2
> > /dev/sda6 /work ext2 defaults 1 2
> > /dev/sda3 swap swap defaults 0 0
> > /dev/sdb3 swap swap defaults 0 0
> > /dev/fd0 /mnt/floppy ext2 noauto 0 0
> > none /proc proc defaults 0 0
> > none /dev/pts devpts mode=0622 0 0
> > /dev/sdb1 /mnt/newroot ext2 defaults 0 0
> >
> > lilo.conf:
> > boot=/dev/sda
> > map=/boot/System.map
> > install=/boot/boot.b
> > prompt
> > timeout=50
> > image=/boot/vmlinuz
> > label=linux
> > root=/dev/md1
> > initrd=/boot/initrd-2.2.5-15smp.img
> > read-only
> > image=/boot/vmlinuz-2.2.5-15
> > label=linux-up
> > root=/dev/sda1
> > initrd=/boot/initrd-2.2.5-15.img
> > read-only
>
> Lilo can't see raid devices. So if /boot is on /dev/md1, as it is in this case,
> lilo can't find any of the files in /boot, such as the kernel image, the system
> map etc. The traditional solution is to have /boot on its own little non-raided
> partition.
>
> You don't have /boot on a non-raided partition, and you haven't taken any of the
> measures mentioned on the list a couple of weeks ago to allow you to have /boot
> on raid. As it states in the second paragraph of the root-RAID section of
> Jakob's HOWTO: "Your /boot filesystem will have to reside on a non-RAID
> device". That's why this won't work.
>
> Why don't you take a little bite out of one (or maybe both, for redundancy's
> sake) of your swap partitions to create a /boot partition? You only need a few
> MB at most. Copy all the files on /boot onto this partition, delete all the
> files on /boot, and then mount the partition on /boot. You should now be able to
> run lilo successfully and boot from the hard disk onto a root-RAID system.
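>
> Something like this, assuming the new little partition comes up as /dev/sda9
> (purely hypothetical, use whatever partition fdisk actually gives you):
>
>    mke2fs /dev/sda9                 # filesystem for the new /boot
>    mkdir /mnt/newboot
>    mount /dev/sda9 /mnt/newboot
>    cp -a /boot/. /mnt/newboot/      # copy vmlinuz, System.map, boot.b etc.
>    umount /mnt/newboot
>    rm -rf /boot/*                   # clear the old copies off the root fs
>    mount /dev/sda9 /boot            # and add a /boot line to /etc/fstab
>    lilo                             # reinstall lilo now that it can read /boot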
>
> Except for one other problem:
>
> > raidtab:
> > raiddev /dev/md0
> > raid-level 1
> > nr-raid-disks 2
> > nr-spare-disks 0
> > chunk-size 8
> > persistent-superblock 1
> > device /dev/sdb6
> > raid-disk 0
> > device /dev/sda6
> > raid-disk 1
> > failed-disk 1
> etc.
>
> This shouldn't work. You want to remove the "raid-disk 1" line, so there is only
> a failed-disk line below the failed disk device. This applies to every md device
> in your raidtab. Which is puzzling, because you seem to imply that you have
> successfully created /dev/md1. Are you sure /dev/md1 has been created and is
> actually running? What does /proc/mdstat say? What messages did you get when you
> ran "mkraid /dev/md1"? What does "df" tell you about what partitions are mounted
> where? I've got a sneaking suspicion that / is still on /dev/sda1 and not on
> /dev/md1 as your fstab indicates. You need to sort this out before you try
> booting onto root-RAID.
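>
> To spell out the raidtab fix, I'd expect each md stanza to end like this
> (a sketch based on your /dev/md0 entry):
>
>    device          /dev/sdb6
>    raid-disk       0
>    device          /dev/sda6
>    failed-disk     1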
>
> Cheers,
>
>
> Bruno Prior [EMAIL PROTECTED]