hello,
how do you get the addresses of your hard disks, and what does this line mean?
append="ide6=0x168,0x36e,10 ide0=autotune ide1=autotune ide6=autotune"
thanks
ph. trolliet
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Charles Wilkins
Sent: Monday, May 28, 2001 16:31
To: Philippe Trolliet
Cc: Linux Raid mailing list
Subject: Re: lilo setup with raid1
Keep in mind that when a drive fails, RAID has the ability to recover and
continue to run.
It is when the USER initiates a reboot that the system stands a chance of not
coming back up, depending on whether or not the BIOS on your motherboard
supports booting from a drive other than the first hard disk.
If it does not (mine does not), the worst-case scenario is that you have to
temporarily swap the IDE cables to get the system back up. That would only
happen if I hadn't realized a drive had failed to begin with and it was the
primary master.
I am using a Promise 100 controller, by the way; the devices are hde and hdg.
Here is my fstab:
/dev/md0 / ext2 defaults 1 1
/dev/md1 /boot ext2 defaults 1 1
/dev/hda /mnt/cdrom1 auto user,noauto,nosuid,exec,nodev,ro 0 0
/dev/hdc /mnt/cdrom2 auto user,noauto,nosuid,exec,nodev,ro 0 0
/dev/hdb /mnt/cdrom3 auto user,noauto,nosuid,exec,nodev,ro 0 0
/dev/hdd /mnt/cdrom4 auto user,noauto,nosuid,exec,nodev,ro 0 0
/dev/scd0 /mnt/cdrom5 auto user,noauto,nosuid,exec,nodev,ro 0 0
/dev/scd1 /mnt/cdr auto user,noauto,nosuid,exec,nodev,ro 0 0
/dev/sda1 /mnt/zip auto user,noauto,nosuid,exec,nodev 0 0
/dev/fd0 /mnt/floppy auto sync,user,noauto,nosuid,nodev 0 0
/dev/hde2 swap swap pri=1 0 0
/dev/hdg2 swap swap pri=1 0 0
none /dev/pts devpts mode=0620 0 0
none /proc proc defaults 0 0
Notice /dev/hde2 and /dev/hdg2.
They are the swap partitions and are not included in the RAID 1 arrays.
This is because, when the priorities are set the same, the kernel interleaves
swap pages across both partitions in a fashion similar to RAID 0 striping.
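The equal-priority setup can be verified from /proc/swaps on a running
system. A minimal sketch, using invented sample output in place of the real
file (the sizes are made up; the device names match the fstab above):

```shell
# Illustrative only: sample /proc/swaps text standing in for the real file.
sample='Filename   Type      Size    Used  Priority
/dev/hde2  partition 265032  0     1
/dev/hdg2  partition 265032  0     1'

# Count distinct priority values (last column, skipping the header).
# 1 distinct value means the kernel round-robins pages across both
# partitions, spreading swap I/O over both drives.
distinct=$(printf '%s\n' "$sample" | awk 'NR>1 && !seen[$NF]++ {n++} END {print n}')
echo "distinct swap priorities: $distinct"
```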
here is my lilo.conf: kid tested and mother approved . . .
boot=/dev/md1
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
default=vmlinux
vga=normal
keytable=/boot/us.klt
message=/boot/message
menu-scheme=wb:bw:wb:bw

image=/boot/vmlinux
    label=vmlinux
    root=/dev/md0
    append="ide6=0x168,0x36e,10 ide0=autotune ide1=autotune ide6=autotune"
    read-only

image=/boot/vmlinuz
    label=vmlinuz
    root=/dev/md0
    append="ide6=0x168,0x36e,10 ide0=autotune ide1=autotune ide6=autotune"
    read-only

image=/boot/bzImage
    label=z
    root=/dev/md0
    append="ide6=0x168,0x36e,10 ide0=autotune ide1=autotune ide6=autotune"
    read-only
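About the append line that was asked about: it passes IDE boot parameters to
the kernel. "ide6=0x168,0x36e,10" declares the seventh IDE interface (ide6)
at data-port base 0x168, control port 0x36e, IRQ 10, which is how the kernel
finds the second channel of an add-on controller like the Promise card; the
"ide*=autotune" entries ask the driver to tune each interface to its fastest
supported PIO mode. Splitting such a parameter can be sketched in shell (the
parsing itself is purely illustrative):

```shell
# One of the append parameters from the lilo.conf above.
param="ide6=0x168,0x36e,10"

iface=${param%%=*}                 # text before '=': the interface name
IFS=',' read -r base ctl irq <<EOF
${param#*=}
EOF

echo "interface:      $iface"      # ide6
echo "data port base: $base"       # 0x168
echo "control port:   $ctl"        # 0x36e
echo "irq:            $irq"        # 10
```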
I have had a drive fail once already since I started using RAID 1.
The md devices continued to work properly and the system did boot.
To remove and replace the drive (and to tell RAID not to try to initialize
the failed device), I changed raid-disk to failed-disk for the corresponding
physical hard disk in /etc/raidtab.
# root array
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    chunk-size            32
    nr-spare-disks        0
    persistent-superblock 1
    device /dev/hdg3
    raid-disk 0
    device /dev/hde3
    raid-disk 1

# /boot array
raiddev /dev/md1
    raid-level            1
    nr-raid-disks         2
    chunk-size            32
    nr-spare-disks        0
    persistent-superblock 1
    device /dev/hdg1
    raid-disk 0
    device /dev/hde1
    raid-disk 1
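If, say, /dev/hde were the failed drive, the root-array entry would be edited
like this (a sketch only; substitute whichever disk actually failed, and make
the matching change in the /boot array):

```
# root array, with /dev/hde3 marked failed so the raid code
# will not try to initialize it at startup
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    chunk-size            32
    nr-spare-disks        0
    persistent-superblock 1
    device /dev/hdg3
    raid-disk 0
    device /dev/hde3
    failed-disk 1
```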
Then shut down, replace the disk, and boot up.
Re-edit the raidtab file and change the failed-disk back to raid-disk.
Then run raidhotadd with the appropriate arguments. Please read the HOWTO
here for the correct use of raidhotadd:
http://www.linuxdoc.org/HOWTO/Boot+Root+Raid+LILO-4.html#ss4.3
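A degraded raid1 array shows up in /proc/mdstat as [2/1] with [U_] or [_U]
instead of [UU]. A minimal detection sketch, using sample text in place of
the real file (the raidhotadd command is shown as a comment because it needs
root and real hardware; the device names are assumed from the raidtab above):

```shell
# Illustrative only: sample /proc/mdstat line for a degraded raid1
# where the second member (hde3) has dropped out.
mdstat='md0 : active raid1 hdg3[0] 29808512 blocks [2/1] [U_]'

if printf '%s\n' "$mdstat" | grep -qE '\[U_\]|\[_U\]'; then
    echo "array degraded"
    # After replacing the disk and restoring raid-disk in /etc/raidtab,
    # re-add the new partition to the array:
    #   raidhotadd /dev/md0 /dev/hde3
fi
```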
I hope this helps, and good luck.
I would like to add that kernel 2.4.4 seems to do IDE RAID 1 very well. I
beat the pants off the drives with lots of imaging and find/grep commands,
and the performance and reliability seem to be there.
Charles Wilkins
----- Original Message -----
From: "Philippe Trolliet" <[EMAIL PROTECTED]>
To: "Linux Raid Mailing List" <[EMAIL PROTECTED]>
Sent: Monday, May 28, 2001 2:02 AM
Subject: lilo setup with raid1
> hello,
> i want lilo to boot from the md devices even if one hd fails. can anybody
> help me?
> here is my configuration:
>
> df -h shows:
> Filesystem Size Used Avail Use% Mounted on
> /dev/md0 28G 3.5G 23G 14% /
> /dev/md1 99M 5.3M 88M 6% /boot
> ---------------------------------------------------------------------------
> my raidtab:
>
> #MD0
> raiddev /dev/md0
> raid-level 1
> nr-raid-disks 2
> chunk-size 32
> nr-spare-disks 0
> persistent-superblock 1
> device /dev/hdc3
> raid-disk 0
> device /dev/hda3
> raid-disk 1
>
> #MD1
> raiddev /dev/md1
> raid-level 1
> nr-raid-disks 2
> chunk-size 32
> nr-spare-disks 0
> persistent-superblock 1
> device /dev/hdc1
> raid-disk 0
> device /dev/hda1
> raid-disk 1
> ---------------------------------------------------------------------------
> my fstab:
>
> /dev/hda2 swap swap defaults 0 2
> /dev/hdc2 swap swap defaults 0 2
> /dev/md0 / ext2 defaults 1 1
> /dev/md1 /boot ext2 defaults 1 1
>
> /dev/hdb /cdrom auto ro,noauto,user,exec 0 0
>
> /dev/fd0 /floppy auto noauto,user 0 0
>
> proc /proc proc defaults 0 0
> # End of YaST-generated fstab lines
> ---------------------------------------------------------------------------
> /proc/mdstat:
>
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md1 : active raid1 hda1[1] hdc1[0] 104320 blocks [2/2] [UU]
> md0 : active raid1 hda3[1] hdc3[0] 29808512 blocks [2/2] [UU]
> unused devices: <none>
>
> thanks a lot
> best regards
> ph. trolliet
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to [EMAIL PROTECTED]