Gustin Johnson wrote:

Martin Glazer wrote:
<snip>
Using CentOS 5.2. In the fstab the drives are referenced via /dev/md2, /dev/md0, etc., not via UUID; in mdadm.conf they are
referenced via UUID.

The UUID should be the same, but it is worth checking the UUIDs in the
mdadm.conf against what the (live) OS is actually seeing.  These are
symlinks in /dev/disk/by-uuid btw.
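For example, to compare the different views (device names here are illustrative; substitute your own arrays):

```shell
# Array UUID as mdadm records it (this is what mdadm.conf ARRAY lines use)
mdadm --detail /dev/md0 | grep -i uuid

# Filesystem UUIDs as the running kernel sees them (symlinks to the block devices)
ls -l /dev/disk/by-uuid/

# blkid reports the same filesystem UUIDs
blkid /dev/md0
```

One wrinkle worth noting: the UUID in an mdadm.conf ARRAY line is the md array's own UUID, which is a different value from the filesystem UUID that blkid and /dev/disk/by-uuid report, so comparing across those two namespaces will always show a mismatch.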

Now I'm confused about the UUID: when I recreate the arrays on the live
CD, I get different UUIDs than what is in the mdadm.conf file when
compared to /dev/disk/by-uuid.
(I also found that blkid gives the UUID information.)


If I do an mdadm --examine --scan,
I get the same UUIDs that are in the mdadm.conf file.

So which is correct?
If the ones under the live CD are correct, should I change the mdadm.conf file to match?

I am not sure.  I always layer LVM on top of the software RAID stack, so
I just use the UUID of the LVM.
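If you do end up trusting what the scan reports, one common recipe (a sketch; keep a backup of the original first) is to regenerate the ARRAY lines from the scan:

```shell
# Keep a copy of the old config
cp /etc/mdadm.conf /etc/mdadm.conf.bak

# Drop the old ARRAY lines, then append freshly scanned ones
grep -v '^ARRAY' /etc/mdadm.conf.bak > /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf
```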

How far along the boot process does it get before panicking?  What is
the last thing you see before the panic?  You may need to edit the grub
kernel line and remove the quiet option (this can be done within grub,
no need to boot and mount by using the "e" key while in grub).
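For reference, the sequence in grub-legacy (key bindings as I remember them on CentOS 5) goes roughly like this:

```shell
# At the grub menu:
#   1. highlight the CentOS entry and press "e"
#   2. highlight the kernel line and press "e" again
#   3. delete "rhgb quiet" from the end of the line, press Enter
#   4. press "b" to boot with the edited line
#
# The kernel line then looks something like:
#   kernel /vmlinuz-2.6.18-92.1.22.el5 ro root=/dev/md2
```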

These are the last few lines before crashing:

Red Hat nash version 5.1.19.6 starting
Unable to access resume device (/dev/md1)
EXT3-fs: unable to read superblock
mount: error mounting /dev/root on /sysroot as ext3:
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!


I am not sure what the option "rhgb" is for; that is a new one for me.

I think it has something to do with the Red Hat graphical boot splash - removing it (along with quiet) lets you see the text boot messages.


<snip>
If you can mount the arrays and read data then there is nothing wrong
with the arrays themselves.  Chances are this is a grub problem.  What
is the default entry in /boot/grub/menu.lst?
Not sure that it is a grub problem though.

This is the default entry:
title CentOS (2.6.18-92.1.22.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-92.1.22.el5 ro root=/dev/md2 rhgb quiet
        initrd /initrd-2.6.18-92.1.22.el5.img


I also recreated initrd in a chroot in case I needed different modules
on boot as per Cody's message earlier.
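For anyone following along, the sort of command that does this on CentOS 5 (a sketch - the module name is an assumption; substitute whatever your arrays actually need):

```shell
# Inside the chroot of the installed system:
mkinitrd --with=raid1 -f /boot/initrd-2.6.18-92.1.22.el5.img 2.6.18-92.1.22.el5
```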

<snip>
I would verify the partitions with fsck.  Of course they should not be
mounted when you do this.
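For example, from the live CD with the arrays assembled but not mounted, something like:

```shell
# Read-only check first; drop -n to allow repairs
fsck.ext3 -n /dev/md2
fsck.ext3 -n /dev/md0
```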

They all check out okay.

What is the difference between the old server and the new?

The old server has a different system board. Both are Intel, but the one I am trying to install the drives onto is older.


_______________________________________________
clug-talk mailing list
[email protected]
http://clug.ca/mailman/listinfo/clug-talk_clug.ca
Mailing List Guidelines (http://clug.ca/ml_guidelines.php)
**Please remove these lines when replying
