On Tue, Jun 14, 2011 at 02:15:18PM +1000, david wrote:

> Next, how do I persuade the new partition to boot? Do I have to do some  
> magic with grub? If so, what? Do I cpio the old /boot onto the new,  
> non-LVM boot partition? or can I use /boot within the new LV?
>
> Everything I read says to put /boot into a non-lvm partition. Does  
> grub-install  from a live CD give me the opportunity to spell out the  
> right parameters?

I'm trying to do the same thing right now, and I've got *almost*
everything working.  Now when I try to boot from the new drive, I see
some error messages flash by during boot (they're not logged to syslog
and don't appear when I run dmesg) saying that it can't write to
/lib/modules/`uname -r`/volatile because it's a read-only filesystem. 

This is happening because the tmpfs that's normally mounted there isn't
being mounted, and I have no idea why.  I don't know where this mount is
supposed to be done, and I haven't had any luck finding anything useful
via Google; without knowing where it's done, I can't see how to fix it.

Is there anyone out there who knows how to fix this, or who can give me
a clue or two to help me figure it out?
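
To illustrate, this is the sort of thing I'd expect to work around it by
hand; the mount options and the places I grep are guesses on my part,
not something I've confirmed:

```shell
# Hunt for whatever is supposed to create this mount at boot
# (fstab and the init scripts are the obvious suspects).
grep -rs volatile /etc/fstab /etc/init.d 2>/dev/null

# Mount the missing tmpfs by hand (as root); the mode option is a
# guess at what the boot scripts would normally use.
mount -t tmpfs -o mode=0755 tmpfs "/lib/modules/$(uname -r)/volatile"
```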


This is what I've done so far:

I split the new drive (/dev/sdb) into three partitions and started a
degraded RAID 1 array on each of them.  I formatted the first with ext3
(for /boot), the second as swap, and the third is my LVM volume with
separate logical volumes for /, /home, /tmp, /usr and /var.  I mounted all of
the new filesystems under /media/lvm and copied the files from the
existing drives, then created a new initramfs and installed grub, like
this:

    # these two files are used by update-initramfs to build the
    # initrd
    cp /proc/cmdline /media/lvm/proc/
    cp /proc/modules /media/lvm/proc/

    # create an mdadm.conf on the new drive
    mdadm -E -s >> /media/lvm/etc/mdadm/mdadm.conf

    # chroot into the new drive
    chroot /media/lvm/

    # change the root device in the copy of /proc/cmdline, mine
    # now contains "root=/dev/mapper/vg0-root ro"
    vi /proc/cmdline

    # update the mounts in the new fstab to use the new RAID/LVM
    # partitions
    vi /etc/fstab
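    # for example, entries along these lines (device names are
    # just illustrative, not necessarily what yours should be):
    #   /dev/md0              /boot  ext3  defaults           0  2
    #   /dev/mapper/vg0-root  /      ext3  errors=remount-ro  0  1
    #   /dev/md1              none   swap  sw                 0  0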

    # edit the new grub menu so that the kernel's root device is
    # the new LVM root device (e.g. /dev/mapper/vg0-root), the
    # grub root device is the new /boot partition or RAID array,
    # and the kernel and initrd pathnames are relative to /boot
    vi /boot/grub/menu.lst

    # create a new initrd that includes LVM and RAID support
    # do this once for each kernel version you want to be able
    # to boot (replace "`uname -r`" with the kernel version)
    update-initramfs -c -k `uname -r`

    # install grub on the new drive
    grub-install /dev/sdb
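

In case the details matter, starting a degraded RAID 1 array on each
partition looks something like this; the device names match my layout,
but double-check the flags against your mdadm:

```shell
# One degraded RAID 1 array per partition; "missing" holds the slot
# for the old drive, which gets added with "mdadm --add" later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 missing

# Filesystems and LVM then go on the md devices, not the partitions:
mkfs.ext3 /dev/md0     # /boot
mkswap /dev/md1
pvcreate /dev/md2
vgcreate vg0 /dev/md2
```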


Any suggestions on how to fix my mount problem are welcome.


Thanks,

John

-- 
I don't know what Connect[.com.au] were thinking when they put sprinklers 
in their data centre. I wonder what they'd do if you asked for a quote for 
enough rack space to hold 3 servers, a router, a switch and an umbrella?
            -- Richard Archer
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
