I had an interesting experience this morning.  I am in a transition
between two sets of disk drives on my Dell server, and have the future
replacement data drive in an external USB box.  I have built an LVM
structure on that drive, and copied all the files from the internal
drive to it.

I can manually mount the external USB drive, or make an appropriate
entry in /etc/fstab and mount it with
 $ sudo mount -a
 $ df -h | grep data1
/dev/mapper/G2-Data2  247G  171G   64G  74% /data1
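
One workaround I'm considering is marking the fstab entry noauto so
the boot-time fsck and mount skip the drive entirely.  Something like
this (the filesystem type is a guess on my part, untested):

 # /etc/fstab -- noauto keeps boot from failing when the USB drive
 # isn't up yet; the final 0 also skips boot-time fsck
 /dev/G2/Data2   /data1   ext3   defaults,noauto   0 0

Then the volume gets mounted by hand, or from a script, once the
system is up.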

But if I reboot the system, the USB drives are not available at the
time that the startup process probes LVM devices, and thus not
available for fsck and mounting.  So the system comes to the error
screen "enter root password to repair".  And I can't repair it from
there because the last part of the repair process is rebooting.

If I comment out the /dev/G2/Data2 line in /etc/fstab the system comes
up.  Then I can make the LVM volume available with
 $ sudo /usr/sbin/vgchange -a y G2
 $ sudo mount /dev/G2/Data2 /data1      # yes, the names are mixed up

(vgchange activates a whole volume group, so it takes the VG name G2
rather than the logical volume path.)

So what I need is a good way to automate this process, or to work
around it.  Google Groups doesn't turn up anything I found useful.
I suppose it could be a script in rc.local, if USB devices are
available by the time rc.local runs.
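
Something like this in rc.local might do it, assuming the volume
group is named G2 and the USB bus has been probed by then (an
untested sketch, not a known-good recipe):

 # /etc/rc.local -- activate the G2 volume group now that the
 # USB devices exist, then mount the data volume
 /usr/sbin/vgscan
 /usr/sbin/vgchange -a y G2
 mount /dev/G2/Data2 /data1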

Note that this is not a permanent condition, and will be cured by
moving the new disk to its home in the CPU box.

   carl
--
   carl lowenstein         marine physical lab     u.c. san diego
                                                [EMAIL PROTECTED]


--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
