On 17/07/10 09:11, Matthew Glubb wrote:
Hi All,
I have a problem replacing a failed disk in a RAID1 array that hosts an LVM volume.
In the past, when a disk has failed, I have dropped the offending disk
from the array, replaced the disk, booted, rebuilt the partitions on the new
disk, and re-synced the array. I've done this about four times with this method.
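For reference, that procedure corresponds roughly to the following (a sketch only; the array name /dev/md0 and member partitions /dev/sda1 and /dev/sdb1 are illustrative assumptions, not taken from the poster's config):

```shell
# Mark the failing member faulty and remove it from the mirror
# (illustrative names: array /dev/md0, failing partition /dev/sdb1)
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Power down, swap in the new disk, boot from the surviving disk,
# then copy the partition layout from /dev/sda to the new /dev/sdb
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Re-add the new partition and watch the resync progress
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat
```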
However, I recently upgraded from Etch to Lenny. This week, I had a degraded
array warning; a disk is failing.
So. I duly repeated the steps to replace the disk but on booting with the new
unformatted disk, I get the following error:
"Alert! /dev/mapper/vg00-lv01 does not exist ...
...Dropping to shell"
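When that alert drops you to the initramfs shell, a common general check (not specific to this report) is whether the array and the volume group can be brought up by hand:

```shell
# From the (initramfs) busybox shell: try assembling all arrays
# found by scanning member superblocks / listed in mdadm.conf
mdadm --assemble --scan

# Then try activating the volume group (vg00 is the name from the
# error message above); in the initramfs this may need the "lvm"
# wrapper, i.e. "lvm vgchange -ay vg00"
vgchange -ay vg00

# If both succeed, exiting the shell usually lets the boot continue
```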
At the moment, I have had to reinstall the old, failing disk in order to be
able to boot and run the system. Has anyone had this problem before? Does
anyone know of any solution to it?
I've included the relevant disk/raid configuration at the end of this email.
The device /dev/sdb is the one that is failing.
You don't include the one piece of information that shows whether the
volume group sits on the RAID device.
Can you run either pvdisplay or vgdisplay -v?
Does it show the volume group sitting on the RAID device?
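For example, any of the following would show where the volume group lives (the VG name vg00 is taken from the error message; the exact output will of course depend on the poster's system):

```shell
# Show each physical volume and the VG it belongs to
pvdisplay

# Verbose view of one VG; the physical volumes are listed at the end.
# On a healthy setup the PV should be the md device (e.g. /dev/md0),
# not a raw partition like /dev/sdb2.
vgdisplay -v vg00

# Compact alternative: one line per PV with its VG
pvs -o pv_name,vg_name
```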
--
Alan Chandler
http://www.chandlerfamily.org.uk
--
To UNSUBSCRIBE, email to [email protected]
with a subject of "unsubscribe". Trouble? Contact [email protected]
Archive: http://lists.debian.org/[email protected]