I am not sure where to begin with this issue. The server is a freshly
built CentOS 5.x machine with a 3ware 9500 controller and a 2 x 320 GB
RAID-1 array, set up with only two partitions: /dev/sda1 for /boot as
standard ext3, and /dev/sda2 as an LVM physical volume holding
VolGroup00.
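For reference, when things were still working, the layout should have
been visible with the standard tools (I did not save the output, so
these are just the commands, not my actual results):

    fdisk -l /dev/sda    # sda1 as Linux (ext3 /boot), sda2 as Linux LVM
    pvs                  # /dev/sda2 as the only physical volume
    vgs                  # a single VG: VolGroup00
    lvs                  # the logical volumes inside VolGroup00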

I also had another set of two older drives from the previous build of
the machine, also RAID-1. There was also a VolGroup00 on that
drive/partition, and since two volume groups with the same name cannot
both be active, I could not access it unless I switched the cables on
the controller. I seem to recall running some LVM tools a while back,
trying to rename the LVs/VG on the older drives so I could mount them,
but I gave up.
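If it matters, I gather the usual way around the duplicate name is to
rename one VG by its UUID, something like the following (the new name
here is my own invention; the UUID comes from vgs). I may well have
attempted something along these lines back then and left it half-done:

    vgs -o vg_name,vg_uuid                    # list both VGs with their UUIDs
    vgrename <uuid-of-old-VG> OldVolGroup00   # rename the old one out of the way
    vgchange -ay OldVolGroup00                # then activate and mount its LVs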

Everything had been running fine until I removed the two older drives,
leaving only the 2 x 320 GB pair with the single VolGroup00 on
/dev/sda2.

But now when I boot there is a disk check failure looking for
/dev/sdb2, and even when I re-attach the older drives I still can't
boot from either pair; I get LV metadata errors.
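My guess, and it is only a guess, is that my earlier rename attempt or
the removal itself left a stale reference to the second array behind.
From the CentOS install media in rescue mode I was planning to poke
around like this before touching anything:

    # boot the install CD with: linux rescue
    pvscan -v                            # which PVs does LVM actually see?
    vgscan -v                            # which VGs, and with what errors?
    vgdisplay -v VolGroup00
    grep sdb /mnt/sysimage/etc/fstab     # any stale /dev/sdb2 entry left over?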

I figure, HOPE AND PRAY actually, that I can somehow check/edit some
LVM metadata files somewhere to set things back to a single VolGroup00
using /dev/sda2. I noticed many different files in /etc/lvm/archive
and /etc/lvm/backup, and looking at those files I see various entries,
some I know can't be right and some that look correct for VolGroup00.
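From the man pages it looks like vgcfgrestore is the tool for writing
one of those archived descriptions back onto the disk; I imagine the
procedure is roughly this, though picking the right archive file is
exactly the part I am unsure about (VolGroup00_00007.vg is just one of
the files I noticed, not necessarily the right one):

    vgcfgrestore --list VolGroup00     # list the archives with timestamps
    vgcfgrestore -f /etc/lvm/archive/VolGroup00_00007.vg VolGroup00
    vgchange -ay VolGroup00            # then try to activate the VG again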

Can someone suggest an approach to recover my working VG, and tell me
where the current settings that get loaded actually live? Is this
/etc/lvm/lvm.conf? Its contents seem very different from what's in
/etc/lvm/archive/VolGroup00_00007.vg and the files in /etc/lvm/backup.
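My tentative understanding, which may be wrong, is that lvm.conf only
configures the tools, that the authoritative VG metadata actually
lives on the disks in each PV's metadata area, and that archive/backup
are just text snapshots of it. If that is right, these should show
what is really on /dev/sda2 right now:

    pvck -v /dev/sda2                          # sanity-check the on-disk metadata area
    vgcfgbackup -f /tmp/current.vg VolGroup00  # dump the current on-disk metadata to a file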

I am really stuck at this point and would be so grateful to anyone
who could help.


