Sean,

What does "ls -l /dev/vghome/ /dev/vgdata/" show?


Mark Post

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Thursday, April 28, 2005 10:16 AM
To: [email protected]
Subject: Lost or AWOL VG's on SLES9


Hello all,

Tom mentioned the issue below a few weeks ago:

Tom Duerbusch
Tue, 29 Mar 2005 08:49:38 -0800
I think this is an easy one.

I added some dasd a couple of weeks ago.  My notes show that I didn't do it
manually, I used Yast (so I should have been covered <g>).

Then I created a LVM over the 5 volumes, again thru Yast.

Started using LVM and everything was great.

Yesterday, I did the first reboot of this machine since I added the dasd.

Well, the boot fails with:

Activating LVM volume groups...

  No volume groups found

..skipped

Checking file systems...

fsck 1.34 (25-Jul-2003)

Checking all file systems.

[/sbin/fsck.reiserfs (1) -- /use] fsck.reiserfs -a /dev/dasdb1

Reiserfs super block in block 16 on 0x0 of format 3.6 with standard journal
Blocks (total/free): 600816/592586 by 4096 bytes

Filesystem is clean

Replaying journal..

Reiserfs journal '/dev/dasdb1' in blocks [18..8211]: 0 transactions replayed
Checking internal tree..finished

[/sbin/fsck.reiserfs (1) -- /data1] fsck.reiserfs -a /dev/system/lvmdata1


Failed to open the device '/dev/system/lvmdata1': No such device or address
Warning... fsck.reiserfs for device /dev/system/lvmdata1 exited with signal 6.


A cat of /proc/dasd/devices does not show the 5 volumes.
/etc/zipl.conf no longer (in sles9) shows the volumes Linux
knows about.  Apparently it is sensing the devices dynamically.

The big difference between Sles8 and Sles9 is that we now have to "activate"
dasd prior to dasdfmt, etc.

I think all I need to do is:

1.  manually activate the dasd
2.  update some file to show these volumes should be activated.

Anyway, I'm now at:

fsck failed.  Please repair manually and reboot. The root
file system is currently mounted read-only. To remount it read-write do:

   bash# mount -n -o remount,rw /

Attention: Only CONTROL-D will reboot the system in this maintenance mode.
shutdown or reboot will not work.

Give root password for login:

Which means, my only access is via 3270 console (no IP connections have been
started).  And only dasda1 is mounted.

So, how do I manually "activate" dasd?  Assuming that was the problem.

*====================================================================*

Now I have the following situation:

1.) mounted the root FS on a healthy system and commented out the VG mounts
in /etc/fstab:

/dev/dasdb1          /                    reiserfs  acl,user_xattr   1 1
/dev/dasdc1          /var                 reiserfs  acl,user_xattr   1 2
/dev/dasda           swap                 swap      pri=42           0 0
devpts               /dev/pts             devpts    mode=0620,gid=5  0 0
proc                 /proc                proc      defaults         0 0
sysfs                /sys                 sysfs     noauto           0 0
# /dev/vghome/lvhome /export/home         reiserfs  acl,user_xattr   1 2
# /dev/vgdata/lvdata /opt/app_data1       reiserfs  acl,user_xattr   1 2

2.) re-IPLed image -> ok

3.) cat /proc/dasd/devices shows all disks:

0.0.0200(FBA ) at ( 94:     0) is dasda : active at blocksize: 512, 81920 blocks, 40 MB
0.0.0201(ECKD) at ( 94:     4) is dasdb : active at blocksize: 4096, 1058400 blocks, 4134 MB
0.0.0300(ECKD) at ( 94:     8) is dasdc : active at blocksize: 4096, 180000 blocks, 703 MB
0.0.0311(ECKD) at ( 94:    12) is dasdd : active at blocksize: 4096, 1802880 blocks, 7042 MB
0.0.0312(ECKD) at ( 94:    16) is dasde : active at blocksize: 4096, 1802880 blocks, 7042 MB
0.0.0313(ECKD) at ( 94:    20) is dasdf : active at blocksize: 4096, 1802880 blocks, 7042 MB
0.0.0500(ECKD) at ( 94:    24) is dasdg : active at blocksize: 4096, 135000 blocks, 527 MB

4.) vgscan:

chrl3008:/home/srzkeg # vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vghome" using metadata type lvm2
  Found volume group "vgdata" using metadata type lvm2

5.) vgdisplay:

chrl3008:/home/srzkeg # vgdisplay
  --- Volume group ---
  VG Name               vghome
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                255
  Cur LV                1
  Open LV               0
  Max PV                255
  Cur PV                1
  Act PV                1
  VG Size               524.00 MB
  PE Size               4.00 MB
  Total PE              131
  Alloc PE / Size       131 / 524.00 MB
  Free  PE / Size       0 / 0
  VG UUID               kA7F9m-eSrJ-6jyQ-4BmX-Jatx-fyPU-OL2sCx

 --- Volume group ---
  VG Name               vgdata
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                255
  Cur LV                1
  Open LV               0
  Max PV                255
  Cur PV                3
  Act PV                3
  VG Size               20.62 GB
  PE Size               4.00 MB
  Total PE              5280
  Alloc PE / Size       5274 / 20.60 GB
  Free  PE / Size       6 / 24.00 MB
  VG UUID               LIdxJ3-8J8o-k7Pm-VB8t-CQst-xMhv-NKMlWS

7.) lvdisplay output:

chrl3008:/home/srzkeg # lvdisplay /dev/vghome/lvhome
  --- Logical volume ---
  LV Name                /dev/vghome/lvhome
  VG Name                vghome
  LV UUID                85eEE5-CBiG-tt6W-RCo4-8se1-6H1p-rnfwz0
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                524.00 MB
  Current LE             131
  Segments               1
  Allocation             next free (default)
  Read ahead sectors     0

chrl3008:/home/srzkeg # lvdisplay /dev/vgdata/lvdata
  --- Logical volume ---
  LV Name                /dev/vgdata/lvdata
  VG Name                vgdata
  LV UUID                o3PV81-X4li-8uEa-8s3t-aq4K-A5TP-G8D8iM
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                20.60 GB
  Current LE             5274
  Segments               3
  Allocation             next free (default)
  Read ahead sectors     0


8.) edited /etc/fstab to uncomment vg's:

/dev/dasdb1          /                    reiserfs  acl,user_xattr   1 1
/dev/dasdc1          /var                 reiserfs  acl,user_xattr   1 2
/dev/dasda           swap                 swap      pri=42           0 0
devpts               /dev/pts             devpts    mode=0620,gid=5  0 0
proc                 /proc                proc      defaults         0 0
sysfs                /sys                 sysfs     noauto           0 0
/dev/vghome/lvhome   /export/home         reiserfs  acl,user_xattr   1 2
/dev/vgdata/lvdata   /opt/app_data1       reiserfs  acl,user_xattr   1 2


9.) mount -a:

chrl3008:/home/srzkeg # mount -a
mount: /dev/vghome/lvhome is not a valid block device
mount: /dev/vgdata/lvdata is not a valid block device

Any ideas / suggestions?

TIA







Regards / Cordialement / Gruss / Saludos

Cassidy IT Services

John Cassidy Dipl.Ingr (Informatique)
Schleswigstr. 7
51065 Cologne / Koeln / Keulen

EU


Tel. +49 (0) 177 799 58 56

Email: [EMAIL PROTECTED]

HTTP: www.jdcassidy.net

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions, send email
to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
