Hello Steffen,
I tried this configuration:

  root=/dev/ram0 ro ip=off ramdisk_size=40000 rescue

and it still boots the installer, not a rescue system:

  …
  Starting sshd to allow login over the network.
  Connect now to 172.27.20.19 and log in as user install to start the installation.
  E.g. using: ssh -x install@172.27.20.19
  You may log in as the root user to start an interactive shell.

In the installer's root ssh session I have no commands for LVM:

  Welcome to the anaconda install environment 1.2 for zSeries
  /sbin/xauth: creating new authority file /root/.Xauthority
  [anaconda root@linux8 root]# lsdasd
  Bus-ID     Status    Name    Device  Type  BlkSz  Size     Blocks
  ==============================================================================
  0.0.0200   active    dasdb   94:4    ECKD  ???    2347MB   ???
  0.0.0201   active    dasdc   94:8    ECKD  ???    2347MB   ???
  0.0.0202   active    dasdd   94:12   ECKD  ???    2347MB   ???
  0.0.0203   active    dasde   94:16   ECKD  ???    2347MB   ???
  0.0.0204   active    dasdf   94:20   ECKD  ???    7043MB   ???
  0.0.0205   active    dasdg   94:24   ECKD  ???    7043MB   ???
  0.0.0206   active    dasdh   94:28   ECKD  ???    7043MB   ???
  [anaconda root@linux8 root]# fdsik -l
  -bash: fdsik: command not found
  [anaconda root@linux8 root]# pvdisplay
  -bash: pvdisplay: command not found
  [anaconda root@linux8 root]# vgchange
  -bash: vgchange: command not found
  [anaconda root@linux8 root]# parted
  -bash: parted: command not found
  [anaconda root@linux8 root]# lvm
  -bash: lvm: command not found
  [anaconda root@linux8 root]# dracut
  -bash: dracut: command not found

> Maybe I'm confused, but I thought your DASDs would be PVs in an LVM
> configuration so you would not mount the individual PVs, but they need to be
> assembled in an LVM VG and you would access some LV of that VG?

You're right, those are PVs in an LVM configuration. Maybe I could try to
activate vg_root in another Linux guest (called RESCUE, by the way), but
wouldn't that cause problems with that guest's own running root? The RHEL
rescue system (the one I can't start) mounts it as /mnt/sysimage.

> (step 2 sounds quite like your use case with root-fs on LVM)

You're right there too, but I can't start the rescue system…

> I suppose you'd need to fixup your config files on the root-fs including

Completely agree, but… no rescue shell… =[

What am I doing wrong when starting the rescue system? I can't see it. I'm
still going to try from the Linux "RESCUE" virtual machine; rough sketches of
what I plan to run there are below, before the quoted message.

Thanks again.

Roberto.
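Here is a rough sketch of what I plan to try from the RESCUE guest once disks
200-206 are attached to it. It is untested; it assumes the volume group really
is called vg_root, that RESCUE's own root VG has a different name, and the
root LV name is only a placeholder because I don't know it yet:

  # on RESCUE, after DASDs 200-206 have been attached to the guest
  chccwdev -e 0.0.0200-0.0.0206      # set the DASDs online
  lsdasd                             # check which device names they get here
  pvscan                             # LVM should now see the PVs
  vgscan
  vgchange -ay vg_root               # activate only that VG
  lvs vg_root                        # list its LVs to find the root LV
  mount -o ro /dev/vg_root/<root_lv> /mnt/sysimage   # read-only first; <root_lv> still unknown

As far as I understand, activating vg_root should not disturb RESCUE's own
running root as long as the VG names differ; if they were the same, I would
first have to rename one of them via its UUID (vgs -o vg_name,vg_uuid, then
vgrename <uuid> <new_name>).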
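If that works, my plan for the "fixup the config files and rebuild the initrd"
part (per the IBM page you linked) is roughly the following. Again only a
sketch: the /boot device name, the bind mounts and the target kernel version
are my assumptions, nothing I have verified yet:

  mount /dev/vg_root/<root_lv> /mnt/sysimage     # now read-write
  mount /dev/dasdX1 /mnt/sysimage/boot           # whichever name DASD 200 gets on RESCUE, assuming it holds /boot
  mount --bind /dev  /mnt/sysimage/dev
  mount --bind /proc /mnt/sysimage/proc
  mount --bind /sys  /mnt/sysimage/sys
  chroot /mnt/sysimage
  # fix /etc/dasd.conf, /etc/zipl.conf, /etc/fstab as needed, then:
  mkinitrd -f /boot/initramfs-<target_kernel>.img <target_kernel>   # the target system's kernel version, not RESCUE's uname -r
  zipl -V
  exit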
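And in case I ever do get the dracut emergency shell on the guest itself, my
reading of the dracut page you linked (step 2, root-fs on LVM) is roughly this,
also untested:

  # inside the dracut shell
  lvm vgscan
  lvm vgchange -ay vg_root
  exit        # dracut should then retry mounting the root-fs and continue booting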
> This looks like upstream (and RHEL7 or RHEL8) syntax
> [https://anaconda-installer.readthedocs.io/en/latest/boot-options.html#inst-rescue],
> whereas RHEL6 has an older slightly different syntax:
>
> IBM Z:
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-miscellaneous_parameters
>
> general (not everything applies to Z):
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ap-rescuemode#s1-rescuemode-boot
>
> Does this help?
>
> I think you don't even need the cms conf file for the rescue system, as the
> latter runs off the initrd and without network. The cms conf file is only
> parsed by the very early installer phase which should not run in rescue mode.
>
> > CMSDASD=191 CMSCONFFILE=redhat.conf
> >
> > REDHAT CONF A1
> >
> > DASD="200-206"
> > HOSTNAME="linux8"
> > NETTYPE="qeth"
> > IPADDR="172.27.20.19"
> > SUBCHANNELS="0.0.0900,0.0.0901,0.0.0902"
> > NETMASK="255.255.0.0"
> > GATEWAY="172.27.20.254"
> >
> > It does IPL the installation dialog but never the rescue system.
> >
> > > 3. Log the guest off and attach the disk(s) to another, running system.
> >
> > Working on this from 1. Mounted its first DASD (200) and could read it,
> > didn't find the /etc/dasd.conf needed.
> >
> > Linked second DASD (201), tried to mount it, and couldn't:
> >
> > [root@rescue ~]# mount -t ext4 /dev/dasdf1 /lx8/20x
> > mount: wrong fs type, bad option, bad superblock on /dev/dasdf1,
> >        missing codepage or helper program, or other error
> >        In some cases useful info is found in sysl
>
> Maybe I'm confused, but I thought your DASDs would be PVs in an LVM
> configuration so you would not mount the individual PVs, but they need to be
> assembled in an LVM VG and you would access some LV of that VG?
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/physvol_display
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_activate
>
> Basically, in the dracut (initrd) rescue shell you can manually prepare all
> necessary dependencies (devices) for the root-fs. Then try to exit the rescue
> shell and it will try (again) to mount the root-fs and continue to boot.
>
> https://mirrors.edge.kernel.org/pub/linux/utils/boot/dracut/dracut.html#accessing-the-root-volume-from-the-dracut-shell
> (step 2 sounds quite like your use case with root-fs on LVM)
>
> I suppose you'd need to fixup your config files on the root-fs including
> https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_t_initrd_rebld.html
> [applies to RHEL6, too]
> afterwards so any subsequent (re)boot will succeed without manual
> intervention.
>
> --
> Mit freundlichen Gruessen / Kind regards
> Steffen Maier
>
> Linux on IBM Z Development
>
> https://www.ibm.com/privacy/us/en/
> IBM Deutschland Research & Development GmbH
> Vorsitzender des Aufsichtsrats: Matthias Hartmann
> Geschaeftsfuehrung: Dirk Wittkopp
> Sitz der Gesellschaft: Boeblingen
> Registergericht: Amtsgericht Stuttgart, HRB 243294

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to LISTSERV@VM.MARIST.EDU with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------