Hello Steffen,

You have been a major help! It finally worked, though not as documented; here
is how, step by step:

1.       Created a RESCUE PRM file in MDISK 191:

root=/dev/ram0 ro ip=off ramdisk_size=40000
repo=ftp://172.27.20.33
CMSDASD=191 CMSCONFFILE=redhat.conf
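As an aside, CMS fixed-record files are 80 columns wide, so it is worth checking that no PRM line exceeds that before punching it. A minimal sketch (the local file name `rescue.prm` is just for this example):

```shell
# Recreate the three PRM lines from step 1 and verify none exceeds
# 80 columns (longer lines would be truncated in a fixed-80 CMS file).
cat > rescue.prm <<'EOF'
root=/dev/ram0 ro ip=off ramdisk_size=40000
repo=ftp://172.27.20.33
CMSDASD=191 CMSCONFFILE=redhat.conf
EOF
awk 'length($0) > 80 { print "line " NR " too long"; bad = 1 }
     END { if (!bad) print "PRM file OK" }' rescue.prm
# prints: PRM file OK
```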



2.       The REDHAT CONF file, also in MDISK 191:

DASD="200-206"
HOSTNAME="linux8"
NETTYPE="qeth"
IPADDR="172.27.20.19"
SUBCHANNELS="0.0.0900,0.0.0901,0.0.0902"
NETMASK="255.255.0.0"
GATEWAY="172.27.20.254"
LAYER2=0
DNS="172.27.20.254"
SEARCHDNS="something.com"



3.       The RESCUE EXEC A

/* RESCUE EXEC: punch the installer kernel, parm file and initrd */
/* to the virtual reader, then IPL from the reader.              */
'CL RDR'                    /* close the reader                  */
'PURGE RDR ALL'             /* discard leftover reader files     */
'SPOOL PUNCH * RDR'         /* spool the punch to our own reader */
'PUNCH KERNEL IMG A (NOH'   /* (NOH = no header record           */
'PUNCH RESCUE  PRM A (NOH'
'PUNCH INITRD IMG A (NOH'
'CH RDR ALL KEEP NOHOLD'    /* keep the files across the IPL     */
'I 00C'                     /* IPL from the reader at 00C        */



4.       Executed the RESCUE EXEC.



5.       SSHed in as user install.

Welcome to the anaconda install environment 1.2 for zSeries



/sbin/xauth:  creating new authority file /root/.Xauthority

detecting hardware...

waiting for hardware to initialize...

detecting hardware...

waiting for hardware to initialize...

Running anaconda 13.21.82, the Red Hat Enterprise Linux system installer -
please wait.

18:32:23 Starting graphical installation.



6.       Selected the language; the installer downloaded files from FTP and
presented the graphical installation dialog. Clicked NEXT on the first
screen.

7.       Selected “Basic Storage Devices” and clicked NEXT.

8.       An “Examining storage devices” window appeared, then disappeared,
and the ssh install session closed.

9.       Restarted the ssh session and logged in as root.

Welcome to the anaconda install environment 1.2 for zSeries

[anaconda root@linux8 root]#



10.   Activated the LVs.

[anaconda root@linux8 root]# lvm vgchange -ay

  2 logical volume(s) in volume group "vg_rh60" now active



11.   Mounted it.

[anaconda root@linux8 root]# mount -t ext4 /dev/vg_rh60/lv_root
/mnt/sysimage



12.   Changed root.

[anaconda root@linux8 root]# chroot /mnt/sysimage



13.   And could see the grown FS.

[anaconda root@linux8 /]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_rh60-lv_root
                      22217292   7974736  13119608  38% /
sysfs                 22217292   7974736  13119608  38% /sys



14.   Modified the /etc/zipl.conf file.

[anaconda root@linux8 /]# vi /etc/zipl.conf



[anaconda root@linux8 /]# cat /etc/zipl.conf
[defaultboot]
timeout=5
default=linux
target=/boot/
[linux]
        image=/boot/vmlinuz-2.6.32-71.el6.s390x
        ramdisk=/boot/initramfs-2.6.32-71.el6.s390x.img
        parameters="root=/dev/mapper/vg_rh60-lv_root rd_DASD=0.0.0200
rd_DASD=0.0.0201 rd_DASD=0.0.0202 rd_DASD=0.0.0203 rd_DASD=0.0.0204
rd_DASD=0.0.0205 rd_DASD=0.0.0206 rd_LVM_LV=vg_rh60/lv_root
rd_LVM_LV=vg_rh60/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8
SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!0.0.0009
crashkernel=auto"
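Typing seven rd_DASD= entries by hand is error-prone; a small loop (a hypothetical helper, not part of the procedure; the 200-206 range matches the DASD="200-206" line in REDHAT CONF) generates them for pasting into the parameters line:

```shell
# Print one rd_DASD= parameter per device in the 0.0.0200-0.0.0206 range.
for n in $(seq 200 206); do
    printf 'rd_DASD=0.0.0%s ' "$n"
done
echo
# prints: rd_DASD=0.0.0200 rd_DASD=0.0.0201 ... rd_DASD=0.0.0206
```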



15.   Exited the chroot shell.



[anaconda root@linux8 sbin]# exit

exit



16.   Mounted the boot DASD.



[anaconda root@linux8 root]# mkdir /boot

[anaconda root@linux8 root]# mount -t ext4 /dev/dasdb1 /boot



17.   Ran zipl with the config file parameter.



[anaconda root@linux8 root]# zipl -V -c /mnt/sysimage/etc/zipl.conf

Using config file '/mnt/sysimage/etc/zipl.conf' (from command line)

Target device information

  Device..........................: 5e:04

  Partition.......................: 5e:05

  Device name.....................: dasdb

  Device driver name..............: dasd

  DASD device number..............: 0200

  Type............................: disk partition

  Disk layout.....................: ECKD/compatible disk layout

  Geometry - heads................: 15

  Geometry - sectors..............: 12

  Geometry - cylinders............: 3339

  Geometry - start................: 24

  File system block size..........: 4096

  Physical block size.............: 4096

  Device size in physical blocks..: 128004

Building bootmap in '/boot/'

Building menu 'rh-automatic-menu'

Adding #1: IPL section 'linux' (default)

  kernel image......: /boot/vmlinuz-2.6.32-71.el6.s390x

  kernel parmline...: 'root=/dev/mapper/vg_rh60-lv_root rd_DASD=0.0.0200
rd_DASD=0.0.0201 rd_DASD=0.0.0202 rd_DASD=0.0.0203 rd_DASD=0.0.0204
rd_DASD=0.0.0205 rd_DASD=0.0.0206 rd_LVM_LV=vg_rh60/lv_root
rd_LVM_LV=vg_rh60/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8
SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!0.0.0009
crashkernel=auto'

  initial ramdisk...: /boot/initramfs-2.6.32-71.el6.s390x.img

  component address:

    kernel image....: 0x00010000-0x007a5fff

    parmline........: 0x00001000-0x00001fff

    initial ramdisk.: 0x02000000-0x02a1efff

    internal loader.: 0x0000a000-0x0000afff

Preparing boot device: dasdb (0200).

Preparing boot menu

  Interactive prompt......: enabled

  Menu timeout............: 5 seconds

  Default configuration...: 'linux'

Syncing disks...

Done.



18.   Logged the Linux virtual machine off and back on, and everything worked
fine!
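For anyone hitting the same problem, steps 10 to 17 boil down to the sequence below. It is shown as a dry run that only echoes each command; drop the `run` wrapper to execute it for real inside the anaconda root shell. The device, VG, and path names are the ones from this system, and the interactive chroot/vi steps are condensed into one line:

```shell
# Dry-run sketch of the repair sequence (steps 10-17).
# `run` only echoes; remove it to actually execute the commands.
run() { echo "+ $*"; }

run lvm vgchange -ay                                  # activate the LVs
run mount -t ext4 /dev/vg_rh60/lv_root /mnt/sysimage  # mount the root LV
run chroot /mnt/sysimage vi /etc/zipl.conf            # fix the boot config
run mkdir /boot
run mount -t ext4 /dev/dasdb1 /boot                   # mount the boot DASD
run zipl -V -c /mnt/sysimage/etc/zipl.conf            # rewrite boot record
```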

Looks like it never started the “real” rescue system, right? But at least
that “installation” kernel let me recreate the zipl boot record. Do you
think there are big gaps in the RHEL 6 documentation about this procedure?

You have been a great help; without it, this would have taken me much longer
to solve. Thanks a lot, I really appreciate all your help!

Have a great weekend.
Roberto.

PS. Thanks a lot to everyone else as well!

On Thu, Aug 22, 2019 at 05:25, Steffen Maier (
[email protected]) wrote:

> Hi Roberto,
>
> On 8/22/19 12:34 AM, Roberto Ibarra Magdaleno wrote:
> > Tried this configuration:
> >
> > root=/dev/ram0 ro ip=off ramdisk_size=40000
> > rescue
> >
> > and still booting installer, not a rescue system:
>
> > Starting sshd to allow login over the
> > network.
> > Connect now to 172.27.20.19 and log in as user install to start the
> > installation
> > .
> >
> > E.g. using: ssh -x [email protected]
>
> Those steps might still all be necessary for the rescue option kicking in
> later, see further down.
>
> > You may log in as the root user to start an interactive shell.
>
> This is also a possible alternative, but depending on which phases/stages
> the
> install process has dynamically loaded into the ramdisk, the available
> tools
> can be very minimal.
>
> (You can even get a root shell on the console (without network) just by
> pressing enter (twice in a z/VM guest).
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-s390-phase_1#ch-s390-Phase_1-terminals
> )
>
>
> > In the installer root ssh I have no commands for LVM:
>
> I looked at the content of RHEL6.10 images/initrd.img and it is indeed
> very
> minimal.
>
>
> > Welcome to the anaconda install environment 1.2 for zSeries
>
> > [anaconda root@linux8 root]# lsdasd
> > Bus-ID     Status      Name      Device  Type  BlkSz  Size      Blocks
> >
> ==============================================================================
> > 0.0.0200   active      dasdb     94:4    ECKD  ???    2347MB    ???
> > 0.0.0201   active      dasdc     94:8    ECKD  ???    2347MB    ???
> > 0.0.0202   active      dasdd     94:12   ECKD  ???    2347MB    ???
> > 0.0.0203   active      dasde     94:16   ECKD  ???    2347MB    ???
> > 0.0.0204   active      dasdf     94:20   ECKD  ???    7043MB    ???
>
> > 0.0.0205   active      dasdg     94:24   ECKD  ???    7043MB    ???
> > 0.0.0206   active      dasdh     94:28   ECKD  ???    7043MB    ???
>
> So at least the 2 new minidisks are there and active, that's good.
>
> Anaconda and its environment tooling handle cio_ignore transparently for
> the
> user, so usually there is no need to change the cio_ignore= kernel
> parameter.
>
> > [anaconda root@linux8 root]# pvdisplay
> > -bash: pvdisplay: command not found
>
> See further down for a possibly different syntax due to ramdisk space
> reasons,
> not necessarily providing all the individual end user process binary names
> for
> the LVM tooling suite.
>
> > You’re right those are PVs in an LVM, maybe I would try to activate the
> > vg_root in another Linux (called RESCUE by the way) but wouldn’t It cause
> > problems with the running root? Since the rescue RHEL system (that I
> can’t
> > start) mounts it as /mnt/sysimage.
>
>  From an LVM assembly point of view it should not cause problems as the
> VGs
> should have different names.
> I suppose you would mount the LVM to be fixed under some free mount point
> and
> chroot there.
>
> > Still going to try from the Linux “RESCUE” virtual machine.
>
> That's a good alternative.
>
>
> >> whereas RHEL6 has an older slightly different syntax:
> >>
> >> IBM Z:
> >>
> >>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-miscellaneous_parameters
> >>
> >> general (not everything applies to Z):
> >>
> >>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ap-rescuemode#s1-rescuemode-boot
> >>
> >> Does this help?
> >>
> >> I think you don't even need the cms conf file for the rescue system, as
> >> the
> >> latter runs off the initrd and without network. The cms conf file is
> only
> >> parsed by the very early installer phase which should not run in rescue
> >> mode.
>
> I guess I stand corrected. My apologies. The "rescue" option is part of
> anaconda and that lives in install.img which is in turn loaded by
> initrd.img,
> typically over the network on s390.
> So probably you do need all the installer network setup on the kernel parm
> file
> including an indirect pointer to the install.img by means of repo=
> [
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-loader_parameters
> ].
> I think repo= in the parm file is optional. If you run without it, connect
> as
> install user over ssh, then loader runs and you can teach the repo
> location
> interactively in the TUI
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/s1-installationmethod-s390
> .
> IIRC, the rescue will only start after above and the step where it pulls
> install.img
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch22s06
> Only then you would get the rescue part instead of the anaconda installer.
>
> PC users booting the installer of a DVD don't notice that as the boot
> process
> would automatically load initrd.img which would automatically load
> install.img
> from the same DVD and so the "rescue" boot option simply materializes
> without
> further steps there. PC users installing over the network would be more
> like
> the process on s390.
>
> >>> CMSDASD=191 CMSCONFFILE=redhat.conf
>
> >>> REDHAT   CONF     A1
> >>> DASD="200-206"
> >>> HOSTNAME="linux8"
> >>> NETTYPE="qeth"
> >>> IPADDR="172.27.20.19"
> >>> SUBCHANNELS="0.0.0900,0.0.0901,0.0.0902"
> >>> NETMASK="255.255.0.0"
> >>> GATEWAY="172.27.20.254"
>
>
> >> Basically, in the dracut (initrd) rescue shell you can manually prepare
> >> all
> >> necessary dependency (devices) for the root-fs. Then try to exit the
> >> rescue
> >> shell and it will try (again) to mount the root-fs and continue to boot.
> >>
> >>
> >>
> https://mirrors.edge.kernel.org/pub/linux/utils/boot/dracut/dracut.html#accessing-the-root-volume-from-the-dracut-shell
> >> (step 2 sounds quite like your use case with root-fs on LVM)
>
> # lvm vgscan
> # lvm vgchange -ay
>
>
> --
> Mit freundlichen Gruessen / Kind regards
> Steffen Maier
>
> Linux on IBM Z Development
>
> https://www.ibm.com/privacy/us/en/
> IBM Deutschland Research & Development GmbH
> Vorsitzender des Aufsichtsrats: Matthias Hartmann
> Geschaeftsfuehrung: Dirk Wittkopp
> Sitz der Gesellschaft: Boeblingen
> Registergericht: Amtsgericht Stuttgart, HRB 243294
>
> ----------------------------------------------------------------------
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO LINUX-390 or
> visit
> http://www2.marist.edu/htbin/wlvindex?LINUX-390
>
