Thank you to everyone who responded.

No, we did not run mkinitrd and zipl after adding the second LUN to the root file system. When we have added LUNs in the past it has been unnecessary because they were not needed during boot - I started to understand this just after I posted my question. (My Linux knowledge is not very good - I understand z/OS and System z in general and do the 'z' parts of installing zLinux. A colleague who knows little about System z then takes over. There seems to be a big gap in our combined knowledge, but I have now learned something about the boot process - at least I now understand that the 'rd' in initrd stands for 'ram disk'.)

Unfortunately the volume groups in all of our systems are called 'VolGroup00', so I was not able to vgimport the group into a RH 5 system. Instead I accessed it from a SUSE system, which does not use LVs, so that I could rename it. I was just about to import it into a RH 5 system so that I could run mkinitrd and zipl under chroot when I read Steffen's reference to the restrictions with LVs in RH 5. The reason for adding the second PV was that the first was full, so it is possible that any new boot image will be created on the new physical volume. I think this is getting too difficult - re-installing with a later release is probably more productive.

Incidentally, I spent some time yesterday attempting to enter kernel parameters using the SCPDATA option of LOADDEV in z/VM V5:

(a) I assumed that the parameters have to be in ASCII, so I entered them in hexadecimal.
(b) Is this method actually available for RH 5? I found the reference in the RH 6 documentation.
(c) If SCPDATA can be used with RH 5, what is the syntax of the zfcp options? I assumed that it was "zfcp.device=0.0.uuuu,<wwpn>,<lun>", but I also tried the "rd.zfcp" format from RH 7. Should it be "rd_ZFCP" as in RH 6, or is it not available at all?

Keith

On Wednesday, 21 May 2014, 11:21, Steffen Maier <[email protected]> wrote:
Hi Keith,

On 05/20/2014 02:36 PM, Keith Gooding wrote:
> We installed a RH 5.2 system where the root file system is on a
> logical volume comprising a single physical volume which is on a zFCP LUN.
> Later we added another zFCP LUN, updated zfcp.conf and added the
> physical volume to the LV. This appeared to be OK until we had a
> scheduled reboot of the system.

Did you run mkinitrd and then zipl after having changed /etc/zfcp.conf? RHEL5 simply copies activation statements for all entries of zfcp.conf into the initrd (whether those LUNs are needed for the root-fs or not, so it is at least sufficient to activate all LUNs for the root-fs).

> The boot starts OK until:
>
> Scanning logical volumes
> Reading all physical volumes. This may take a while...
> Couldn't find device with uuid 'YD0yYY-Ty0J-I303-2jqQ-4bYs-SSFx-QyzQzc'.
> Found volume group "VolGroup00" using metadata type lvm2
>
> Eventually it gives up because it cannot mount the root file system
> because the logical volume is incomplete. I was able to access both
> LUNs from another system and I ran vgimport on that system to check
> the zfcp.conf. It seems to be OK. Does anyone know if I need some
> entries in the zipl.conf file for the zfcp LUNs? The RHEL 7 (beta)
> documentation states that if the root file system is on a logical
> volume using zFCP then entries are needed in zipl.conf, but I cannot
> find any instructions for RHEL 5, or indeed any documentation of the
> syntax for zipl.conf. The zipl.conf which was generated by the RHEL 5
> installer does not have any zfcp entries. We could just install a new
> system with a larger root file system, but I would like to be able to
> fix this if possible.
>
> Keith Gooding

This procedure is only different starting with RHEL6 (including RHEL7, where the syntax of rd_ZFCP= changed slightly to rd.zfcp=).
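[For anyone following along: the RHEL5 flow Steffen describes would look roughly like the fragment below. The device number, WWPN and LUNs are made-up placeholders, not values from this thread - check your own paths with lszfcp or the output of the original LUN attachment.]

```
# /etc/zfcp.conf on RHEL5: one "<device_bus_id> <wwpn> <fcp_lun>" triple
# per line, one line per LUN (values below are hypothetical)
0.0.1700 0x500507630300c562 0x4010400000000000
0.0.1700 0x500507630300c562 0x4010400100000000
```

After editing the file, re-create the initrd with mkinitrd and then re-run zipl so the new activation statements actually land in the boot record.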
There, dracut (the successor of mkinitrd) does not (depending on dracut config) copy activation statements from zfcp.conf into the initramfs; instead the user (or anaconda during installation) has to use explicit rd.zfcp= statements as pseudo kernel boot parameters to activate the LUNs of all paths required for the root-fs:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-s390info-Adding_FCP-Attached_LUNs-Persistently.html
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Installation_Guide/sect-post-installation-fcp-attached-luns-persistent-s390.html

Everything else, such as all data volumes (even /boot if it is not included in the root-fs) or SCSI tapes, only goes into /etc/zfcp.conf:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-s390info-Adding_FCP-Attached_LUNs-Persistently-Not_part_of_root_file_system.html
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Installation_Guide/sect-post-installation-fcp-attached-luns-no-root-s390.html

Re-running mkinitrd/dracut is not necessary in all cases (only if lvm.conf or other config files that changed are included in the initramfs), but it would probably be safest to re-create it anyway. You definitely have to re-run zipl to enable the added kernel boot parameters.

If I looked it up correctly, RHEL5 does not have zipl support for device-mapper devices [1]. So /boot is most likely a single-path SCSI device (or DASD) without LVM in your case anyway, and thus independent of and unrelated to the root-fs.
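[As a concrete illustration of those pseudo kernel boot parameters: a RHEL6-style zipl.conf stanza carrying the root-fs LUNs might look like the sketch below. All device numbers, WWPNs, LUNs and kernel versions are invented placeholders, and on RHEL7 the parameter would be rd.zfcp= instead of rd_ZFCP=; consult the Red Hat documentation linked above for the authoritative syntax.]

```
[linux]
    image=/boot/vmlinuz-2.6.32-71.el6.s390x
    ramdisk=/boot/initramfs-2.6.32-71.el6.s390x.img
    parameters="root=/dev/mapper/VolGroup00-LogVol00 rd_ZFCP=0.0.fc00,0x500507630300c562,0x4010400000000000 rd_ZFCP=0.0.fc00,0x500507630300c562,0x4010400100000000"
```

One rd_ZFCP= entry is needed per path/LUN required for the root-fs, and zipl must be re-run after every change to this file.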
Other releases that have this support (such as RHEL6/7 or SLES, via upstream s390-tools-1.8.3 or as a backport) have particular constraints for the device-mapper support, which are described in detail in the book 'Device Drivers, Features, and Commands', Chapter 'Initial program loader for System z - zipl', Section 'Preparing a logical device as a boot device' [2, and pick a suitable distro release]:

<quote>
You can prepare logical devices as boot devices. Logical devices are provided by device drivers that do not work on real hardware. For example, device mapper provides logical devices. In this context, a target device means a logical device on which the file system is located. A base device is a physical device on which the logical device is located, or a logical device that is a linear mapping beginning at block 0 of the physical device.

You can prepare a logical DASD or SCSI device as a boot device if the following conditions are met:
* Kernel, initial RAM disk, and parameter files are all located on a logical device that maps to a single base device. This base device can be mirrored or accessed through a multipath configuration.
* Adjacent data blocks on the logical device correspond to adjacent data blocks on the base device.
* zipl has access to the first blocks, including block 0, of the base device.
</quote>

The first bullet generically describes a special case of a multi-PV LVM VG where the /boot LV happens to map to one single PV. As long as you ensure this constraint remains fulfilled, it is possible, though it could be considered unsafe. This matches the discussion Tomas Pavelka pointed to.
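[To check whether the first bullet holds for a given LV, one can look at which PVs back it with 'lvs -o lv_name,devices'. The sketch below just parses captured sample output - all device names are invented - rather than querying a live system; on a real box you would pipe the live command output in instead.]

```shell
# Count how many distinct base devices back a given LV by parsing
# sample 'lvs -o lv_name,devices --noheadings' output (hypothetical
# device names; a live system would use the real command output).
LVS_OUT='LogVol00 /dev/sda2(0)
LogVol00 /dev/sdb1(0)
LogVolBoot /dev/sda1(0)'

count=$(printf '%s\n' "$LVS_OUT" \
        | awk '$1 == "LogVolBoot" { print $2 }' \
        | sed 's/([0-9]*)$//' | sort -u | wc -l | tr -d ' ')
echo "LogVolBoot maps to $count base device(s)"   # 1 => satisfies the single-base-device bullet
```

Here LogVolBoot maps to a single PV, so it would satisfy the constraint, whereas LogVol00 spans two PVs and would not.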
[1] http://www.ibm.com/developerworks/linux/linux390/s390-tools-1.8.3.html#changes
[2] http://www.ibm.com/developerworks/linux/linux390/distribution_hints.html

--
Mit freundlichen Grüßen / Kind regards
Steffen Maier

Linux on System z Development

IBM Deutschland Research & Development GmbH
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschaeftsfuehrung: Dirk Wittkopp
Sitz der Gesellschaft: Boeblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z,
visit http://wiki.linuxvm.org/
