Victor: you did not install the snv_121 LU (Live Upgrade) packages before running luupgrade.
You should do this:

  mount -F hsfs -r /dev/lofi/1 /mnt
  cd /mnt/So*/Tools/Installer
  ./liveupgrade20 -nodisplay -noconsole
  ludelete snv_121
  lucreate -n snv_121

and then run luupgrade again.

Victor Kramer wrote:
> Hello,
>
> I'm trying to upgrade SXCE from snv_114 to snv_121.
> For some reason the GRUB menu file is not updated, and after reboot I can
> boot into the old snv_114 only.
> Here is the upgrade session. There were neither errors nor warnings during
> the upgrade:
>
> root@server:~> lofiadm -a `pwd`/sol-nv-b121-x86-dvd.iso
> /dev/lofi/1
>
> root@server:~> mount -F hsfs -r /dev/lofi/1 /mnt
>
> root@server:~> lucreate -n snv_121
> Checking GRUB menu...
> System has findroot enabled GRUB
> Analyzing system configuration.
> Comparing source boot environment <snv_114> file systems with the file
> system(s) you specified for the new boot environment. Determining which
> file systems should be in the new boot environment.
> Updating boot environment description database on all BEs.
> Updating system configuration files.
> Creating configuration for boot environment <snv_121>.
> Source boot environment is <snv_114>.
> Creating boot environment <snv_121>.
> Cloning file systems from boot environment <snv_114> to create boot
> environment <snv_121>.
> Creating snapshot for <rpool/ROOT/snv_114> on <rpool/ROOT/snv_114@snv_121>.
> Creating clone for <rpool/ROOT/snv_114@snv_121> on <rpool/ROOT/snv_121>.
> Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/snv_121>.
> Creating snapshot for <rpool/export/zones/CRM> on
> <rpool/export/zones/CRM@snv_121>.
> Creating clone for <rpool/export/zones/CRM@snv_121> on
> <rpool/export/zones/CRM-snv_121>.
> Saving existing file </boot/grub/menu.lst> in top level dataset for BE
> <snv_121> as <mount-point>//boot/grub/menu.lst.prev.
> File </boot/grub/menu.lst> propagation successful
> Copied GRUB menu from PBE to ABE
> No entry for BE <snv_121> in GRUB menu
> Population of boot environment <snv_121> successful.
> Creation of boot environment <snv_121> successful.
>
> root@server:~> luupgrade -u -n snv_121 -s /mnt
>
> System has findroot enabled GRUB
> No entry for BE <snv_121> in GRUB menu
> Uncompressing miniroot
> Copying failsafe kernel from media.
> 74221 blocks
> miniroot filesystem is <lofs>
> Mounting miniroot at </mnt/Solaris_11/Tools/Boot>
> Validating the contents of the media </mnt>.
> The media is a standard Solaris media.
> The media contains an operating system upgrade image.
> The media contains <Solaris> version <11>.
> Constructing upgrade profile to use.
> Locating the operating system upgrade program.
> Checking for existence of previously scheduled Live Upgrade requests.
> Creating upgrade profile for BE <snv_121>.
> Checking for GRUB menu on ABE <snv_121>.
> Saving GRUB menu on ABE <snv_121>.
> Checking for x86 boot partition on ABE.
> Determining packages to install or upgrade for BE <snv_121>.
> Performing the operating system upgrade of the BE <snv_121>.
> CAUTION: Interrupting this process may leave the boot environment unstable
> or unbootable.
> Upgrading Solaris: 100% completed
> Installation of the packages from this media is complete.
> Restoring GRUB menu on ABE <snv_121>.
> Adding operating system patches to the BE <snv_121>.
> The operating system patch installation is complete.
> ABE boot partition backing deleted.
> PBE GRUB has no capability information.
> PBE GRUB has no versioning information.
> ABE GRUB is newer than PBE GRUB. Updating GRUB.
> GRUB update was successful.
> Configuring failsafe for system.
> Failsafe configuration is complete.
> INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
> environment <snv_121> contains a log of the upgrade operation.
> INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
> environment <snv_121> contains a log of cleanup operations required.
> INFORMATION: Review the files listed above.
> Remember that all of the files
> are located on boot environment <snv_121>. Before you activate boot
> environment <snv_121>, determine if any additional system maintenance is
> required or if additional media of the software distribution must be
> installed.
> The Solaris upgrade of the boot environment <snv_121> is complete.
> Installing failsafe
> Failsafe install is complete.
>
> root@server:~> lustatus
> Boot Environment           Is       Active Active    Can    Copy
> Name                       Complete Now    On Reboot Delete Status
> -------------------------- -------- ------ --------- ------ ----------
> snv_114                    yes      yes    yes       no     -
> snv_121                    yes      no     no        yes    -
>
> root@server:~> luactivate snv_121
> System has findroot enabled GRUB
> Generating boot-sign, partition and slice information for PBE <snv_114>
>
> Generating boot-sign for ABE <snv_121>
> Generating partition and slice information for ABE <snv_121>
> Copied boot menu from top level dataset.
> Generating direct boot menu entries for PBE.
> Generating xVM menu entries for PBE.
> Generating direct boot menu entries for ABE.
> Generating xVM menu entries for ABE.
> Disabling splashimage
> Re-enabling splashimage
> No more bootadm entries. Deletion of bootadm entries is complete.
> GRUB menu default setting is unaffected
> Done eliding bootadm entries.
>
> **********************************************************************
>
> The target boot environment has been activated. It will be used when you
> reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
> MUST USE either the init or the shutdown command when you reboot. If you
> do not use either init or shutdown, the system will not boot using the
> target BE.
>
> **********************************************************************
>
> In case of a failure while booting to the target BE, the following process
> needs to be followed to fallback to the currently working boot environment:
>
> 1. Boot from Solaris failsafe or boot in single user mode from the Solaris
> Install CD or Network.
>
> 2. Mount the Parent boot environment root slice to some directory (like
> /mnt). You can use the following command to mount:
>
>    mount -Fzfs /dev/dsk/c1t1d0s0 /mnt
>
> 3. Run <luactivate> utility with out any arguments from the Parent boot
> environment root slice, as shown below:
>
>    /mnt/sbin/luactivate
>
> 4. luactivate, activates the previous working boot environment and
> indicates the result.
>
> 5. Exit Single User mode and reboot the machine.
>
> **********************************************************************
>
> Modifying boot archive service
> Propagating findroot GRUB for menu conversion.
> File </etc/lu/installgrub.findroot> propagation successful
> File </etc/lu/stage1.findroot> propagation successful
> File </etc/lu/stage2.findroot> propagation successful
> Deleting stale GRUB loader from all BEs.
> File </etc/lu/installgrub.latest> deletion successful
> File </etc/lu/stage1.latest> deletion successful
> File </etc/lu/stage2.latest> deletion successful
> Activation of boot environment <snv_121> successful.
>
> root@server:~> lustatus
> Boot Environment           Is       Active Active    Can    Copy
> Name                       Complete Now    On Reboot Delete Status
> -------------------------- -------- ------ --------- ------ ----------
> snv_114                    yes      yes    no        no     -
> snv_121                    yes      no     yes       no     -
>
> After that I run the "init 6" command:
> root@server:~> init 6
> propagating updated GRUB menu
>
> root@server:~> Connection to prostor closed.
>
> $ ssh root@server
> Password:
> Last login: Mon Aug 24 17:47:01 2009 from osol.proact.lv
> Sun Microsystems Inc.   SunOS 5.11   snv_114   November 2008
> root@server:~> lustatus
> Boot Environment           Is       Active Active    Can    Copy
> Name                       Complete Now    On Reboot Delete Status
> -------------------------- -------- ------ --------- ------ ----------
> snv_114                    yes      yes    yes       no     -
> snv_121                    yes      no     no        yes    -
>
> Any ideas how I can finish the upgrade?
> It seems the GRUB menu on the new BE is not updated.
> Is it a bug, or am I doing something wrong?
>
> I checked the log file /a/var/sadm/system/logs/upgrade_log
> and there are no errors there either.
>
> Thanks in advance,
> victor
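P.S. Before running "init 6" again, it is worth confirming that the ABE really made it into the GRUB menu. Here is a rough sketch of such a check; the menu_has_be helper and the demo menu file are made up for illustration, and on a ZFS-root box the live menu is typically at /<pool>/boot/grub/menu.lst (e.g. /rpool/boot/grub/menu.lst):

```shell
# Check whether a GRUB menu.lst contains a findroot/bootfs entry for a BE.
menu_has_be() {   # usage: menu_has_be <menu.lst path> <BE name>
    grep "bootfs .*/ROOT/$2\$" "$1" >/dev/null 2>&1
}

# Demo against a throwaway menu file; on a real system you would point
# this at /rpool/boot/grub/menu.lst instead.
cat > /tmp/menu.lst.demo <<'EOF'
title snv_114
bootfs rpool/ROOT/snv_114
EOF

if menu_has_be /tmp/menu.lst.demo snv_121; then
    echo "snv_121 entry present; safe to init 6"
else
    echo "no snv_121 entry; rerun luactivate before rebooting"
fi
# -> prints: no snv_121 entry; rerun luactivate before rebooting
```

If the entry is still missing right after luactivate, that matches the symptom above: the old LU tools on the PBE could not write a menu entry for the newer release, which is why installing the snv_121 LU packages from the media first matters.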