I have hit a problem with zones in relation to doing Solaris upgrades - I can’t
do them currently. I have expended considerable effort trying to get into a position to do upgrades, but they do not work for me as currently implemented.
I’ll provide a summary of my situation and then make suggestions as to what I
think needs to be done to make upgrades work. Let me know how you see it.
Background: I have production Solaris 10 Update 1 environments installed on 11
servers that I want to upgrade to Update 2 (or 3 or …) to get the new zone
features. It is not feasible to rebuild these servers each time an update with
a desired new feature or improvement is released so upgrades must be able to be
done. The number of installed non-global zones per server varies from none to a
current maximum of 11 (expected to reach around 20 on my five E2900 servers).
On each server with installed zones, the zone root filesystems are each
configured on individual SAN LUNs (8 GB for sparse zones, 12 GB for whole root
zones) to make sure there is enough operational space for /var logs, lightly
used home directories, required open source software, etc. Zones with heavy
home directory requirements also have a separately mounted LUN for /export/home
and each zone has 1-to-n separate LUNs for application filesystems. All of the
servers have Sun-branded Qlogic HBAs that use the Leadville driver for SAN attachment, which ensures the needed drivers are available in the DVD mini-root. MPXIO has been turned on so that there are multiple paths to the zone root LUNs, providing redundant access in case of HBA failure. This environment works fine operationally and for patching.
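To give a concrete picture of that layout (the zone name, device GUIDs, and paths below are purely illustrative, not copied from my servers), a single zone ends up looking roughly like this from the global zone:

  # zonecfg: the zone root lives on its own SAN LUN
  zonecfg -z appzone1 info zonepath
  zonepath: /zones/appzone1

  # /etc/vfstab entry in the global zone mounting that LUN at the zonepath
  # (an MPXIO-style scsi_vhci device name)
  /dev/dsk/c4t600A0B800011A5BE0000C2F845D9D1C3d0s0 /dev/rdsk/c4t600A0B800011A5BE0000C2F845D9D1C3d0s0 /zones/appzone1 ufs 2 yes logging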
The Problem: I have tried unsuccessfully to upgrade one of my lab servers that
is configured with 2 zones and opened a case with support only to be told that
because the DVD mini-root does not support MPXIO, upgrades cannot be done and
that I need to open an RFE to get that fixed; Engineering also noted that a future release of Live Upgrade will include MPXIO support, which should resolve the issue. Not only did the upgrade not work, it left the server’s OS trashed: the global portion of the upgrade had already been done before the process failed because it could not find the zone roots to finish the job. The real problem is that the upgrade process cannot identify and mount the zone roots to do the upgrade, and as a result the upgrade fails. Although it superficially appears that this is due to MPXIO not being available in the mini-root, it isn’t really an MPXIO issue but rather an issue of the zone roots being implemented on external LUNs.
The exact same problem would exist if the zone roots were mounted on any external drive without MPXIO, because the controller address of the mountpoint recorded in the vfstab would be unlikely to match the controller address assigned by the mini-root, again causing the mounts to fail during an upgrade. Live Upgrade is not currently an option, nor do I expect it ever to be one (even with MPXIO support), both because of the way my zone roots are mounted and because my E2900 servers have only 2 internal drives, which are mirrored for high availability, leaving no place to build a Live Upgrade boot environment.
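To make the mismatch concrete, using the illustrative device names from the example above: the installed system’s vfstab names the zone root by its MPXIO (scsi_vhci) device path, but when the server is booted from the mini-root without MPXIO, that node does not exist at all and the same LUN shows up only under a per-HBA controller name, so the mount of the zonepath fails:

  # device named in the installed global zone’s vfstab (MPXIO enabled)
  /dev/dsk/c4t600A0B800011A5BE0000C2F845D9D1C3d0s0

  # how the same LUN might appear when booted from the DVD mini-root (no MPXIO)
  /dev/dsk/c2t5006016041E03A14d1s0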
I think this points to a more basic functional integration problem with doing
upgrades that hasn’t been considered in the overall zone implementation design.
The design seems to assume that all zone roots are implemented on an internal
drive; in that scenario, there is no problem because the zone roots are always on a drive that is available and easily identified when the server is booted from the DVD mini-root.
That’s great in the lab or if there is a huge internal drive that can truly
hold all zone roots. It would be great if the marketing myth that all zones can
be tiny were true. Unfortunately, that doesn’t account for the need to run
zones operationally with usefully sized space for /var, /export, and /opt. In
my case, that would mean an internal drive capable of holding at least 200 GB for zone roots (roughly 20 zones at 8-12 GB each), plus swap, plus around another 12 GB for the global zone OS and /var. Even if a drive of that size were available, I wouldn’t want to use it, because zones sharing one spindle would perform much worse than zones installed on individual drives/LUNs due to drive latency and head-position contention.
I can think of a couple of ways to handle upgrades that would solve my problem.
Fix Option 1: The fix needed here is for the upgrade process to be able to discover the location of each of the zone roots, whether they are on internal or external drives, and then mount them as part of the upgrade
process. If a drive can be seen from the mini-root and zones have been
implemented, the zone root(s) should be able to be discovered using data from
the boot disk’s vfstab and some kind of probing procedure. At a minimum, LUNs
visible from HBAs supported in the Leadville driver should be able to be
discovered. The upgrade process must also be changed to verify that all of the filesystems needed for the upgrade are available before actually starting it; if they are not, the upgrade should fail up front so that the server would still be usable if, for example, not all of the zone roots were discovered. In
addition, if some kind of tag is placed in a zone root to enable the upgrade
discovery process, a utility should be created to add the tag to existing zones that do not have it; running this utility could be a prereq to
performing an upgrade. BTW, I think this would break Live Upgrade.
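To illustrate what I mean by a probing procedure in Fix Option 1, here is a rough sketch of the kind of logic the mini-root could run. It is only a sketch under my own assumptions: the tag file name (.zoneroot) is hypothetical and would be written by the tagging utility mentioned above, and a real implementation would also cross-check the boot disk’s vfstab and handle SVM/VxVM devices.

  #!/bin/sh
  # Sketch: probe every UFS slice visible to the mini-root for a zone root tag.
  PROBE_MNT=/tmp/zoneroot_probe
  mkdir -p $PROBE_MNT
  # List every disk the mini-root can see (format exits when stdin closes).
  for disk in `format < /dev/null 2>/dev/null | awk '/^ *[0-9][0-9]*\. / {print $2}'`
  do
      for slice in 0 1 3 4 5 6 7        # skip slice 2 (whole disk)
      do
          dev=/dev/dsk/${disk}s${slice}
          # Only probe slices that actually contain a UFS filesystem.
          [ "`fstyp $dev 2>/dev/null`" = "ufs" ] || continue
          mount -F ufs -o ro $dev $PROBE_MNT 2>/dev/null || continue
          if [ -f $PROBE_MNT/.zoneroot ]; then
              # Hypothetical tag file recording the zone name and zonepath.
              echo "zone root candidate on $dev:"
              cat $PROBE_MNT/.zoneroot
          fi
          umount $PROBE_MNT
      done
  done

The upgrade could then mount each discovered zone root at the zonepath recorded in the tag, and refuse to start if any configured zone were still missing.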
Fix Option 2: As an alternative that might not break Live Upgrade, I suggest that the upgrade of the zones be made a separate step that is done to each zone on its first boot after the global zone has been upgraded. In this case, it wouldn’t matter where the zone roots were mounted, because each zone’s upgrade would only run once its zone root was actually mounted. I think this would at least require a new zone status, but it might be easier to accomplish than my first option.
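As a rough illustration only (the per-zone upgrade subcommand and the new zone state are hypothetical; they are exactly what I am asking for), the first-boot logic in the upgraded global zone might look something like:

  #!/bin/sh
  # Sketch: after the global zone has been upgraded and rebooted, bring each
  # non-global zone up to the new release before it is booted.
  for zone in `zoneadm list -cp | awk -F: '$2 != "global" {print $2}'`
  do
      # Hypothetical per-zone upgrade step; no such subcommand exists today.
      zoneadm -z $zone upgrade && zoneadm -z $zone boot
  done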