On 3/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> 1) Moves within the same platform: Are all zone moves between different
> machine hardware classes considered to be supported? Is the move of a
> zone from a 25k domain (sun4u) to a T2000 (sun4v) OK? If so, is it
> equally OK to move a zone from a 25k domain to a FSC PrimePower system
The basic requirement for migrations of any sort is that both the
original and the new global zones must be running the same
packaging/patch levels for the OS. Since sun4u and sun4v machines have
different contents for certain packages (for example, SUNWcar and
SUNWkvm), migration between a sun4u and a sun4v system may be
problematic, although I'm not sure whether "zoneadm attach" will detect
this discrepancy (it would need to compare the ARCH value of the
packages on the source and target systems).
This potential problem could be helped a lot if the installer would
add the required packages for both sun4u and sun4v when installing.
That would also make it much easier to create a single flash archive
for use across sun4u and sun4v systems.
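The kind of pre-attach check described above could be approximated with a
small script. A sketch, with the caveat that the check_arch helper and the
sample values are mine; on a live pair of systems each value would come
from running "pkgparam <pkg> ARCH" on the respective global zone:

```shell
#!/bin/sh
# Compare the ARCH of hardware-dependent packages on the source and target
# global zones before attempting "zoneadm attach". The check_arch helper
# and the sample values below are illustrative; on real systems each value
# would come from:  pkgparam <pkg> ARCH

check_arch() {
    pkg=$1; src_arch=$2; tgt_arch=$3
    if [ "$src_arch" != "$tgt_arch" ]; then
        echo "MISMATCH: $pkg ($src_arch vs $tgt_arch)"
        return 1
    fi
    echo "OK: $pkg ($src_arch)"
}

# A sun4u -> sun4v move trips on the hardware-specific packages:
check_arch SUNWkvm sparc.sun4u sparc.sun4v
check_arch SUNWcsu sparc sparc
```

Anything that reports MISMATCH would need to be dealt with (or at least
understood) before the attach could be considered safe.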
> 2) Cross-platform moves: is there a way to move a zone from SPARC to
> x86 or vice versa?
Not supported at the current time.
I have a strong interest in seeing zone migrations work across
dissimilar systems. My primary use case is to be able to move
workloads from one server to another that is freshly patched. This is
essential in an environment where multiple unrelated workloads have
been consolidated onto a single server. There is a high likelihood that:
1) The various owners of the workloads will be unable to find a
mutually acceptable outage window -- unless, of course, we have a power
outage at the same time. :) Patching is difficult without power.
2) No one will be happy with the increased amount of time required to
patch a server with a bunch of zones on it.
3) There will always be someone who can't move forward, for whatever reason.
Live Upgrade support for zones exists in Solaris Express
(closed-source... other issues with that). Live Upgrade does not
solve problems 1 or 3.
I have solved this problem by enforcing standards that relegate all
applications to file systems other than the root file system. I use
customized tools for transferring a zone configuration from one
machine to another, build the zone on the target system with the
configuration, then synchronize application data. The build+sync
process takes about 20 minutes per zone and is done with no outage.
When it comes time to cut over to the target machine, the zone is shut
down, resynchronized, and then booted on the target. Application
downtime is typically about 2 minutes plus application stop/start time.
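The build+sync+cutover sequence above can be sketched as a short runbook.
The host names, zone name, and application path below are illustrative
(not from my actual environment), and this generates the command list
rather than executing it -- you would review it and then run the steps on
systems with ssh trust already in place:

```shell
#!/bin/sh
# Sketch of the zone migration runbook described above. SRC, TGT, the zone
# name, and APPFS are hypothetical placeholders; the zonecfg/zoneadm/rsync
# commands themselves are the standard Solaris and rsync interfaces.
SRC=oldhost TGT=newhost Z=webzone APPFS=/zones/webzone/app

plan="
# 1. carry the zone configuration over and create the zone on the target
ssh $SRC 'zonecfg -z $Z export' > /tmp/$Z.cfg
scp /tmp/$Z.cfg $TGT:/tmp/$Z.cfg
ssh $TGT 'zonecfg -z $Z -f /tmp/$Z.cfg'
ssh $TGT 'zoneadm -z $Z install'

# 2. pre-seed application data while the zone is still running (no outage)
ssh $SRC 'rsync -a $APPFS/ $TGT:$APPFS/'

# 3. cutover: halt the zone, final resync, boot on the target
ssh $SRC 'zoneadm -z $Z halt'
ssh $SRC 'rsync -a $APPFS/ $TGT:$APPFS/'
ssh $TGT 'zoneadm -z $Z boot'
"
echo "$plan"
```

Step 2 is what keeps the cutover window small: the final rsync in step 3
only has to move whatever changed since the pre-seed.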
Last week I migrated my lab JumpStart development environment from a
V240 (SPARC) running Solaris 10 U1 to an X4500 (x86) running Solaris 10 U3. It
worked without a hitch. (OS media, JASS, etc. are served from a
separate NAS device.) I've used the same mechanisms for "migrating"
zones between a wide variety of sun4u and sun4v hardware. There are
places where this approach doesn't work in the current implementation,
but they could be fixed with the right amount of effort.
By having strong rules and a bunch of scripting in place, workload
migrations that were previously measured in person-days of technical
effort and potentially person-weeks of coordination now take minutes
of technical effort and no more coordination with the business than is
required for a reboot.
Note that while Sun won't support the scripts I write, they do not
touch private interfaces (e.g., no munging of /etc/zones/index).
zones-discuss mailing list