We had a marketing event recently, and I think a brief introduction to Live 
Upgrade is needed, because people are misled by it.

Live upgrade is just one feature that "Live Upgrade" offers. And honestly, it 
does not upgrade the running system to a new release; instead, it upgrades a 
separate system image, or system clone, using the alternate root feature of 
the upgrade program.

"Live Upgrade" as a product offers creation of system clones - copy of current 
system and manipulating them. One of this "manipulating" operation is upgrading 
such a copy while running on "primary" system. But in general Live Upgrade make 
possible to create system copy or Boot Environment (because you can reboot 
system from it):

[i]lucreate[/i]
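
A minimal invocation looks something like this (the BE name and disk slice 
here are just examples, assuming a UFS root):

  # lucreate -n newbe -m /:/dev/dsk/c0t1d0s0:ufs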

Once created, the copy may be mounted at some mountpoint - /a, for example. 
Then you can cd /a and manually do whatever you want to this system copy: 
remove, copy, or create files, etc. - anything.

[i]lumount[/i]
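
For example (assuming the newbe environment created above):

  # lumount newbe /a
  ... work on the copy under /a ...
  # luumount newbe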

Once the copy (Boot Environment) is ready to use, Live Upgrade makes it ready 
to boot - by setting the eeprom boot device, etc.:

[i]luactivate[/i]
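
For example:

  # luactivate newbe
  # init 6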

After this, on reboot (init 6), the system switches to the new Boot Environment.

You may create as many Boot Environments as your hard drives can contain. 
Live Upgrade offers upgrading, patching, and packaging of such Boot 
Environments, which is done by mounting a BE at some mountpoint and then 
upgrading, patching, or packaging it using the alternate root feature of the 
installation technologies.
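
The luupgrade command drives this. For example, an OS upgrade of an inactive 
BE from install media looks roughly like this (the BE name and media path are 
just examples):

  # luupgrade -u -n newbe -s /cdrom/cdrom0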

lucreate, in general, just runs newfs on the partitions given to it, mounts 
them as specified under one root mountpoint - /a, /a/usr, /a/var... - and once 
all of them are mounted, it simply runs cpio to copy the files from the current 
system to the alternate filesystem set.
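
Conceptually, for a single root slice, it is roughly equivalent to this manual 
sketch (device names are examples; lucreate does all the bookkeeping for you):

  # newfs /dev/rdsk/c0t1d0s0
  # mkdir -p /a
  # mount /dev/dsk/c0t1d0s0 /a
  # cd / && find . -mount | cpio -pdm /a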

Using just lucreate you may migrate your system from one disk to another: 
attach the disk, create a system copy on it, and then reboot from it. Or you 
may change the partition layout - increase, split, or merge system filesystems, 
etc... This is pretty handy for a system administrator - however, it is not 
about upgrading at all.
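
A migration to a second disk, with /var split out into its own slice, could 
look something like this (disk and slice names are examples):

  # lucreate -n seconddisk \
      -m /:/dev/dsk/c1t0d0s0:ufs \
      -m /var:/dev/dsk/c1t0d0s3:ufs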

Playing this game, you may keep non-system data on a shared partition. For 
instance, if you have a huge amount of html files on a separate filesystem, it 
is not necessary to copy all of them to the new Boot Environment; instead, you 
can share that filesystem between different BEs (initially all non-system 
partitions were shared only - it was me who introduced the ability to copy 
them as well, and I am still proud of it...).
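
The rule is simple: a filesystem not named with -m stays shared between BEs; 
naming it tells lucreate to copy it. So to get a private copy of, say, 
/export/html (names are examples), you would list it explicitly:

  # lucreate -n newbe \
      -m /:/dev/dsk/c0t1d0s0:ufs \
      -m /export/html:/dev/dsk/c0t1d0s7:ufs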

Another use case for lucreate alone is a kind of hot-swap BE. You may create a 
BE nightly, and if something happens to the primary system, you may always 
reboot from that hot-swap copy. Please note that this is a different animal 
than mirroring. Mirroring protects against disk errors, but if the data itself 
is corrupted for some reason, a mirror cannot help, while a copy of the entire 
system, ready to reboot from, is a kind of relief even in this case. As a 
former IT admin for a financial company, I can imagine that this feature would 
be very welcome there.

To avoid repartitioning every night, you may use lumake in this case. It just 
recopies everything into an existing BE with the same filesystem configuration.
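
A root crontab entry along these lines would refresh the copy nightly at 2am 
(the BE name is an example):

  0 2 * * * /usr/sbin/lumake -n hotswap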

Of course, none of the operations on alternate copies - Alternate Boot 
Environments - affect the currently running system at all. You can damage a 
copy as much as you want, and even if it does not work afterwards - even if 
you cannot boot from it - it is always possible to go back to the unchanged 
copy.

For example, this is the only way to deliver critical patches that are 
incompatible with the kernel version preceding them - the patching changes the 
alternate root while the unchanged kernel keeps running on the primary root.
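
Patching an inactive BE is again a luupgrade job; something like this (the BE 
name, patch directory, and patch ID are hypothetical):

  # luupgrade -t -n newbe -s /var/tmp/patches 123456-01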

Plus, it is less expensive in terms of downtime. For production servers it may 
be very costly to run an upgrade - the ability to run it in the background and 
then just reboot the system is critical.

So it is a win-win situation for big customers with critical production 
servers - it is the safest approach and offers minimal downtime! That goes for 
upgrades as well as for packaging, patching, disk reconfiguration, or any 
manual manipulation of a mounted BE.

Examples:

Keeping many Boot Environments on the system is very useful for testing - on 
my test machine I have Solaris 8, Solaris 9, Solaris 10, and Solaris 11 
(Nevada).

On my development machine I am supposed to have the latest builds, so I have 
to upgrade every month, and I have two BEs that I switch between whenever I 
need to move to a new build.

One of our huge customers has 3 BEs as the standard configuration for their 
machines - 2, as I have, for switching between whenever new software needs to 
be installed (not necessarily an upgrade; it may be new packages or patches, 
because they do this almost weekly), and a 3rd BE as a hot-swap in case a 
failure happens, which they repopulate nightly by cron job.

vassun.
 
 