I don't think live_upgrade(5) is ZFS-friendly. I've tried to use
live_upgrade(5) with my zones on a ZFS filesystem and it didn't work.
You could try detaching the zone before the luupgrade and then attaching
it in the new BE, but I wouldn't hold my breath.
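For what it's worth, the detach/attach sequence I mean would look roughly like this. This is only an illustrative sketch, not something I've verified against a ZFS-hosted zone root; the zone name Z1 is taken from the question below, and the -u (update-on-attach) flag may not be available on older Solaris 10 releases:

```shell
# Halt and detach the zone before creating/upgrading the new BE
zoneadm -z Z1 halt
zoneadm -z Z1 detach

# ... run lucreate/luupgrade, luactivate the new BE, and reboot into it ...

# Then attach the zone in the new BE; -u updates the zone's packages
# to match the new global zone (assumption: your release supports -u)
zoneadm -z Z1 attach -u
zoneadm -z1 boot
```

Again, I wouldn't rely on this working with the zone path on ZFS without testing it on a scratch system first.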
Antonello
Jeff Cheeney wrote:
Maybe someone on the install or zones discussion lists can help answer
this question.
Giovanni Schmid wrote:
I have read different articles/docs/posts about solaris zones and
liveupgrade issues until now; however, I have some doubts about the right
way to deploy liveupgrade boot environments in the following case.
I have a system with two disks configured as a mirror (that is, with
identical fdisk partitioning and VTOC).
On the primary disk, I installed Solaris 10 8/07 with two sparse-root zones,
say Z1 and Z2. Just two file systems were set up: a UFS mounted on / on the
primary disk, and a ZFS pool mirroring slice 4 of the two disks, mounted on
/zfspool. The UFS is intended to hold everything except users' home
directories; those are served from ZFS, namely /zfspool/users/home.
Only zone Z2 sees this ZFS filesystem, via an add fs setting.
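[For context, the add fs configuration referred to here would be something along these lines. This is a hedged sketch: the in-zone mount point /export/home and the use of a lofs loopback mount are assumptions, since the original message doesn't show the actual zonecfg session:]

```shell
# Assumed zonecfg session granting Z2 access to /zfspool/users/home
# via a loopback (lofs) mount; dir= is a guessed in-zone mount point
zonecfg -z Z2 <<EOF
add fs
set dir=/export/home
set special=/zfspool/users/home
set type=lofs
end
commit
EOF
```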
Given all that, my questions are:
What is the correct way of using Live Upgrade in this case? Would
something like:
# lucreate -c bootenv1 -m /:c2d0s0:ufs -n bootenv2
be sufficient? That is, will Z2 in bootenv2 see /zfspool/users/home?
Any help is appreciated!
g.s
This message posted from opensolaris.org
___
storage-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
___
zones-discuss mailing list
zones-discuss@opensolaris.org