On Wed 14 Dec 2011 at 05:10PM, Ian Collins wrote:
> On 12/14/11 05:06 PM, Mike Gerdts wrote:
> >On Wed 14 Dec 2011 at 05:02PM, Ian Collins wrote:
> >>On 12/14/11 04:54 PM, Ian Collins wrote:
> >>>On 12/14/11 04:48 PM, John D Groenveld wrote:
> >>>>In message<4ee8183b.2050...@ianshome.com>, Ian Collins writes:
> >>>>>The zone originally came from a Solaris 10 update 9 system. How do I go
> >>>>>about patching it?
> >>>>Can you v2v the zone back to an S10 system and then apply the latest
> >>>>patches there?
> >>>I was hoping no one would suggest that!
> >>>
> >>That's probably harder than it appears: the zone's root ZFS
> >>filesystems have already been migrated, so they can't be sent back
> >>to an older OS version.
> >By this, do you mean that you ran /usr/lib/brand/shared/dsconvert?
> >
> 
> Yes.

You should be able to get out of the situation you are in with:

1. Reboot to the Solaris 11 Express BE

   root@global# beadm activate <s11express-be-name>
   root@global# init 6

2. Partially revert the work done by dsconvert

   In this example, the zone's zonepath is /zones/s10.

   root@global# zfs list -r /zones/s10
   NAME                                USED  AVAIL  REFER  MOUNTPOINT
   rpool/zones/s10                    3.18G  11.3G    51K  /zones/s10
   rpool/zones/s10/rpool              3.18G  11.3G    31K  /rpool
   rpool/zones/s10/rpool/ROOT         3.18G  11.3G    31K  legacy
   rpool/zones/s10/rpool/ROOT/zbe-0   3.18G  11.3G  3.18G  /
   rpool/zones/s10/rpool/export         62K  11.3G    31K  /export
   rpool/zones/s10/rpool/export/home    31K  11.3G    31K  /export/home

   The goal here is to move rpool/zones/s10/rpool/ROOT up one level.  We
   need to do a bit of a dance to get it there.  Do not reboot or issue
   'zfs mount -a' in the middle of this.  If something goes wrong and a
   reboot happens, it won't be disastrous - you will just need to
   complete the procedure when the next boot stops with
   svc:/system/filesystem/local in maintenance.
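   If a reboot does sneak in mid-procedure, the local-filesystem service
   will drop to maintenance.  A sketch of the recovery, assuming the
   standard SMF FMRI (verify the exact instance on your system first):

```shell
# After finishing the remaining 'zfs rename'/'zfs set' steps below,
# clear the maintenance state so the boot can continue.
# (FMRI assumed; confirm the failed service with 'svcs -x'.)
svcadm clear svc:/system/filesystem/local:default
```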

   root@global# zfs set mountpoint=legacy rpool/zones/s10/rpool/ROOT/zbe-0
   root@global# zfs set zoned=off rpool/zones/s10/rpool
   root@global# zfs rename rpool/zones/s10/rpool/ROOT/zbe-0 \
                    rpool/zones/s10/ROOT
   root@global# zfs set zoned=on rpool/zones/s10/rpool
   root@global# zfs set zoned=on rpool/zones/s10/ROOT
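   Before moving on, it may be worth confirming the properties landed as
   intended; a hypothetical spot check, assuming the dataset names from
   the example above:

```shell
# Both datasets should report zoned=on, and the renamed BE should have
# a legacy mountpoint (dataset names assume the example layout above).
zfs get -H -o name,value zoned rpool/zones/s10/rpool rpool/zones/s10/ROOT
zfs get -H -o value mountpoint rpool/zones/s10/ROOT/zbe-0
```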

   Now the zone's dataset layout should look like:

   root@global# zfs list -r /zones/s10
   NAME                                USED  AVAIL  REFER  MOUNTPOINT
   rpool/zones/s10                    3.19G  11.3G    51K  /zones/s10
   rpool/zones/s10/ROOT               3.19G  11.3G    31K  legacy
   rpool/zones/s10/ROOT/zbe-0         3.19G  11.3G  3.19G  legacy
   rpool/zones/s10/rpool                93K  11.3G    31K  /rpool
   rpool/zones/s10/rpool/export         62K  11.3G    31K  /export
   rpool/zones/s10/rpool/export/home    31K  11.3G    31K  /export/home

3. Boot the zone and patch

   root@global# zoneadm -z s10 boot
   root@global# zlogin s10
   root@s10# ...  (apply required patches)

4. Shut down the zone

   root@s10# init 0

5. Revert the dataset layout to the way that dsconvert left it.

   Again, try to avoid reboots during this step.

   root@global# zfs set zoned=off rpool/zones/s10/ROOT
   root@global# zfs set zoned=off rpool/zones/s10/rpool
   root@global# zfs rename rpool/zones/s10/ROOT rpool/zones/s10/rpool/ROOT
   root@global# zfs set zoned=on rpool/zones/s10/rpool
   root@global# zfs inherit zoned rpool/zones/s10/rpool/ROOT

6. Reboot to Solaris 11

   root@global# beadm activate <solaris11-be-name>
   root@global# init 6

At this point, the zone should be bootable on Solaris 11.
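A quick sanity check after that final reboot might look like this (zone
name s10 carried over from the example; not a required step):

```shell
# Boot the converted zone and confirm it comes up running.
zoneadm -z s10 boot
zoneadm -z s10 list -v      # state column should read "running"
zlogin s10 uname -a         # should report SunOS 5.10 inside the zone
```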

I've filed:

7121298 dsconvert should prevent conversion if not at right S10 patch level

Sorry for the troubles you had.

-- 
Mike Gerdts
Solaris Core OS / Zones                 http://blogs.oracle.com/zoneszone/
_______________________________________________
zones-discuss mailing list
zones-discuss@opensolaris.org
