Hi,
thanks for this deep answer!
I already destroyed the cloned machine (it was actually only
configured, never booted, since the clone command failed).
I also rebooted the entire host.
It still does not boot, reporting "multiple datasets" as before.
What I found in the zpool history regarding the clone is this:
2013-01-09.10:51:25 zfs snapshot data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdev_snap
2013-01-09.10:51:25 zfs clone data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdev_snap data/sonicle/zones/xstreamdev/ROOT/zbe-1
2013-01-09.10:51:26 zfs set org.opensolaris.libbe:active=on data/sonicle/zones/xstreamdev/ROOT/zbe-1
2013-01-09.10:51:26 zfs set org.opensolaris.libbe:parentbe=56251e7f-6e2f-6c41-8396-d92ea45c1b5d data/sonicle/zones/xstreamdev/ROOT/zbe-1
2013-01-09.10:51:31 zfs set canmount=noauto data/sonicle/zones/xstreamdev/ROOT/zbe-1
I can't see any mention of the new zone (xstreamdevclone), only the
original xstreamdev.
It looks like it tried to create the second zbe (zbe-1), and then
something failed.
Probably the system now sees the two zbes and doesn't know which one
I want to boot.
Should I destroy zbe-1 and the snapshot it comes from?
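If so, I suppose the cleanup would be something along these lines
(untested, and I'd first check what is actually there; the dataset
names are taken from the zpool history above):
# zfs list -r -t all data/sonicle/zones/xstreamdev/ROOT
# zfs destroy data/sonicle/zones/xstreamdev/ROOT/zbe-1
# zfs destroy data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdev_snap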
Could the origin of the error be that the zone names and dataset names
are too similar?
(xstreamdev vs. xstreamdevclone, data/sonicle/zones/xstreamdev vs.
data/sonicle/zones/xstreamdevclone)
----------------------------------------------------------------------------------
From: Jim Klimov
To: [email protected]
Cc: Gabriele Bulfon
Date: 10 January 2013 13:06:00 CET
Subject: Re: [discuss] cloned zone, marked as incomplete
On 2013-01-10 11:45, Gabriele Bulfon wrote:
OK, I tried to manually edit the index file to set the zone to installed.
I tried both detaching and booting, but I get this:
...
Well, for one - is it possible for you to reboot the host OS so as to
have both zones not "booted", so they won't interfere with each other
(dataset busy, etc.) and you'd have a cleaner picture of which
resources they contend for?
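A quick way to confirm that neither zone is running (or left
half-mounted) would be something like the standard listing plus a
mount check, e.g.:
# zoneadm list -cv
# df -k | grep xstreamdev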
As for your questions on the /etc/zones/* files - yes, they are
editable; no, it is not supported or recommended by the docs. That
said, I more often "doctor" them by hand (including cloning) than use
the proper tools. Perhaps this habit settled in from the early
OpenSolaris days, when everything was evolving quickly, not all
published builds were stable, and the tools might lag behind in
functionality - or such was my impression.
So I suggest that you review the /etc/zones/ZONENAME.xml files and
verify that they do not in fact use the same datasets as their roots
(I am not sure whether you may "delegate" the same dataset hierarchy
into several zones, but you certainly can lofs-mount resources into
multiple zones), and also verify that their zone names differ between
the manifests.
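A quick way to eyeball both manifests side by side could be something
like this (hypothetical file names - adjust to your zones):
# grep -E 'zone name|zonepath|dataset' /etc/zones/xstreamdev.xml /etc/zones/xstreamdevclone.xml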
After that, use something like this to list the zone datasets' local
ZFS attributes (use your zonetree root as appropriate):
# zfs list -t filesystem -o name -H -r rpool/zones/build | \
while read Z; do zfs get all "$Z" | grep local; done
This might yield a list like this:
rpool/zones/build/zone1             sharenfs                        off     local
rpool/zones/build/zone1             sharesmb                        off     local
rpool/zones/build/zone1/ROOT        mountpoint                      legacy  local
rpool/zones/build/zone1/ROOT        zoned                           on      local
rpool/zones/build/zone1/ROOT/zbe    canmount                        noauto  local
rpool/zones/build/zone1/ROOT/zbe    org.opensolaris.libbe:active    on      local
rpool/zones/build/zone1/ROOT/zbe    org.opensolaris.libbe:parentbe  717f5aeb-1222-6381-f3d3-cc52c9336f6e  local
rpool/zones/build/zone1/ROOT/zbe-1  canmount                        noauto  local
rpool/zones/build/zone1/ROOT/zbe-1  org.opensolaris.libbe:active    on      local
rpool/zones/build/zone1/ROOT/zbe-1  org.opensolaris.libbe:parentbe  750040bf-d1de-e8ac-8e0f-b22cd5315ddf  local
rpool/zones/build/zone1/ROOT/zbe-2  canmount                        noauto  local
rpool/zones/build/zone1/ROOT/zbe-2  org.opensolaris.libbe:parentbe  503ebce4-d7b5-6fed-f15b-f0af0ce63672  local
rpool/zones/build/zone1/ROOT/zbe-2  org.opensolaris.libbe:active    on      local

# zfs get mountpoint rpool/zones/build/zone1/ROOT/zbe-2
NAME                                PROPERTY    VALUE   SOURCE
rpool/zones/build/zone1/ROOT/zbe-2  mountpoint  legacy  inherited from rpool/zones/build/zone1/ROOT
You can see that these zbe* datasets carry the "parentbe" attribute -
it refers to your GZ root BE:
# df -k /
Filesystem                    kbytes    used     avail capacity  Mounted on
rpool/ROOT/oi_151a4-20120607  61415424  452128  24120698     2%  /
# zfs list -t filesystem -o name -H -r rpool/ROOT | while read Z; do \
    zfs get all "$Z" | grep local; done | grep 'oi_151a4-20120607 '
rpool/ROOT/oi_151a4-20120607  mountpoint                      /       local
rpool/ROOT/oi_151a4-20120607  canmount                        noauto  local
rpool/ROOT/oi_151a4-20120607  org.opensolaris.libbe:policy    static  local
rpool/ROOT/oi_151a4-20120607  org.opensolaris.libbe:uuid      503ebce4-d7b5-6fed-f15b-f0af0ce63672  local
In fact, at this point I suddenly find that I can't help much more :)
The Sol10/SXCE dataset structure for zones was different (without the
ZBEs being split out so explicitly), so I cannot vouch for the method
the current zones implementation uses to pick one of these ZBEs as the
one to mount during zone startup - neither "the highest number" nor
"the UUID matching the current GZ root BE" seems like a foolproof
method, and no ZFS attributes or XML manifest attributes catch my eye
as selectors either...
Now that I have learned of my shortsightedness, I'd like to learn the
answer to this question myself - how is a particular ZBE picked? ;)
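One thing that might at least narrow it down is comparing the GZ root
BE's uuid against the parentbe values on the ZBEs - a rough sketch,
using the example dataset names from above:
# zfs get -H -o value org.opensolaris.libbe:uuid rpool/ROOT/oi_151a4-20120607
# zfs get -r -H -o name,value org.opensolaris.libbe:parentbe rpool/zones/build/zone1/ROOT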
What I do know is that you can now "ready" the zone, so that its
resources are mounted and the root process (zsched) is running, but
nothing more - this validates the zone config and, in particular,
allows pkg to run and update the zone's filesystem tree. As in the
older zones implementation, the logical filesystem visible to the GZ
is mounted at $ZONEROOT/root (the other $ZONEROOT/* resources are for
system use - device links, detached-zone manifests, etc.):
# zoneadm -z zone1 ready
# df -k /zones/build/zone1
Filesystem               kbytes    used     avail capacity  Mounted on
rpool/zones/build/zone1  61415424      34  24120306     1%  /zones/build/zone1
# df -k /zones/build/zone1/root
Filesystem                          kbytes    used     avail capacity  Mounted on
rpool/zones/build/zone1/ROOT/zbe-2  61415424  305162  24120696     2%  /zones/build/zone1/root
# ps -efZ | grep -v grep | grep zone1
   zone1     root 10764     1   0 15:40:08 ?           0:00 zsched
  global     root 10662     1   0 15:40:05 ?           0:00 zoneadmd -z zone1
# zlogin zone1
zlogin: login allowed only to running zones (zone1 is 'ready').
What your cloning *should have* done, IMHO (whether by the tools or by
hand), is to create a snapshot and clone of a ZBE of the source zone,
name it as the ZBE of your new zone (adding the layers of parent
datasets as appropriate), and use that mount path as the zone root in
the copied XML manifest. I guess something broke in the process -
perhaps the renaming of the cloned dataset, the setting of its new ZFS
attributes, or the modifications to the manifest...
The ZFS part you can probably review in "zpool history" ;)
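For illustration, a manual equivalent could look roughly like this -
an untested sketch, with the dataset names assumed from your layout
(the real tooling also sets the libbe attributes and rewrites the XML
manifest):
# zfs snapshot data/sonicle/zones/xstreamdev/ROOT/zbe@clone_snap   # example snapshot name
# zfs create -p data/sonicle/zones/xstreamdevclone/ROOT
# zfs clone -o canmount=noauto data/sonicle/zones/xstreamdev/ROOT/zbe@clone_snap \
    data/sonicle/zones/xstreamdevclone/ROOT/zbe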
If you do things by hand, be wary of the "zoned" ZFS property - it
essentially disables manual work with a dataset from the GZ, so you
might have to set it to "off" while you work and re-enable it once
you're done, and this might require that all the zones involved are
shut down (sometimes including zones that live on clones of the
dataset - i.e. all zones on the system cloned from the same ancestor).
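In practice that boils down to something like this (sketch only, with
an example dataset name from your layout, and again only with the
affected zones halted):
# zfs set zoned=off data/sonicle/zones/xstreamdev/ROOT/zbe-1
  ... do the manual surgery from the GZ ...
# zfs set zoned=on data/sonicle/zones/xstreamdev/ROOT/zbe-1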
Now, a question from me to the public (if anyone has read this far):
what is now the proper recommendation for storing the "delegated"
datasets (those the local zone admin can manage with zfs commands)?
Should they live under the owning zone's zone-root dataset, i.e. as
$ZONEROOT/localzfs/my/hier/archy, or in a separate structure (i.e.
pool/zonedata/$ZONENAME/my/hier/archy)?
Apparently there are pros and cons to either approach; the most
obvious case is storing programs and data in different pools, so I ask
about the simpler scenario where they are in the same pool and can be
rooted either in different dataset trees or in the same one. Backups,
cloning and quotas/reservations for everything under a single root are
simpler; likewise, major upgrades of the software separately from the
data (e.g. conversion from SVR4 to IPS and vice versa, or plain major
version updates) are simpler when the two are not tied together...
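For reference, the delegation itself (wherever the dataset lives) is
done via zonecfg, roughly like this - hypothetical pool and zone names:
# zfs create -p pool/zonedata/myzone
# zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=pool/zonedata/myzone
zonecfg:myzone:dataset> end
zonecfg:myzone> exit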
HTH and thanks in advance,
//Jim Klimov


