Ok, I destroyed the zbe-1 clone and the zbe snapshot.
The original zone got ready and booted happily!
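For what it's worth, bringing the original zone back after removing the two leftovers was just the usual sequence, roughly:
# zoneadm -z xstreamdev ready
# zoneadm -z xstreamdev boot
# zoneadm list -cv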
What's strange is that I have many other machines cloned from a non-booted base machine,
which is used as a template for many similar ones.
Each of these has its own zbe clone whose origin is a snapshot of the base zbe.
In other words, cloning them did not create a zbe-1 under the base, but a zbe inside each
cloned machine's own dataset:
These are the datasets of the non-booted base zone:
data/xsbasezone                      2.56G   830G   41.1K  /data/xsbasezone
data/xsbasezone/ROOT                 2.56G   830G   37.5K  legacy
data/xsbasezone/ROOT/zbe             2.56G   830G   2.56G  legacy
These are the datasets of one of the cloned and running zones:
data/sonicle/zones/www               26.0G   830G   42.9K  /data/sonicle/zones/www
data/sonicle/zones/www/ROOT          26.0G   830G   37.5K  legacy
data/sonicle/zones/www/ROOT/zbe      26.0G   830G   27.2G  legacy
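Both listings are just the default zfs list columns, i.e. something like:
# zfs list -r -o name,used,avail,refer,mountpoint data/xsbasezone data/sonicle/zones/www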
These are all the properties of the cloned zbe:
sonicle@xstreamserver:/etc/zones$ zfs get all data/sonicle/zones/www/ROOT/zbe
NAME                             PROPERTY                        VALUE                                          SOURCE
data/sonicle/zones/www/ROOT/zbe  type                            filesystem                                     -
data/sonicle/zones/www/ROOT/zbe  creation                        Thu Oct  4 12:22 2012                          -
data/sonicle/zones/www/ROOT/zbe  used                            26.0G                                          -
data/sonicle/zones/www/ROOT/zbe  available                       830G                                           -
data/sonicle/zones/www/ROOT/zbe  referenced                      27.2G                                          -
data/sonicle/zones/www/ROOT/zbe  compressratio                   1.00x                                          -
data/sonicle/zones/www/ROOT/zbe  mounted                         yes                                            -
data/sonicle/zones/www/ROOT/zbe  origin                          data/xsbasezone/ROOT/zbe@www.sonicle.com_snap  -
data/sonicle/zones/www/ROOT/zbe  quota                           none                                           default
data/sonicle/zones/www/ROOT/zbe  reservation                     none                                           default
data/sonicle/zones/www/ROOT/zbe  recordsize                      128K                                           default
data/sonicle/zones/www/ROOT/zbe  mountpoint                      legacy                                         inherited from data/sonicle/zones/www/ROOT
data/sonicle/zones/www/ROOT/zbe  sharenfs                        off                                            default
data/sonicle/zones/www/ROOT/zbe  checksum                        on                                             default
data/sonicle/zones/www/ROOT/zbe  compression                     off                                            default
data/sonicle/zones/www/ROOT/zbe  atime                           on                                             default
data/sonicle/zones/www/ROOT/zbe  devices                         on                                             default
data/sonicle/zones/www/ROOT/zbe  exec                            on                                             default
data/sonicle/zones/www/ROOT/zbe  setuid                          on                                             default
data/sonicle/zones/www/ROOT/zbe  readonly                        off                                            default
data/sonicle/zones/www/ROOT/zbe  zoned                           on                                             inherited from data/sonicle/zones/www/ROOT
data/sonicle/zones/www/ROOT/zbe  snapdir                         hidden                                         default
data/sonicle/zones/www/ROOT/zbe  aclmode                         discard                                        default
data/sonicle/zones/www/ROOT/zbe  aclinherit                      restricted                                     default
data/sonicle/zones/www/ROOT/zbe  canmount                        noauto                                         local
data/sonicle/zones/www/ROOT/zbe  xattr                           on                                             default
data/sonicle/zones/www/ROOT/zbe  copies                          1                                              default
data/sonicle/zones/www/ROOT/zbe  version                         4                                              -
data/sonicle/zones/www/ROOT/zbe  utf8only                        off                                            -
data/sonicle/zones/www/ROOT/zbe  normalization                   none                                           -
data/sonicle/zones/www/ROOT/zbe  casesensitivity                 sensitive                                      -
data/sonicle/zones/www/ROOT/zbe  vscan                           off                                            default
data/sonicle/zones/www/ROOT/zbe  nbmand                          off                                            default
data/sonicle/zones/www/ROOT/zbe  sharesmb                        off                                            default
data/sonicle/zones/www/ROOT/zbe  refquota                        none                                           default
data/sonicle/zones/www/ROOT/zbe  refreservation                  none                                           default
data/sonicle/zones/www/ROOT/zbe  primarycache                    all                                            default
data/sonicle/zones/www/ROOT/zbe  secondarycache                  all                                            default
data/sonicle/zones/www/ROOT/zbe  usedbysnapshots                 1.28G                                          -
data/sonicle/zones/www/ROOT/zbe  usedbydataset                   24.8G                                          -
data/sonicle/zones/www/ROOT/zbe  usedbychildren                  0                                              -
data/sonicle/zones/www/ROOT/zbe  usedbyrefreservation            0                                              -
data/sonicle/zones/www/ROOT/zbe  logbias                         latency                                        default
data/sonicle/zones/www/ROOT/zbe  dedup                           off                                            default
data/sonicle/zones/www/ROOT/zbe  mlslabel                        none                                           default
data/sonicle/zones/www/ROOT/zbe  sync                            standard                                       default
data/sonicle/zones/www/ROOT/zbe  refcompressratio                1.00x                                          -
data/sonicle/zones/www/ROOT/zbe  written                         186M                                           -
data/sonicle/zones/www/ROOT/zbe  org.opensolaris.libbe:active    on                                             local
data/sonicle/zones/www/ROOT/zbe  org.opensolaris.libbe:parentbe  56251e7f-6e2f-6c41-8396-d92ea45c1b5d           local
It's clear that this clone keeps its cloned dataset inside its own dataset hierarchy,
including its own zbe.
It's also clear that the destination of the xsbase clone was www.sonicle.com, since the
snapshot is named @www.sonicle.com_snap.
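The origin of each running zone's zbe can also be checked directly, without dumping every property, e.g.:
# zfs get -r -o name,value origin data/sonicle/zones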
The zpool history of the requested clone command shows that it used "xstreamdev" as the
snapshot name, not "xstreamdevclone".
But I'm sure I issued "zoneadm -z xstreamdevclone clone xstreamdev" (I have it in my
bash history...).
Source dataset:
data/sonicle/zones/xstreamdev            4.35G   830G   41.1K  /data/sonicle/zones/xstreamdev
data/sonicle/zones/xstreamdev/ROOT       4.35G   830G   37.5K  legacy
data/sonicle/zones/xstreamdev/ROOT/zbe   4.35G   830G   3.43G  legacy
Requested destination clone dataset:
data/sonicle/zones/xstreamdevclone       4.35G   830G   41.1K  /data/sonicle/zones/xstreamdevclone
Maybe the overly similar names confused the cloning process?
Any ideas?
Maybe this time I can try a clone with "-m copy"; it's only a few gigabytes.
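For the next attempt, assuming the xstreamdevclone configuration is recreated first, the copy-based clone would simply be something like:
# zoneadm -z xstreamdevclone clone -m copy xstreamdev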
From: Gabriele Bulfon
To: [email protected] [email protected]
Date: 11 January 2013, 12:33:38 CET
Subject: Re: [discuss] cloned zone, marked as incomplete
Hi,
thanks for this thorough answer!
I already destroyed the cloned machine (it was actually only configured, and never booted
since the clone command failed).
I also rebooted the entire host.
It still does not boot, reporting "multiple datasets" as before.
What I found in the zpool history regarding the clone is this:
2013-01-09.10:51:25 zfs snapshot data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdev_snap
2013-01-09.10:51:25 zfs clone data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdev_snap data/sonicle/zones/xstreamdev/ROOT/zbe-1
2013-01-09.10:51:26 zfs set org.opensolaris.libbe:active=on data/sonicle/zones/xstreamdev/ROOT/zbe-1
2013-01-09.10:51:26 zfs set org.opensolaris.libbe:parentbe=56251e7f-6e2f-6c41-8396-d92ea45c1b5d data/sonicle/zones/xstreamdev/ROOT/zbe-1
2013-01-09.10:51:31 zfs set canmount=noauto data/sonicle/zones/xstreamdev/ROOT/zbe-1
I can't see any mention of the new zone (xstreamdevclone), just the original xstreamdev.
It looks like it tried to create the second zbe (zbe-1), and then something failed.
Now the system probably sees the two zbes and doesn't know which one I want to boot.
Should I destroy the zbe-1 and the snapshot it comes from?
Could the origin of the error be the overly similar naming of both the zone names and the
datasets (xstreamdev vs. xstreamdevclone, data/sonicle/zones/xstreamdev vs.
data/sonicle/zones/xstreamdevclone)?
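If so, the cleanup would presumably just reverse what the history shows - destroy the dangling clone first, then the snapshot it originates from:
# zfs destroy data/sonicle/zones/xstreamdev/ROOT/zbe-1
# zfs destroy data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdev_snap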
----------------------------------------------------------------------------------
From: Jim Klimov
To: [email protected]
Cc: Gabriele Bulfon
Date: 10 January 2013, 13:06:00 CET
Subject: Re: [discuss] cloned zone, marked as incomplete
On 2013-01-10 11:45, Gabriele Bulfon wrote:
Ok, I tried to manually edit the index file to "installed".
I tried both detaching and booting, but I get this:
...
Well, for one - is it possible for you to reboot the host OS so that both zones are
not "booted"? That way they won't interfere with each other (dataset busy, etc.) and
you'd have a cleaner picture of which resources they contend for.
As for your questions on the /etc/zones/* files - yes, they are editable; no, it is
not supported or recommended by the docs. That said, I more often "doctor" them by
hand (including for cloning) than with the proper tools. Perhaps this habit settled
in from the early OpenSolaris days, when everything was evolving quickly, not all
published builds were stable, and the tools might lag behind in functionality - or
such was the impression.
So I suggest that you review the /etc/zones/ZONENAME.xml files and verify that they
don't in fact use the same datasets as roots (I am not sure whether you may "delegate"
the same dataset hierarchy into several zones, but you certainly can lofs-mount
resources into multiple zones), and also verify that their zonename tags differ
between the manifests.
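For instance, a quick sanity check of names and paths can be as simple as grepping the zone tags and reading the index (adjust the pattern to your manifests):
# grep '<zone ' /etc/zones/*.xml
# cat /etc/zones/index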
After that, use something like this to list the zone datasets' local
ZFS attributes (use your zonetree root as appropriate):
# zfs list -t filesystem -o name -H -r rpool/zones/build | \
while read Z; do zfs get all "$Z" | grep local; done
This might yield a list like this:
rpool/zones/build/zone1             sharenfs                        off                                   local
rpool/zones/build/zone1             sharesmb                        off                                   local
rpool/zones/build/zone1/ROOT        mountpoint                      legacy                                local
rpool/zones/build/zone1/ROOT        zoned                           on                                    local
rpool/zones/build/zone1/ROOT/zbe    canmount                        noauto                                local
rpool/zones/build/zone1/ROOT/zbe    org.opensolaris.libbe:active    on                                    local
rpool/zones/build/zone1/ROOT/zbe    org.opensolaris.libbe:parentbe  717f5aeb-1222-6381-f3d3-cc52c9336f6e  local
rpool/zones/build/zone1/ROOT/zbe-1  canmount                        noauto                                local
rpool/zones/build/zone1/ROOT/zbe-1  org.opensolaris.libbe:active    on                                    local
rpool/zones/build/zone1/ROOT/zbe-1  org.opensolaris.libbe:parentbe  750040bf-d1de-e8ac-8e0f-b22cd5315ddf  local
rpool/zones/build/zone1/ROOT/zbe-2  canmount                        noauto                                local
rpool/zones/build/zone1/ROOT/zbe-2  org.opensolaris.libbe:parentbe  503ebce4-d7b5-6fed-f15b-f0af0ce63672  local
rpool/zones/build/zone1/ROOT/zbe-2  org.opensolaris.libbe:active    on                                    local
# zfs get mountpoint rpool/zones/build/zone1/ROOT/zbe-2
NAME                                PROPERTY    VALUE   SOURCE
rpool/zones/build/zone1/ROOT/zbe-2  mountpoint  legacy  inherited from rpool/zones/build/zone1/ROOT
You can see that these zbe* datasets carry a "parentbe" attribute - this refers to
your GZ root BE:
# df -k /
Filesystem                    kbytes    used    avail     capacity  Mounted on
rpool/ROOT/oi_151a4-20120607  61415424  452128  24120698  2%        /
# zfs list -t filesystem -o name -H -r rpool/ROOT | while read Z; \
    do zfs get all "$Z" | grep local; done | grep 'oi_151a4-20120607 '
rpool/ROOT/oi_151a4-20120607  mountpoint                    /                                     local
rpool/ROOT/oi_151a4-20120607  canmount                      noauto                                local
rpool/ROOT/oi_151a4-20120607  org.opensolaris.libbe:policy  static                                local
rpool/ROOT/oi_151a4-20120607  org.opensolaris.libbe:uuid    503ebce4-d7b5-6fed-f15b-f0af0ce63672  local
In fact, at this point I abruptly can't help much more :)
The Sol10/SXCE dataset structure for zones was different (without such explicitly
separate ZBEs), so I cannot vouch for the method the current zones implementation
uses to pick one of these ZBEs to mount during zone startup - neither the highest
number nor a UUID matching the current GZ root BE seems like a foolproof method,
and no ZFS attributes or XML manifest attributes catch my eye as IDs either...
Now that I learned of my shortsightedness, I'd like to learn the
answer to this question - how is a particular ZBE picked? ;)
What I do know is that you can now "ready" the zone so that its resources are mounted
and its zsched process is running, but nothing more - this validates the zone config
and in particular allows pkg to run and update the zone's filesystem tree. As in the
older zone implementation, the logical filesystem for GZ access is mounted at
$ZONEROOT/root (the other $ZONEROOT/* resources are for system use - device links,
detached-zone manifests, etc.):
# zoneadm -z zone1 ready
# df -k /zones/build/zone1
Filesystem               kbytes    used    avail     capacity  Mounted on
rpool/zones/build/zone1  61415424  34      24120306  1%        /zones/build/zone1
# df -k /zones/build/zone1/root
Filesystem                          kbytes    used    avail     capacity  Mounted on
rpool/zones/build/zone1/ROOT/zbe-2  61415424  305162  24120696  2%        /zones/build/zone1/root
# ps -efZ | grep -v grep | grep zone1
   zone1     root 10764     1   0 15:40:08 ?           0:00 zsched
  global     root 10662     1   0 15:40:05 ?           0:00 zoneadmd -z zone1
# zlogin zone1
zlogin: login allowed only to running zones (zone1 is 'ready').
What your cloning *should have* done IMHO (whether by the tools or by hand) is create
a snapshot and clone of a ZBE of the source zone, name it as the ZBE of your new zone
(adding the layers of parent datasets as appropriate), and use that mount path as the
zone root in the copied XML manifest. I guess something broke in the process - perhaps
the renaming of the cloned dataset, the setting of its new ZFS attributes, or the
modifications to the manifest...
The ZFS part you can probably review in "zpool history" ;)
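Sketched out, and mirroring the layout of your working www zone, the history should have contained something more like this (the snapshot name here is a guess, and the .../xstreamdevclone/ROOT container dataset is assumed to already exist with the same legacy mountpoint and zoned settings as www's):
# zfs snapshot data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdevclone_snap
# zfs clone -o canmount=noauto data/sonicle/zones/xstreamdev/ROOT/zbe@xstreamdevclone_snap data/sonicle/zones/xstreamdevclone/ROOT/zbe
# zfs set org.opensolaris.libbe:active=on data/sonicle/zones/xstreamdevclone/ROOT/zbe
# zfs set org.opensolaris.libbe:parentbe=56251e7f-6e2f-6c41-8396-d92ea45c1b5d data/sonicle/zones/xstreamdevclone/ROOT/zbe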
If you do things by hand, be wary of the "zoned" ZFS property - it essentially disables
manual work on a dataset from the GZ, so you might have to set it to "off" while you
work and re-enable it when you're done. This may require that all zones involved are
shut down (sometimes including zones that live on clones of the dataset - i.e. all
zones on the system cloned from the same ancestor).
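In other words, roughly (with ZONENAME and the dataset path filled in for your case):
# zoneadm -z ZONENAME halt
# zfs set zoned=off data/sonicle/zones/ZONENAME/ROOT/zbe
  ... do the manual surgery ...
# zfs set zoned=on data/sonicle/zones/ZONENAME/ROOT/zbe
# zoneadm -z ZONENAME boot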
Now, a question from me to the public (if anyone has read this far): what is currently
the proper recommendation for storing the "delegated" datasets (the ones a local zone
admin can manage with zfs commands)? Should they live under the owning zone's zone-root
dataset, i.e. as $ZONEROOT/localzfs/my/hier/archy, or in a separate structure
(i.e. pool/zonedata/$ZONENAME/my/hier/archy)?
Apparently there are pros and cons to either approach; the most obvious case is storing
programs and data in different pools, so I'm asking about the simpler scenario where
they are in the same pool and can be rooted either in different dataset trees or in
the same one. Backup, cloning, and quotas/reservations for objects under a single root
are simpler; likewise, major upgrades of software separate from data (e.g. conversion
from SVR4 to IPS and vice versa, or plain major version updates) are simpler when the
two are not connected...
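For comparison, either layout ends up as the same kind of zonecfg dataset resource - e.g., for the separate-tree variant:
# zfs create -p pool/zonedata/zone1/my/hier/archy
# zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=pool/zonedata/zone1/my/hier/archy
zonecfg:zone1:dataset> end
zonecfg:zone1> commit
zonecfg:zone1> exit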
HTH and thanks in advance,
//Jim Klimov