On Sun, 21 Feb 2010 17:33:06 +0100, Anil <an...@entic.net> wrote:

> r...@vps1:~# zoneadm -z note move /zones/note
> Moving across file systems; copying zonepath /zones/bugs...sh[1]: cd: 
> /zones/bugs: [No such file or directory]
> zoneadm: zone 'note': 'copy' failed with exit code 1.
>
> The copy failed.
> More information can be found in /var/log/zoneAAA2XaapU
>
> Cleaning up zonepath /zones/note...The ZFS file system for this zone has been 
> destroyed.
>
> I believe the zones are not mounted when the zone is not running so the cp 
> fails. Luckily it did not delete the data *phew*.

Hmmm... I tested this myself, and I'm not getting the bizarre error messages from sh.

What error messages are logged in the mentioned file '/var/log/zoneAAA2XaapU'?

In general, it would be helpful to provide the corresponding 'zoneadm list -cp' and
'zfs list -t all' outputs for the zone before and after the failure, as well as
OS version/build info.

The error message suggests that we attempted a copy, which is correct if we're
crossing file system boundaries, according to the 'move' PSARC case, PSARC/2005/711:

<snip>
The syntax for moving a zone will be:

                # zoneadm -z my-zone move /newpath

        where /newpath specifies the new zonepath for the zone.  This will
        be implemented so that it works both within and across filesystems,
        subject to the existing rules for zonepath (e.g. it cannot be on an
        NFS mounted filesystem).  When crossing filesystem boundaries the
        data will be copied and the original directory will be removed.
        Internally the copy will be implemented using cpio with the proper
        options to preserve all of the data (ACLs, etc.).  The zone must be
        halted while being moved.
<snip end>

However, contrary to this description and to your case, in my tests the move just
changes the ZFS mountpoint property and does nothing else, even when the move
would cross file system boundaries:

osoldev.root./export/home/batschul.=> zoneadm list -cp
0:global:running:/::ipkg:shared
-:zone1:installed:/tank/zones/zone1:caa7e784-dab0-6f77-e202-8cf135714809:ipkg:shared

osoldev.root./export/home/batschul.=> zfs list -t all
NAME                                             USED  AVAIL  REFER  MOUNTPOINT
tank/zones                                       996M   151G    38K  /tank/zones
tank/zones/zone1                                 996M   151G    36K  /tank/zones/zone1
tank/zones/zone1/ROOT                            996M   151G  31.5K  legacy
tank/zones/zone1/ROOT/zbe                        996M   151G   996M  legacy

1) Moving within the same ZFS dataset, tank/zones:

osoldev.root./export/home/batschul.=> zoneadm -z zone1 move /tank/zones/test

osoldev.root./export/home/batschul.=> zoneadm list -cp
0:global:running:/::ipkg:shared
-:zone1:installed:/tank/zones/test:caa7e784-dab0-6f77-e202-8cf135714809:ipkg:shared

osoldev.root./export/home/batschul.=> zfs list -t all
NAME                                             USED  AVAIL  REFER  MOUNTPOINT
tank/zones                                       996M   151G  36.5K  /tank/zones
tank/zones/zone1                                 996M   151G    36K  /tank/zones/test
tank/zones/zone1/ROOT                            996M   151G  31.5K  legacy
tank/zones/zone1/ROOT/zbe                        996M   151G   996M  legacy

2) Moving to a different ZFS dataset in a different pool, rpool/export/home:

osoldev.root./export/home/batschul.=> zoneadm -z zone1 move /export/home/test2

osoldev.root./export/home/batschul.=> zoneadm list -cp
0:global:running:/::ipkg:shared
-:zone1:installed:/export/home/test2:caa7e784-dab0-6f77-e202-8cf135714809:ipkg:shared

osoldev.root./export/home/batschul.=> zfs list -t all
NAME                                             USED  AVAIL  REFER  MOUNTPOINT
tank/zones                                       996M   151G  36.5K  /tank/zones
tank/zones/zone1                                 996M   151G    36K  /export/home/test2
tank/zones/zone1/ROOT                            996M   151G  31.5K  legacy
tank/zones/zone1/ROOT/zbe                        996M   151G   996M  legacy

So there is no "move" in the true sense happening in case 2); this seems wrong
to me.

Even the behavior in 1) looks suspect.

Apparently we already have a bug open that pretty much matches case 1):

6918505 zone move should rename ZFS file system, not change mountpoint
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6918505
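In other words, for case 1) the fix proposed by 6918505 would presumably amount to the following, using the dataset names from the transcript above (a hedged sketch of intent, not of the actual implementation):

```shell
# What 'zoneadm -z zone1 move /tank/zones/test' arguably should do within
# one pool: rename the dataset so its name and the zonepath stay in sync.
zfs rename tank/zones/zone1 tank/zones/test

# What it does today instead: leave the dataset name alone and merely
# repoint its mountpoint property, as seen in transcript 1) above.
zfs set mountpoint=/tank/zones/test tank/zones/zone1
```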

But I cannot find an existing bug for case 2) and the missing "move" action
there, where not only do we cross file system boundaries, we even move over
to a different pool!

So either the scope of 6918505 needs to be broadened, or a new bug ought to be
filed for case 2).
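For what it's worth, a genuine cross-pool move could be sketched with ZFS replication, again using the dataset names from the transcript (hedged: this ignores the halted-zone checks and index bookkeeping zoneadm would also have to do):

```shell
# Replicate the zone's dataset tree into the target pool...
zfs snapshot -r tank/zones/zone1@move
zfs send -R tank/zones/zone1@move | zfs receive rpool/export/home/test2

# ...and only remove the source once the receive has succeeded, so a
# failure mid-move never loses the zone's data.
zfs destroy -r tank/zones/zone1
```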

---
frankB

_______________________________________________
zones-discuss mailing list
zones-discuss@opensolaris.org
