This should work just fine with latest bits (Nevada 77 and later) via:
http://bugs.opensolaris.org/view_bug.do?bug_id=6425096
Its backport is currently targeted for an early build of s10u6.
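If you are not sure what a given box is running, a quick check (assuming the usual snv_NN string reported by uname -v on Nevada builds; Solaris 10 reports a kernel patch string instead) is something like:

# rough check for "Nevada 77 or later"
v=$(uname -v)
case "$v" in
    snv_*) echo "Nevada build ${v#snv_}" ;;
    *)     head -1 /etc/release ;;      # not a Nevada build
esac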
eric
On Jan 8, 2008, at 7:13 AM, Andreas Koppenhoefer wrote:
[I apologise for reposting this... but no one replied to my post from Dec 4th.]
Hello all,
while experimenting with zfs send and zfs receive combined with cloning on the receiver side, I found the following...
On server A there is a zpool with snapshots created on a regular basis via cron. Server B gets updated by a zfs-send-ssh-zfs-receive command pipe, roughly as sketched below. Both servers are running Solaris 10 update 4 (08/07).
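To make this concrete, here is a rough sketch of such a pipe; the pool, filesystem and host names below are made up for illustration, not the ones from my real setup:

# on server A, from cron: take today's snapshot ...
fs=tank/data            # replicated filesystem on server A
prev=20080107           # last snapshot already present on server B
new=$(date +%Y%m%d)     # name for today's snapshot
zfs snapshot $fs@$new
# ... and push it to server B as an incremental stream
zfs send -i $fs@$prev $fs@$new | ssh serverB zfs receive -v -d backup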
Sometimes I want to do some testing on server B without corrupting data on server A. To do so, I create a clone of the filesystem. Up to here everything is ok. As long as the mounted clone filesystem is NOT busy, any further zfs-send-ssh-zfs-receive will work properly, updating my pool on B.
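For reference, the clone on B is created from one of the received snapshots, along these lines (again with made-up names):

# clone a received snapshot so tests on B never touch the replicated data
zfs clone backup/data@20080107 backup/testclone
zfs set mountpoint=/mnt/testclone backup/testclone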
But there are some long-running test jobs on server B which keep the clone's filesystem busy, or sometimes just a single login shell with its cwd inside the clone's filesystem, which makes the filesystem busy from umount's point of view.
Meanwhile another zfs-send-ssh-zfs-receive command gets launched to copy the new snapshot from A to B. If the receiving pool of a zfs receive command has busy clones, the receive command will fail. For some unknown reason the receive command tries to unmount my cloned filesystem and fails with "Device busy".
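When it fails like this, you can see who is keeping the clone's mountpoint busy with fuser, and an attempt to unmount it by hand fails in the same way (the path is the one from the test script below):

# list PIDs (and users) with open files or a cwd on the clone's filesystem
fuser -cu /mnt/copy
# unmounting by hand fails just like zfs receive does
umount /mnt/copy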
The question is: why?
Since the clone is (or should be) independent of its origin, zfs receive should not need to unmount clones of older snapshots.
If you want to reproduce this, below (and attached) you will find a simple test script. The script will bail out at the last zfs receive command. If you comment out the line "cd /mnt/copy", the script will run as expected.
Disclaimer: Before running my script, make sure you do not have
zpools named copy or origin. Use this script only on a test
machine! Use it at your own risk.
Here is the script:
#!/usr/bin/bash
# cleanup before test
cd /
set -ex
for pool in origin copy; do
zpool destroy $pool || :
rm -f /var/tmp/zpool.$pool
[ -d /mnt/$pool ] && rmdir /mnt/$pool
mkfile -nv 64m /var/tmp/zpool.$pool
zpool create -m none $pool /var/tmp/zpool.$pool
done
zfs create -o mountpoint=/mnt/origin origin/test
update () {
# create/update a log file
date >>/mnt/origin/log
}
snapnum=0
make_snap () {
snapnum=$(($snapnum+1))
zfs snapshot origin/test@$snapnum
}
update
make_snap
update
make_snap
update
make_snap
update
# full send of the first snapshot, then clone it on the receiving pool
zfs send origin/test@1 | zfs receive -v -d copy
zfs clone copy/test@1 copy/clone
# incremental update while the clone is idle -- this works
zfs send -i origin/test@1 origin/test@2 | zfs receive -v -d copy
zfs set mountpoint=/mnt/copy copy/clone
zfs list -r origin copy
cd /mnt/copy # make filesystem busy
# incremental update while the clone is busy -- this receive fails with Device busy
zfs send -i origin/test@2 origin/test@3 | zfs receive -v -d copy
ls -l /mnt/{origin,copy}/log
exit
Cleanup with
zpool destroy copy; zpool destroy origin; rm /var/tmp/zpool.*
after running tests.
- Andreas
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss