Hello all,

while experimenting with "zfs send" and "zfs receive" mixed with cloning on 
the receiver side, I found the following...

On server A there is a zpool with snapshots created on a regular basis via cron.
Server B gets updated by a zfs-send-ssh-zfs-receive command pipe.
Sometimes I want to do some testing on server B without corrupting data on 
server A. For that I create a clone of the filesystem. Up to here 
everything is OK. As long as the clone filesystem is NOT busy, any further 
zfs-send-ssh-zfs-receive will work properly, updating my pool on B without 
disturbing my clone.
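
For reference, the kind of pipeline and clone setup I mean looks roughly like
this (filesystem and snapshot names here are made up for illustration):

  # on server A: take a new snapshot and send the increment to server B
  zfs snapshot origin/fs@snap2
  zfs send -i snap1 origin/fs@snap2 | ssh serverB zfs receive copy/fs

  # on server B: clone an older snapshot for testing
  zfs clone copy/fs@snap1 copy/fs_test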

But there are some long-running test jobs on server B which keep the clone's 
filesystem busy.
Meanwhile another zfs-send-ssh-zfs-receive command gets launched to copy the 
new snapshot from A to B. If the receiving pool of a zfs receive command has 
busy clones, the receive command will fail.
For some unknown reason the receive command tries to umount my cloned 
filesystem and fails with "Device busy".

The question is: why?

Since the clone is (or should be) independent of its origin, "zfs receive" 
should not have to umount clones of older snapshots.

If you want to reproduce this, I've attached a simple test script.
The script will bail out at the last zfs receive command. If you comment out 
the "cd /mnt/copy", the script runs as expected.

Disclaimer: Before running my script, make sure you do not have zpools named 
"copy" or "origin". Use this script only on a test machine!
Clean up with
  zpool destroy copy; zpool destroy origin; rm /var/tmp/zpool.*
after running tests.
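
In case the attachment does not come through the archive, here is a minimal
sketch of the kind of script I mean. The pool names and /var/tmp paths match
the cleanup commands above; the file sizes, snapshot names, and the clone
mountpoint are my assumptions:

  #!/bin/sh
  # create two small file-backed test pools
  mkfile 100m /var/tmp/zpool.origin /var/tmp/zpool.copy
  zpool create origin /var/tmp/zpool.origin
  zpool create copy /var/tmp/zpool.copy

  # first snapshot, full replication into the "copy" pool
  zfs snapshot origin@snap1
  zfs send origin@snap1 | zfs receive -F copy

  # clone the replicated snapshot and make it busy
  zfs clone copy@snap1 copy/clone
  zfs set mountpoint=/mnt/copy copy/clone
  cd /mnt/copy          # comment this out and the script runs through

  # second snapshot; this incremental receive fails with "Device busy"
  zfs snapshot origin@snap2
  zfs send -i snap1 origin@snap2 | zfs receive copy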
 
 