Frank Cusack wrote:
> No, I just tried the @[EMAIL PROTECTED] incremental again.  I didn't think to
> try another incremental.  So I was basically doing the mountpoint=none trick,
> then trying @[EMAIL PROTECTED] again without doing mountpoint=none.

Again, seeing the exact sequence of commands you ran would make it quicker for me to diagnose this.

I think you're saying that you ran:

zfs set mountpoint=none destfs
zfs rollback [EMAIL PROTECTED]
zfs send -i @4 [EMAIL PROTECTED] | zfs recv ... -> success
zfs inherit mountpoint destfs
zfs rollback -r [EMAIL PROTECTED]
zfs send -i @4 [EMAIL PROTECTED] | zfs recv ... -> failure

This would be consistent with hitting bug 6343779.
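
If that's what happened, the key difference between the two attempts is that destfs was still mounted for the second rollback, which gives the ZPL a chance to dirty it before the recv runs.  A quick way to test that theory next time (destfs is just a placeholder for your receiving filesystem) would be something like:

zfs get mounted destfs -> "no" in the mountpoint=none case
zfs rollback -r destfs@4
zfs get mounted destfs -> if "yes" here, the delete queue can dirty the
                          fs before the recv, and the incremental recv
                          may fail because the fs has been modified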

>> It sounds like the mountpoint=none trick works for you, so can't you just
>> incorporate it into your script?  Eg:

> Sure.  I was just trying to identify the problem correctly, in case
> this isn't just another instance of an already-known problem.
> mountpoint=none is really suboptimal for me, though, since it means I
> cannot have services running on the receiving host.  I was hoping
> readonly=on would do the trick.

Really? I find it hard to believe that mountpoint=none causes any more problems than 'zfs recv' by itself, since 'zfs recv' of an incremental stream always unmounts the destination fs while the recv is taking place.
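
For what it's worth, something along these lines is what I had in mind; srcfs, destfs, the snapshot names, and the stream file are all placeholders, so adjust to taste:

#!/bin/sh
# Keep destfs unmounted only for the rollback/recv window.
zfs set mountpoint=none destfs        # unmounts destfs; the ZPL can't dirty it now
zfs rollback -r destfs@4              # back to the snapshot both sides have
zfs recv destfs < /var/tmp/incr.4-5   # e.g. a stream made with: zfs send -i @4 srcfs@5
status=$?
zfs inherit mountpoint destfs         # remounts destfs; services see it again
exit $status

The filesystem is only unavailable for the duration of the rollback and recv, which is no worse than what 'zfs recv' does on its own.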

> It's all existing snapshots on that one filesystem.  If I take a new
> snapshot (@6) and send it, it works.  Which seems weird to me.  It seems
> to be something to do with the sending host, not the receiving host.

From the information you've provided, my best guess is that the problem is associated with your @4 snapshot, and you are hitting 6343779. Here is the bug description:

Even when not accessing a filesystem, it can become
dirty due to the zpl's delete queue.  This means
that even if you are just 'zfs restore'-ing incremental
backups into the filesystem, it may fail because the
filesystem has been modified.

One possible solution would be to make filesystems
created by 'zfs restore' be readonly by default, and have
the zpl not process the delete queue if it is mounted
readonly.
*** (#1 of 2): 2005-10-31 03:31:02 PST [EMAIL PROTECTED]

Note, currently even if you manually set the filesystem to be readonly,
the ZPL will still process the delete queue, making it particularly
difficult to ensure there are no changes since a most recent snapshot
which has entries in the delete queue.  The only workaround I could
find is to not mount the filesystem.
*** (#2 of 2): 2005-10-31 03:34:56 PST [EMAIL PROTECTED]
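
So, to be explicit about your readonly=on idea: with the bits as they are today it won't help, because the ZPL still processes the delete queue on a readonly mount.  Roughly (placeholder names again):

zfs set readonly=on destfs
zfs rollback -r destfs@4
zfs send -i @4 srcfs@5 | zfs recv ... -> can still fail; entries in @4's
                                         delete queue may be processed
                                         while destfs is mounted, even
                                         readonly

Until 6343779 is fixed, the only reliable approach is to keep destfs unmounted (mountpoint=none, or 'zfs unmount destfs') for the rollback and the recv.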

--matt
