On Mon, 15 Jun 2009, Todd Stansell wrote:

> Any thoughts on how this can be done? I do have other systems I can use to
> test this procedure, but ideally it would not introduce any downtime;
> downtime can be arranged if necessary, though.

I think the only workaround is to re-promote 'data', destroy data/var_tellme (or rename it if it holds changes you need to keep), wait for the next hourly snapshot, clone data/var_tellme from *that* snapshot, and then do the promote again. That way there's no extra @clean snapshot sitting around.
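For concreteness, that sequence might look something like the sketch below. The hourly snapshot name (data@hourly) and the .old rename are invented for illustration, and the `run` wrapper only prints each command rather than executing it, since these steps need a live pool:

```shell
#!/bin/sh
# Sketch of the workaround above; names marked hypothetical are not from
# the thread. 'run' prints each command -- drop the echo to run for real.
run() { echo "+ $*"; }

run zfs promote data                     # undo the earlier promote; @clean moves back to 'data'
run zfs rename data/var_tellme data/var_tellme.old   # or destroy it if nothing needs keeping
# ...wait for the next hourly snapshot of 'data' to be taken...
run zfs clone data@hourly data/var_tellme  # hypothetical snapshot name; clone off the NEW snapshot
run zfs promote data/var_tellme            # promote without dragging @clean along
run zfs destroy data/var_tellme.old        # once any needed changes are salvaged
```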

But in general it's not a good idea to have datasets at higher levels be children of snapshots at lower levels. See CR 6622809, or this thread on zfs-discuss: http://www.opensolaris.org/jive/thread.jspa?messageID=368609#368609

You may experience boot-time mount problems with this kind of inverted parent-child relationship. I see you changed the mountpoint to "legacy", so maybe you already ran into that.
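If the legacy mountpoint was indeed a workaround for boot-time mount ordering, the usual pattern on Solaris is to pair the legacy setting with a vfstab entry so the mount is ordered with the rest of the boot-time mounts. A sketch (the /var/tellme path is my assumption; the thread doesn't say where it mounts):

```shell
# Hand the mount over to the legacy mount machinery:
zfs set mountpoint=legacy data/var_tellme

# Then add a line to /etc/vfstab so it mounts at boot in vfstab order:
#   device to mount   fsck dev   mount point    FStype  pass  boot  options
#   data/var_tellme   -          /var/tellme    zfs     -     yes   -
```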

Perhaps a better way to handle this is to snapshot d...@clean, then do a zfs send | zfs recv to make a working copy of the dataset. Copying the 10 GB will take a while, but afterwards you can destroy d...@clean and data/var_tel...@clean and delete everything from the 'data' dataset. The downside is that you won't be able to carry the existing snapshots over as children of the new dataset.
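Sketched out, the send/recv approach might look like this. The snapshot name @copy and the target dataset data/var_tellme_new are invented for illustration, and again `run` only prints the commands:

```shell
#!/bin/sh
# Sketch of the send|recv copy; hypothetical names, print-only 'run'.
run() { echo "+ $*"; }

run zfs snapshot data@copy                  # hypothetical snapshot to send from
run "zfs send data@copy | zfs recv data/var_tellme_new"  # full copy of the ~10 GB
run zfs destroy data/var_tellme_new@copy    # recv leaves the snapshot on the copy too
# Once you're happy with the copy: destroy the old @clean snapshots and
# clear out the old contents of 'data'.
```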


Regards,
markm
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
