2011-10-30 2:14, Edward Ned Harvey wrote:
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>> summer, and came up with a new question. In short, is it
>> possible to add "restartability" to ZFS SEND, for example
>
> Rather than building something new and special into the filesystem, would
> something like a restartable/continuable mbuffer command do the trick?
Well, it is true that for the purposes of sending a replication
stream over a flaky network, some sort of restartable buffer
program might suffice.
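To illustrate the idea only: if the stream were first spooled to a file, a "restartable" transfer is trivial, because the receiver can report how many bytes it already has and the sender can skip them. The `resume_copy` function, file names, and offset logic below are all hypothetical sketch material, not an mbuffer feature; and the real pain point is exactly that a live `zfs send` stream is not seekable like this:

```shell
# Hypothetical sketch: resume copying src into dst from the byte
# offset dst already contains, instead of restarting from zero.
resume_copy() {
    src="$1"; dst="$2"
    if [ -f "$dst" ]; then
        have=$(($(wc -c < "$dst")))   # bytes the receiver already has
    else
        have=0
    fi
    # Append only the missing tail; notrunc keeps the existing bytes.
    dd if="$src" of="$dst" bs=1 skip="$have" seek="$have" conv=notrunc 2>/dev/null
}
```

Running it twice, or after an interruption, converges on a complete copy instead of resending everything.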
If one or both machines were rebooted mid-transfer, however,
we would be back in the situation where all of the incomplete
snapshot's data was sent in vain: the receiver has to destroy
that partial data, which may even crash it during pool import.
Afterwards the whole send has to be attempted from scratch, and
if conditions are such that any single attempt is likely to
fail, it will keep failing.
Not all of our machines live in ivory-tower datacenters ;)
Per Paul Kraus (who recently wrote about similar problems):
> Uhhh, not being able to destroy snapshots that are "too big"
> is a pretty big one for us
Inserting artificial snapshots into existing datasets (perhaps
including the inheritance tree of "huge incomplete snapshots"
such as we can see now) might also make it possible to destroy
an unneeded dataset piece by piece, with less strain on the
system. Perhaps even without causing a loop of kernel panics, wow! ;)
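A minimal sketch of that piecewise teardown, assuming snapshots are destroyed oldest-first so each step releases a bounded batch of blocks. The function name and dataset names are hypothetical, and nothing below talks to a real pool; it only prints the commands that would be run, reading snapshot names from stdin (in real use they would come from `zfs list -H -t snapshot -o name -s creation -r dataset`):

```shell
# Hypothetical sketch: print per-snapshot destroy commands for a
# dataset, then the dataset itself, so the delete work is done in
# bounded pieces rather than one huge transaction.
emit_piecewise_destroy() {
    ds="$1"
    while IFS= read -r snap; do
        [ -n "$snap" ] && printf 'zfs destroy %s\n' "$snap"
    done
    printf 'zfs destroy %s\n' "$ds"
}
```

For example, `printf 'tank/ds@a\ntank/ds@b\n' | emit_piecewise_destroy tank/ds` prints three destroy commands, snapshots first.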
The way I see it, this feature would help solve (or at least
work around) two substantial problems. Substantial to me, at
least; perhaps to others, like Paul, too.
Because a single unit of ZFS-SEND replication is highly likely
to fail in my environment, I am forced not to use it at all.
I also have to plan the destruction of datasets on my home rig
(which was tainted with dedup) and expect weeks of downtime
while the system is repeatedly reset as it crawls through the
blocks being released after a large delete...
zfs-discuss mailing list