Thanks Ian. That sounds like an option as well. The plan was to break up the
file systems anyway, since some I will want to replicate remotely, and
others not as much.
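Something like this is what I have in mind - a minimal sketch of per-filesystem incremental sends over SSH. The pool, host, dataset, and snapshot names are all made up here, and DRY_RUN=1 only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch: replicate each fine-grained filesystem with its own incremental
# send, so a dropped link only forces a resend of one small stream.
POOL=tank                      # assumed local pool name
REMOTE=backup@remote.example   # assumed remote host
PREV=@snap-2012-10-04          # last snapshot both sides already have
CURR=@snap-2012-10-05          # snapshot to replicate now
DRY_RUN=${DRY_RUN:-1}          # 1 = just print the commands

for fs in home/docs home/photos vm/images; do
    cmd="zfs send -i $PREV $POOL/$fs$CURR | ssh $REMOTE zfs receive -F backup/$fs"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"
    else
        sh -c "$cmd"
    fi
done
```

The loop body is one send per leaf filesystem, which is exactly what makes the retry cost small when the WAN burps.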
On Fri, Oct 5, 2012 at 11:17 AM, Ian Collins <i...@ianshome.com> wrote:
> On 10/05/12 21:36, Jim Klimov wrote:
>> 2012-10-05 11:17, Tiernan OToole wrote:
>>> Also, as a follow-up question, but slightly unrelated, when it comes to
>>> the ZFS send, I could use SSH to do the send directly to the machine...
>>> Or I could upload the compressed, and possibly encrypted, dump to the
>>> server... Which, for resumability and speed, would be suggested? And
>>> if I were to go with an upload option, any suggestions on what I should
>> As for this, the answer depends on network bandwidth, reliability,
>> and snapshot file size - ultimately, on the probability and retry
>> cost of an error during transmission.
>> Many posters on the list strongly object to using files as storage
>> for snapshot streams, because in reliability terms this is (or may be)
>> worse than a single-disk pool with bitrot on it - a single-bit error in
>> a snapshot file can render it and every newer incremental snapshot invalid.
>> Still, given enough scratch space on the sending and receiving sides
>> and a bad (slow, glitchy) network in-between, I did go with compressed
>> files of zfs-send streams (perhaps handling the recursion myself and using
>> smaller files of one snapshot each - YMMV). For compression on multi-CPU
>> senders I can strongly suggest "pigz --fast $filename" (I did have
>> problems in pigz-1.7.1 compressing several files with one command,
>> maybe that's fixed now). If you're tight on space/transfer size more
>> than on CPU, you can try other parallel algos - pbzip2, p7zip, etc.
>> Likewise, you can also pass the file into an encryptor of your choice.
> I do have to suffer a slow, glitchy WAN to a remote server, and rather than
> send stream files, I broke the data on the remote server into a more
> fine-grained set of filesystems than I normally would. In this case, I made
> the directories under what would have been the leaf filesystems into
> filesystems themselves.
> By spreading the data over more filesystems, the individual incremental
> sends are smaller, so there is less data to resend if the link burps during
> a transfer.
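For the stream-to-file route Jim describes, a sketch of the mechanics looks like the following. This is an illustration only: gzip stands in for pigz, a block of random data stands in for the `zfs send` stream, and the chunk size and filenames are made up. The point is splitting into small re-uploadable pieces and checksumming them so bitrot is caught before `zfs receive`:

```shell
#!/bin/sh
# Sketch: store a send stream as compressed, checksummed chunk files.
set -e
work=$(mktemp -d)

# Stand-in for: zfs send -i @prev pool/fs@curr > stream
head -c 100000 /dev/urandom > "$work/stream"

# Compress and split into small pieces that can be re-uploaded individually
# after a failed transfer, instead of resending the whole stream.
gzip -1 -c "$work/stream" | split -b 16384 - "$work/snap.gz."

# Record checksums so corruption is detected before attempting a receive.
( cd "$work" && cksum snap.gz.* > SUMS )

# Receiving side: verify every chunk, then reassemble and decompress.
( cd "$work" && cksum snap.gz.* | diff - SUMS )
cat "$work"/snap.gz.* | gzip -d > "$work/stream.out"
cmp "$work/stream" "$work/stream.out"
```

In real use the decompressed output would be piped into `zfs receive` rather than written to a file, and an encryptor of your choice could sit between the compressor and the split.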
zfs-discuss mailing list