On 2012-Apr-11 18:34:42 +1000, Ian Collins <i...@ianshome.com> wrote:
>I use an application with a fairly large receive data buffer (256MB) to 
>replicate data between sites.
>I have noticed the buffer becoming completely full when receiving 
>snapshots for some filesystems, even over a slow (~2MB/sec) WAN 
>connection.  I assume this is due to the changes being widely scattered.

As Richard pointed out, the write side should be mostly contiguous.

>Is there any way to improve this situation?

Is the target pool nearly full (so ZFS is spending lots of time searching
for free space)?
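Pool occupancy is quick to check; a sketch, assuming the target pool is
named "tank" (substitute your pool name):

```shell
# The CAP column shows the percentage of pool space in use.
# Write performance typically degrades noticeably once a pool
# gets beyond roughly 80-90% full, because the allocator has
# to work harder to find free space.
zpool list tank
```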

Do you have dedup enabled on the target pool?  That forces ZFS to
search the DDT for every block written, which is expensive, especially
if you don't have enough RAM to keep the DDT cached.
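Both are easy to verify from the shell; again assuming a pool called
"tank":

```shell
# Check whether dedup is enabled on the receiving datasets:
zfs get dedup tank

# Show the dedup table (DDT) histogram.  Each unique block costs a
# DDT entry in RAM; if the table doesn't fit in ARC, every write
# turns into random reads against the on-disk DDT:
zpool status -D tank
```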

Do you have a high compression level (gzip or gzip-N) on the target
filesystems, without enough CPU horsepower?
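The compression setting is also one command away (pool name "tank" is
an assumption):

```shell
# gzip-9 costs far more CPU per block than the default lzjb;
# a receive into a gzip-9 dataset can easily become CPU-bound.
zfs get compression tank
```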

Do you have a dying (or dead) disk in the target pool?
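A sick disk often shows up as one device with much higher service
times than its peers, even before ZFS reports errors.  A quick way to
look, assuming a Solaris-style iostat:

```shell
# Report only pools that have errors or degraded devices:
zpool status -xv

# Watch per-device latency for a while; a single disk with asvc_t
# far above its siblings is usually the bottleneck:
iostat -xn 5
```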

Peter Jeremy
zfs-discuss mailing list
