Hi all,

I'm wondering whether it would be feasible to add an option that makes
rsync spawn a separate thread to close each file it has created, so
that the main process doesn't block while the destination file's data
is flushed during the close operation.

The reason I ask is that rsync is currently very slow on a fast,
locally-mounted network filesystem, because you get the following
behaviour:

 1. rsync reads the source file, network remains idle
 2. Kernel buffers start to fill up
 3. Some seconds later, kernel starts writing data to destination
    filesystem
 4. rsync gets to the end of the file and closes the target file
 5. rsync hangs for 10+ seconds while the target file's data gets
    flushed over the network (the small timing test after this list
    isolates this stall).  No data is being read from the source.
 6. Back to step 1, rsync reads the next file from the source, while
    the network is idle as the kernel buffers are now empty.
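
To make the stall in step 5 concrete, here is a minimal timing test I
put together (my own sketch, not rsync code; the path and file size
are placeholders).  It writes a large file in 1 MiB chunks and times
the write loop and the final close() separately; on a cached CIFS
mount I would expect the writes to return quickly and close() to
absorb the flush time:

    /* close_timing.c - time the write loop vs. the final close(). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        const char *path = "/mnt/cifs/testfile"; /* placeholder mount */
        const size_t chunk = 1 << 20;            /* 1 MiB per write  */
        const size_t chunks = 512;               /* 512 MiB in total */
        char *buf = calloc(1, chunk);
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (!buf || fd < 0) { perror("setup"); return 1; }

        double t0 = now();
        for (size_t i = 0; i < chunks; i++)
            if (write(fd, buf, chunk) != (ssize_t)chunk) {
                perror("write");
                return 1;
            }
        double t1 = now();
        close(fd);                  /* I expect the stall to land here */
        double t2 = now();

        printf("writes: %.2fs  close: %.2fs\n", t1 - t0, t2 - t1);
        free(buf);
        return 0;
    }

Compiled with something like "cc close_timing.c -o close_timing" and
run against the mount, a close time that dwarfs the write time would
confirm the picture above.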

This gives the impression that the copy process alternates between
reading and writing instead of doing both at the same time.  The
result is a much slower operation, because in step 1 above the
network is idle, and in step 5 the local disk is idle.

I am thinking that if the final close operation were performed in a
separate thread, the main rsync process could continue and start
copying the next file while the previous one was still being flushed.

This would mean both the source (local disk) and the target (network)
would be fully utilized, instead of each sitting idle for a large part
of the operation.
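
Roughly, I have in mind something like the sketch below (an
illustration of the idea only, not rsync's actual code; async_close()
and close_in_background() are names I made up).  The writer hands the
finished descriptor to a detached thread, so only that thread waits
for the flush (build with -pthread):

    #include <pthread.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Runs in its own thread: the close() may block for seconds while
     * the kernel flushes, but only this thread waits for it. */
    static void *close_in_background(void *arg)
    {
        close((int)(intptr_t)arg);
        return NULL;
    }

    /* Called in place of close(fd) once a file is fully written. */
    static int async_close(int fd)
    {
        pthread_t tid;
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        int rc = pthread_create(&tid, &attr, close_in_background,
                                (void *)(intptr_t)fd);
        pthread_attr_destroy(&attr);
        if (rc != 0)
            return close(fd);   /* fall back to a synchronous close */
        return 0;
    }

One catch I can see is that any error returned by a deferred close()
is lost, so a real implementation would need to collect those errors
and report them before rsync exits.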

Is something like this feasible?

Many thanks,
Adam.

P.S. I am using a CIFS mount for this, and when I mount it with
cache=none the alternating read-then-write behaviour goes away, but
the transfer rate drops by almost 30%, so it ends up being slower
overall.
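
For reference, the mount line I am using is along these lines (server,
share and mount point are placeholders):

    mount -t cifs //server/share /mnt/share -o cache=none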


