While we're discussing memory issues, could someone provide a simple
answer to the following three questions?
(a) How much memory, in bytes/file, does rsync allocate?
(b) Is this the same for the rsyncs on both ends, or is there
    some asymmetry there?
(c) Does it matter whether I'm pushing or pulling?

I'm -not- asking "how many bytes/meg of a -single- file must rsync
allocate while it's dealing with it?", since that's already been
answered, and isn't relevant to me (I expect a large number of small
to middling files, not a 10GB whopper).

I'm asking because I may conceivably be rsyncing a large number of
files from a fast machine with lots of memory to a slow machine with
not very much.  I'd like to know whether I'm likely to need more
memory for the slow machine, or whether it's likely to turn into a
significant bottleneck in the whole process.  (By "fast" and "slow",
I'm talking about a 1200MHz machine with 512meg of memory talking to
a P90 or 266MHz or somesuch with probably 64meg.  The latter is likely
to be some random old junked machine, but it'd be nice to know in
advance if it needs to be somewhat more capable.  The connection will
be no faster than a 10Mbps Ethernet and probably more like 2-3Mbps.
I don't much care if the slow machine means that I can't use 100% of
that bandwidth; I -do- care if its speed (as opposed to other factors)
means I can only use 1%, or if it runs out of memory and can't finish at all.)
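
To make the worry concrete, here's the back-of-the-envelope arithmetic
I've been doing.  The 100 bytes/file figure is a pure guess on my part
(it's exactly the number I'm asking about in (a)), so treat this as a
sketch, not a claim about what rsync actually does:

    # Rough estimate of file-list memory on the receiving box.
    # bytes_per_file is an ASSUMED figure, not something I've measured.
    bytes_per_file = 100          # guess; the real answer to (a) goes here
    file_count = 100_000          # roughly the number of files I expect
    total = bytes_per_file * file_count
    print(f"~{total / 2**20:.1f} MiB of file-list data for {file_count} files")
    # With these guesses: ~9.5 MiB -- noticeable on a 64meg machine, but
    # survivable.  At 1 KiB/file it would be ~95 MiB, which is not.

If anyone can tell me how far off those guesses are, that answers most
of my question.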

By the way, this does seem to be (once again) a potential argument for
the --files-from switch.  Doing it -that- way means (I hope!) that
rsync would not build up an in-memory copy of the whole filesystem;
its memory use would presumably grow only until it had enough files in
its current queue to keep the network connection streaming at full
speed, and would then basically stabilize.  So it might know about the
10-100 files it's currently checksumming and sending across the
network, but not about all 100,000 files at once.
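
To illustrate the kind of behaviour I'm imagining (and this is
emphatically -not- a claim about how rsync or --files-from is actually
implemented, just a toy sketch of the bounded-queue idea; the path is
made up):

    import os
    from itertools import islice

    def walk_files(top):
        # Lazily yield file paths; nothing accumulates beyond the
        # directories currently being walked.
        for dirpath, _dirs, names in os.walk(top):
            for name in names:
                yield os.path.join(dirpath, name)

    def batches(paths, size):
        # Hand out at most 'size' paths at a time.
        it = iter(paths)
        while True:
            batch = list(islice(it, size))
            if not batch:
                return
            yield batch

    # Only one small batch of paths is ever resident, whether the tree
    # holds 100 files or 100,000.
    for batch in batches(walk_files("/some/tree"), 50):
        for path in batch:
            pass  # checksumming/transfer would happen here

Whether the actual protocol can stream like that without first knowing
the whole file list is precisely what I don't know.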

Thanks!
