Greg Wooledge wrote:
> On Thu, Feb 26, 2026 at 13:37:55 -0500, [email protected] wrote:
>> But, overall, it seems like a lot of overhead, including reading both the
>> source and destination file, calculating checksums, and then (partially)
>> rewriting the destination file. Seems worth it for a remote system, but I'm
>> not sure about on the same machine.
>
> In the most common case (a backup that's performed multiple times),
> a lot of the files will have the same name, size, owner, group, permissions
> and timestamps. Rsync will skip those entirely.
>
> The checksum algorithm is only used for files that have been altered.
> I'm not intimately familiar with the algorithm, so I don't know at what
> point it says "Hey, this is a whole new file, so I'm just gonna copy the
> whole thing", but I assume there is such a point.
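The "skip those entirely" step Greg describes is rsync's so-called quick
check. A minimal sketch of the idea (my own illustration, not rsync
source code): with default options a regular file is skipped when its
size and modification time match the destination copy.

```python
import os

def quick_check_matches(src: str, dst: str) -> bool:
    """True if dst looks up to date, i.e. the file would be skipped.

    Sketch of rsync's default quick check: compare size and mtime
    (rsync also matches on name, and -a carries owner/group/perms).
    """
    try:
        s, d = os.stat(src), os.stat(dst)
    except FileNotFoundError:
        return False  # destination missing: file must be transferred
    return s.st_size == d.st_size and int(s.st_mtime) == int(d.st_mtime)
```

Only files that fail this cheap test ever reach the checksum machinery.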
It's probably worth adding that the checksum and partial-file
reconstruction are disabled by default for this particular use
(copying a directory locally):
--whole-file, -W
This option disables rsync's delta-transfer algorithm,
which causes all transferred files to be sent whole. The
transfer may be faster if this option is used when the
bandwidth between the source and destination machines is
higher than the bandwidth to disk (especially when the
"disk" is actually a networked filesystem). This is the
default when both the source and destination are specified
as local paths, but only if no batch-writing option is in
effect.
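For the curious, the delta-transfer algorithm that -W switches off
hinges on a weak "rolling" checksum: the sum over a window can be
updated in O(1) as the window slides one byte, so the receiver's block
checksums can be matched against every offset of the sender's file
cheaply. A toy sketch of that rolling property (my own illustration in
the style of rsync's weak checksum, not rsync's actual code):

```python
M = 1 << 16  # both checksum halves are taken mod 2^16

def weak(block: bytes) -> int:
    """Weak checksum of a block: low 16 bits a plain sum, high 16 bits
    a position-weighted sum."""
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a + (b << 16)

def roll(old: int, out: int, inp: int, blocklen: int) -> int:
    """Slide the window one byte: drop byte `out`, append byte `inp`."""
    a = old & 0xFFFF
    b = old >> 16
    a = (a - out + inp) % M
    b = (b - blocklen * out + a) % M
    return a + (b << 16)

data = b"the quick brown fox jumps over the lazy dog"
L = 8
c = weak(data[0:L])
for i in range(1, len(data) - L + 1):
    c = roll(c, data[i - 1], data[i - 1 + L], L)
    assert c == weak(data[i:i + L])  # rolled value matches recomputing
```

Blocks whose weak checksum matches are then confirmed with a strong
hash, which is why the overhead only really pays off when the link is
slower than the disk.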
--
Todd