That's why I asked: I wanted to know whether there is something inherently bad 
about "-z".  I had a situation where PostgreSQL was replicating 16 MB files 
("log shipping") every few minutes on approximately 10 systems.  Replication 
got behind, which resulted in almost continuous transfer of mostly-null 16 MB 
files and saturated the common link.  Specifying compression for the transfer 
cut transmission time by 5-10x and resolved the problem.
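For illustration (the file below is a throwaway created with dd, not a real WAL segment): a mostly-null 16 MB file like the ones described above compresses to almost nothing, which is why -z made such a difference in this case:

```shell
# Create a 16 MB all-zero file, roughly like the mostly-null files above.
dd if=/dev/zero of=/tmp/wal-test bs=1M count=16 2>/dev/null

# Compress a copy (-k keeps the original) and compare sizes.
gzip -kf /tmp/wal-test
ls -l /tmp/wal-test /tmp/wal-test.gz   # the .gz is a tiny fraction of 16 MB

# Clean up.
rm -f /tmp/wal-test /tmp/wal-test.gz
```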

From: CentOS <> on behalf of Simon Matter via CentOS 
Sent: Wednesday, March 25, 2020 1:15 PM
To: CentOS mailing list <>
Subject: [EXTERNAL] Re: [CentOS] Need help to fix bug in rsync



Leroy Tennison
Network Information/Cyber Security Specialist



> On Wed, 2020-03-25 at 14:39 +0000, Leroy Tennison wrote:
>> Since you state that using -z is almost always a bad idea, could you
>> provide the rationale for that?  I must be missing something.
> I think the "rationale" is that at some point the
> compression/decompression takes longer than the time saved by
> sending a compressed file.  It depends on the relative speeds of the
> machines and the network.
> You have the most to gain from compressing large files, but if they
> are already compressed there is nothing to gain, and compressing only
> the small files buys you little.
> It obviously depends on your network speed, and on whether you have a
> metered connection, but does anyone really still have such an ancient
> network connection these days?  If you have machines fast enough at
> both ends to do rapid compression/decompression, it seems unlikely
> that you will have a damp piece of string connecting them.

I really don't understand the discussion here. What is wrong with using -z
with rsync? We use rsync with -z for backups simply because we don't want to
waste bandwidth for nothing. We have better uses for our bandwidth, and it
makes quite a difference when backing up terabytes of data.

The only reason I asked for help is that we don't want to double-compress
data which is already compressed. That is what is currently broken in rsync
unless you manually specify a skip-compress list. Fixing it would help all
those who don't know it's broken now.
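As a stop-gap until the default list is fixed, the already-compressed suffixes can be listed by hand with rsync's --skip-compress option (the list is '/'-separated; the host and paths below are made up):

```shell
# Compress in transit (-z via -a plus -z), but skip files whose suffix
# says they are already compressed, so they are not compressed twice.
rsync -az --skip-compress=gz/bz2/xz/zst/zip/jpg/png/mp4 \
    /srv/backup/ backuphost:/srv/backup/
```

This only matches on filename suffix, so mis-named files will still be double-compressed; it is a workaround, not a fix for the broken default list.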

