On Fri, Dec 12, 2014 at 08:27:59AM -0500, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian <br...@momjian.us> wrote:
> >> compression = 'on'  : 1838 secs
> >>             = 'off' : 1701 secs
> >>
> >> Difference is around 140 secs.
> >
> > OK, so the compression took 2x the cpu and was 8% slower.  The only
> > benefit is WAL files are 35% smaller?
>
> Compression didn't take 2x the CPU.  It increased user CPU from 354.20
> s to 562.67 s over the course of the run, so it took about 60% more
> CPU.
>
> But I wouldn't be too discouraged by that.  At least AIUI, there are
> quite a number of users for whom WAL volume is a serious challenge,
> and they might be willing to pay that price to have less of it.  Also,
> we have talked a number of times before about incorporating Snappy or
> LZ4, which I'm guessing would save a fair amount of CPU -- but the
> decision was made to leave that out of the first version, and just use
> pg_lz, to keep the initial patch simple.  I think that was a good
> decision.
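For what it's worth, both figures quoted above (the ~60% CPU increase and the ~8% slowdown) can be checked directly from the numbers reported in the thread:

```python
# Numbers reported earlier in this thread.
cpu_off = 354.20      # user CPU (s), compression = 'off'
cpu_on = 562.67       # user CPU (s), compression = 'on'
elapsed_off = 1701    # elapsed time (s), compression = 'off'
elapsed_on = 1838     # elapsed time (s), compression = 'on'

# Relative CPU increase from enabling WAL compression.
cpu_increase = (cpu_on - cpu_off) / cpu_off
print(f"CPU increase: {cpu_increase:.1%}")      # ~58.9%, i.e. "about 60% more"

# Relative elapsed-time slowdown.
slowdown = (elapsed_on - elapsed_off) / elapsed_off
print(f"Slowdown: {slowdown:.1%}")              # ~8.1%, i.e. "8% slower"
```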
Well, the larger question is why we wouldn't just have the user compress
the entire WAL file before archiving --- why have each backend do it?
Is it the write volume we are saving?  I thought this WAL compression
gave better performance in some cases.

--
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
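To make the alternative concrete: whole-file compression at archive time needs no backend changes at all, just an archive_command that pipes each completed segment through a compressor. A minimal sketch (paths are illustrative, not from this thread; %p and %f are the standard archive_command placeholders):

```
# postgresql.conf -- compress whole WAL segments at archive time,
# instead of having each backend compress individual records.
archive_mode = on
archive_command = 'gzip < %p > /mnt/archive/%f.gz'
```

Of course, this only shrinks the archived copies; it does nothing for the write volume on pg_xlog itself, which is presumably the case the per-record compression patch is aimed at.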