On Tue, Dec 30, 2014 at 6:21 PM, Jeff Davis <pg...@j-davis.com> wrote:
> On Fri, 2013-08-30 at 09:57 +0300, Heikki Linnakangas wrote:
>> Speeding up the CRC calculation obviously won't help with the WAL volume
>> per se, ie. you still generate the same amount of WAL that needs to be
>> shipped in replication. But then again, if all you want to do is to
>> reduce the volume, you could just compress the whole WAL stream.
>
> Was this point addressed?

Compressing the whole record is interesting for multi-insert records, but since we need to keep the compressed data in a pre-allocated buffer until the WAL is written, we can only compress records within a given size range. The point is that even if we define a lower bound, compression will perform badly with an application that generates, for example, many small records that sit just above that lower bound... Unsurprisingly, this performed badly for small records:
http://www.postgresql.org/message-id/cab7npqsc97o-ue5paxfmukwcxe_jioyxo1m4a0pmnmyqane...@mail.gmail.com

Now, are there still people interested in seeing how much time is spent in the CRC calculation depending on the record length? Isn't that worth raising on the CRC thread, btw? I'd imagine it would be simple to evaluate the cost of the CRC calculation within a single process using a bit of getrusage.
> How much benefit is there to compressing the data before it goes into
> the WAL stream versus after?

Here is a good list:
http://www.postgresql.org/message-id/20141212145330.gk31...@awork2.anarazel.de

Regards,
--
Michael