On Fri, Dec 12, 2014 at 1:34 AM, Bruce Momjian <br...@momjian.us> wrote:

> On Thu, Dec 11, 2014 at 01:26:38PM +0530, Rahila Syed wrote:
> > >I am sorry but I can't understand the above results due to wrapping.
> > >Are you saying compression was twice as slow?
> >
> > CPU usage at user level (in seconds) for compression set 'on' is 562 secs
> > while that for compression set 'off' is 354 secs. As per the readings, it
> > takes a little less than double the CPU time to compress.
> > However, the total time taken to run 250000 transactions for each of the
> > scenarios is as follows:
> >
> > compression = 'on'  : 1838 secs
> >             = 'off' : 1701 secs
> >
> > The difference is around 140 secs.
> OK, so the compression took 2x the cpu and was 8% slower.  The only
> benefit is WAL files are 35% smaller?

That depends as well on the compression algorithm used. I am far from being
a specialist in this area, but I would guess that some algorithms consume
less CPU at the cost of a lower compression ratio, and that there are no
magic solutions. A correct answer would be either to change the compression
algorithm present in core to something better suited to FPW compression, or
to add hooks so that people can plug in the compression algorithm they want
for the compression and decompression calls. In any case, and for any type
of compression (be it a different algorithm, record-level compression, or
FPW compression), what we have here is a tradeoff, and a switch for people
who care more about I/O than CPU usage. And we would still face CPU bursts
at checkpoints in any case, because I can't imagine FPWs not being
compressed even if we do something at the record level (so what we have
here is the light-compression version).