On 12 December 2014 at 21:40, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Dec 12, 2014 at 1:51 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> What I don't understand is why we aren't working on double buffering,
>> since that cost would be paid in a background process and would be
>> evenly spread out across a checkpoint. Plus we'd be able to remove
>> FPWs altogether, which is like 100% compression.
>
> The previous patch to implement that - by somebody at vmware - was an
> epic fail. I'm not opposed to seeing somebody try again, but it's a
> tricky problem. When the double buffer fills up, then you've got to
> finish flushing the pages whose images are stored in the buffer to
> disk before you can overwrite it, which acts like a kind of
> mini-checkpoint. That problem might be solvable, but let's use this
> thread to discuss this patch, not some other patch that someone might
> have chosen to write but didn't.
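(For illustration only: the "mini-checkpoint" effect described above amounts to
something like the sketch below. This is not the vmware patch and not
PostgreSQL code; all names such as dw_append and dw_flush_all are hypothetical.
The point is simply that once the fixed-size buffer of page images is full, the
writer stalls until every buffered image has been flushed to its home location.)

/*
 * Minimal sketch of a double-write buffer, assuming a fixed number of
 * slots holding full page images. Appending when the buffer is full
 * forces a flush of everything already buffered -- the mini-checkpoint.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define DW_SLOTS   128          /* buffer capacity, in page images */
#define PAGE_SIZE  8192

typedef struct PageImage
{
    unsigned long block_no;
    char          data[PAGE_SIZE];
} PageImage;

static PageImage dw_buffer[DW_SLOTS];
static int       dw_used = 0;

/* Write every buffered image to its home location, then reset the buffer.
 * (Real code would also fsync the double-write area and the data files.) */
static void
dw_flush_all(void)
{
    for (int i = 0; i < dw_used; i++)
        printf("flushing block %lu\n", dw_buffer[i].block_no);
    dw_used = 0;
}

/* Append one page image; returns true if the append forced a flush,
 * i.e. the caller just paid for a mini-checkpoint. */
static bool
dw_append(unsigned long block_no, const char *page)
{
    bool flushed = false;

    if (dw_used == DW_SLOTS)
    {
        dw_flush_all();         /* cannot overwrite slots still unflushed */
        flushed = true;
    }
    dw_buffer[dw_used].block_no = block_no;
    memcpy(dw_buffer[dw_used].data, page, PAGE_SIZE);
    dw_used++;
    return flushed;
}

int
main(void)
{
    char page[PAGE_SIZE] = {0};

    for (unsigned long blk = 0; blk < 300; blk++)
        if (dw_append(blk, page))
            printf("buffer full at block %lu: stalled for a flush\n", blk);
    return 0;
}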
No, I think it's relevant. WAL compression looks to me like a short-term
tweak, not the end game. On that basis, we should go for simple and
effective, user-settable compression of FPWs and not spend too much
Valuable Committer Time on it.

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services