On Wed, Mar 11, 2015 at 7:08 AM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
I have some minor comments
The comments have been implemented in the attached patch.
Thanks for updating the patch! I just changed a bit and finally pushed it.
Thanks everyone involved in this patch!
Regards,
On Mon, Mar 9, 2015 at 9:08 PM, Michael Paquier wrote:
On Mon, Mar 9, 2015 at 4:29 PM, Fujii Masao wrote:
Thanks for updating the patch! Attached is the refactored version of the
patch.
Fujii-san and I had a short chat about tuning the PGLZ strategy a bit,
which is now PGLZ_strategy_default in
On Mon, Mar 9, 2015 at 9:08 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Mon, Mar 9, 2015 at 4:29 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, Mar 5, 2015 at 10:08 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Thu, Mar 5, 2015 at 9:14 PM, Syed, Rahila
Hello,
I have some minor comments
The comments have been implemented in the attached patch.
I think that extra parentheses should be used for the first expression
with BKPIMAGE_HAS_HOLE.
Parentheses have been added to improve code readability.
Thank you,
Rahila Syed
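The parentheses remark above is about C operator precedence: equality binds tighter than bitwise AND, so a flag test written without parentheses compares the flag constant first. A minimal sketch — the flag values and helper names here are illustrative stand-ins, not the patch's actual definitions:

```c
#include <stdint.h>

/* Illustrative flag bits in the BKPIMAGE_* style; values are assumptions */
#define BKPIMAGE_HAS_HOLE      0x01
#define BKPIMAGE_IS_COMPRESSED 0x02

/* Intended test: "no hole in this image" */
static int no_hole_right(uint8_t bimg_info)
{
    return (bimg_info & BKPIMAGE_HAS_HOLE) == 0;
}

/* Without parentheses this parses as bimg_info & (BKPIMAGE_HAS_HOLE == 0),
 * i.e. bimg_info & 0, which is always false. */
static int no_hole_wrong(uint8_t bimg_info)
{
    return bimg_info & BKPIMAGE_HAS_HOLE == 0;
}
```

The unparenthesized form is never true, even for an image with no hole set, which is why the review asked for the explicit grouping.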
Hello,
Please find attached a patch. As discussed, flag to denote compression and
presence of hole in block image has been added in XLogRecordImageHeader rather
than block header.
Following are WAL numbers based on attached test script posted by Michael
earlier in the thread.
On 2015-03-05 12:14:04 +, Syed, Rahila wrote:
Please find attached a patch. As discussed, flag to denote
compression and presence of hole in block image has been added in
XLogRecordImageHeader rather than block header.
FWIW, I personally won't commit it with things done that way. I think
Hello,
Are there any other flag bits that we should add, or are planning to add, to the
WAL header, besides the above two? If yes, and they are required even by a
block which doesn't have an image, I will change my mind and agree to add
something like chunk ID to a block header.
But I guess the
Hello,
It would be good to get those problems fixed first. Could you send an updated
patch?
Please find attached an updated patch with the WAL replay error fixed. The patch
follows the chunk ID approach of the xlog format.
Following are brief measurement numbers.
On Wed, Mar 4, 2015 at 12:41 AM, Syed, Rahila rahila.s...@nttdata.com wrote:
Please find attached an updated patch with the WAL replay error fixed. The patch
follows the chunk ID approach of the xlog format.
(Review done independently of the chunk_id stuff being good or not,
already gave my opinion on the
On 2015-03-03 08:59:30 +0900, Michael Paquier wrote:
Already mentioned upthread, but I agree with Fujii-san here: adding
information related to the state of a block image in
XLogRecordBlockHeader makes little sense because we are not sure to
have a block image, perhaps there is only data
Hello,
When I test the WAL or replication related features, I usually run
make installcheck and pgbench against the master at the same time
after setting up the replication environment.
I will conduct these tests before sending updated version.
Seems this increases the header size of WAL record
On Tue, Feb 24, 2015 at 6:46 PM, Syed, Rahila rahila.s...@nttdata.com wrote:
Hello ,
I've not read this logic yet, but ISTM there is a bug in that new WAL format
because I got the following error and the startup process could not replay
any WAL records when I set up replication and enabled
On Fri, Feb 27, 2015 at 6:54 AM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
Even this patch doesn't work fine. The standby emits the following
error messages.
Yes this bug remains unsolved. I am still working on resolving this.
Following chunk IDs have been added in the attached patch
On 2015-02-24 16:03:41 +0900, Michael Paquier wrote:
Looking at this code, I think that it is really confusing to move the data
related to the status of the backup block out of XLogRecordBlockImageHeader
to the chunk ID itself that may *not* include a backup block at all as it
is conditioned
On Mon, Feb 23, 2015 at 5:28 PM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
Attached is a patch which has the following changes:
As suggested above, the block ID in the xlog structs has been replaced by a chunk ID.
Chunk ID is used to distinguish between different types of xlog record
fragments.
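To make the chunk-ID idea concrete, here is a sketch of how a decoder might dispatch on such IDs to tell record fragments apart; the names and values are hypothetical stand-ins, not the ones from the patch:

```c
#include <stdint.h>

/* Hypothetical chunk IDs -- illustrative only, not the patch's values */
#define XLR_CHUNK_BLOCK_REFERENCE 0x10
#define XLR_CHUNK_BLOCK_IMAGE     0x20
#define XLR_CHUNK_DATA_SHORT      0xA0

/* Classify a record fragment by its chunk ID, in the spirit of the
 * decoding loop in xlogreader.c. Returns a short tag for the fragment. */
static const char *classify_chunk(uint8_t chunk_id)
{
    switch (chunk_id)
    {
        case XLR_CHUNK_BLOCK_REFERENCE:
            return "block-ref";
        case XLR_CHUNK_BLOCK_IMAGE:
            return "block-image";
        case XLR_CHUNK_DATA_SHORT:
            return "main-data";
        default:
            return "unknown";
    }
}
```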
will post a patch based on this.
Thank you,
Rahila Syed
-Original Message-
From: Andres Freund [mailto:and...@2ndquadrant.com]
Sent: Monday, February 16, 2015 5:26 PM
To: Syed, Rahila
Cc: Michael Paquier; Fujii Masao; PostgreSQL mailing lists
Subject: Re: [HACKERS] [REVIEW] Re: Compression
Hello,
Thank you for reviewing and testing the patch.
+ /* leave if data cannot be compressed */
+ if (compressed_len == 0)
+ return false;
This should be < 0; pglz_compress returns -1 when compression fails.
+ if (pglz_decompress(block_image,
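The fix the review asks for, sketched with a toy compressor that follows the same convention (returns the compressed length, or -1 on failure); `toy_compress`, `compress_block`, and `try_page` are stand-ins of mine, not the patch's actual functions:

```c
#include <stdint.h>
#include <string.h>

typedef int32_t int32;

/* Stand-in for pglz_compress(): returns the compressed length on success,
 * or -1 when the data cannot be compressed (here: any non-zero byte). */
static int32 toy_compress(const char *src, int32 slen, char *dst)
{
    for (int32 i = 0; i < slen; i++)
        if (src[i] != 0)
            return -1;           /* failure, like pglz_compress */
    dst[0] = 0;                  /* pretend the page collapsed to one byte */
    return 1;
}

/* The corrected guard from the review: compare against < 0, not == 0 */
static int compress_block(const char *page, int32 len, char *out)
{
    int32 compressed_len = toy_compress(page, len, out);

    /* leave if data cannot be compressed */
    if (compressed_len < 0)
        return 0;                /* caller stores the raw page instead */
    return 1;
}

/* Tiny driver: compress a 16-byte zero page whose first byte is 'first' */
static int try_page(char first)
{
    char page[16] = {0};
    char out[16];

    page[0] = first;
    return compress_block(page, 16, out);
}
```

With `== 0`, a failed compression (returning -1) would be treated as success and garbage would be stored, which is the bug the reviewer points out.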
On 2015-02-16 11:30:20 +, Syed, Rahila wrote:
- * As a trivial form of data compression, the XLOG code is aware that
- * PG data pages usually contain an unused hole in the middle, which
- * contains only zero bytes. If hole_length > 0 then we have removed
- * such a hole from the stored
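The quoted comment describes the pre-existing "trivial compression": drop the all-zero hole before storing the page. A minimal sketch of that copy — `DEMO_BLCKSZ` stands in for the usual 8 kB page size and the driver is mine, not the backend's buffer handling:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DEMO_BLCKSZ 8192

/* Copy a page into scratch with its hole elided: the bytes before the
 * hole, then the bytes after it. Returns the stored length. */
static size_t remove_hole(const char *page, char *scratch,
                          uint16_t hole_offset, uint16_t hole_length)
{
    memcpy(scratch, page, hole_offset);
    memcpy(scratch + hole_offset,
           page + hole_offset + hole_length,
           DEMO_BLCKSZ - (hole_offset + hole_length));
    return (size_t) (DEMO_BLCKSZ - hole_length);
}

/* Driver: page full of 'x' with a zeroed hole; returns the stored length,
 * or 0 if any hole byte leaked into the scratch buffer. */
static size_t demo_remove_hole(uint16_t hole_offset, uint16_t hole_length)
{
    static char page[DEMO_BLCKSZ];
    static char scratch[DEMO_BLCKSZ];
    size_t stored;

    memset(page, 'x', DEMO_BLCKSZ);
    memset(page + hole_offset, 0, hole_length);
    stored = remove_hole(page, scratch, hole_offset, hole_length);
    for (size_t i = 0; i < stored; i++)
        if (scratch[i] != 'x')
            return 0;
    return stored;
}
```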
On Mon, Feb 16, 2015 at 8:30 PM, Syed, Rahila rahila.s...@nttdata.com
wrote:
Regarding the sanity checks that have been added recently. I think that
they are useful but I am suspecting as well that only a check on the record
CRC is done because that's reliable enough and not doing those
Thank you for comments. Please find attached the updated patch.
This patch fails to compile:
xlogreader.c:1049:46: error: extraneous ')' after condition, expected a
statement
blk->with_hole && blk->hole_offset == 0))
This has been rectified.
Note as well
On Wed, Feb 11, 2015 at 11:03 PM, Syed, Rahila rahila.s...@nttdata.com
wrote:
IMO, we should add details about how this new field is used in the
comments on top of XLogRecordBlockImageHeader, meaning that when a page
hole is present we use the compression info structure and when there is no
Hello,
Do we always need extra two bytes for compressed backup block?
ISTM that extra bytes are not necessary when the hole length is zero.
In this case the length of the original backup block (i.e.,
uncompressed) must be BLCKSZ, so we don't need to save
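The argument above in a sketch: the uncompressed length is derivable from the hole length alone, so the extra two length bytes are only needed when a hole exists. `DEMO_BLCKSZ` is the usual 8 kB page size and the helper names are mine:

```c
#include <stdint.h>

#define DEMO_BLCKSZ 8192

/* Uncompressed length of a backup block image, from header fields alone */
static uint32_t raw_image_length(uint16_t hole_length)
{
    return DEMO_BLCKSZ - hole_length;
}

/* The extra two length bytes are needed only when a hole was removed;
 * with hole_length == 0 the raw length is known to be exactly BLCKSZ. */
static int needs_extra_length_bytes(uint16_t hole_length)
{
    return hole_length != 0;
}
```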
On Mon, Feb 9, 2015 at 10:27 PM, Syed, Rahila wrote:
(snip)
Thanks for showing up here! I have not tested the patch,
those comments are based on what I read from v17.
Do we always need extra two bytes for compressed backup block?
ISTM that extra bytes are not necessary when the hole
attached.
Thank you,
Rahila Syed
-Original Message-
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Friday, February 06, 2015 6:00 PM
To: Fujii Masao
Cc: Syed, Rahila; PostgreSQL mailing lists
Subject: Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes
On Fri, Feb 6
On Fri, Feb 6, 2015 at 6:35 PM, Syed, Rahila wrote:
The compression patch can use the latest interface
MemoryContextAllocExtended to proceed without compression when sufficient
memory is not available for
the scratch buffer.
The attached patch introduces an OutOfMem flag which is set when
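A sketch of that fallback, with plain malloc() standing in for MemoryContextAllocExtended(..., MCXT_ALLOC_NO_OOM), which returns NULL instead of erroring out; the flag and function names here are mine, not the patch's:

```c
#include <stdlib.h>

#define SCRATCH_SIZE 8192

static char *compression_scratch = NULL;
static int out_of_mem = 0;

/* Try to get the scratch buffer without failing hard; on shortage,
 * record the fact so the caller writes pages uncompressed instead. */
static void init_compression_scratch(void)
{
    compression_scratch = malloc(SCRATCH_SIZE); /* stand-in for no-OOM alloc */
    out_of_mem = (compression_scratch == NULL);
}

/* Compression is attempted only when the scratch buffer exists */
static int can_compress(void)
{
    if (compression_scratch == NULL && !out_of_mem)
        init_compression_scratch();
    return compression_scratch != NULL;
}
```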
you,
Rahila Syed
-Original Message-
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Paquier
Sent: Wednesday, January 07, 2015 9:32 AM
To: Rahila Syed
Cc: PostgreSQL mailing lists
Subject: Re: [HACKERS] [REVIEW] Re: Compression
On Thu, Feb 5, 2015 at 11:06 PM, Syed, Rahila rahila.s...@nttdata.com wrote:
/*
+* We recheck the actual size even if pglz_compress() reports success,
+* because it might be satisfied with having saved as little as one byte
+* in the compressed data.
+*/
+ *len = (uint16)
Fujii Masao wrote:
I wrote
This is an inspiration from lz4 APIs. Wouldn't it be buggy for a
compression algorithm to return a size of 0 bytes as compressed or
decompressed length btw? We could as well make it return a negative
value when a failure occurs if you feel more comfortable with it.
On Wed, Jan 7, 2015 at 12:51 AM, Rahila Syed rahilasyed...@gmail.com wrote:
Following are some comments,
Thanks for the feedback.
uint16 hole_offset:15, /* number of bytes in hole */
Typo in description of hole_offset
Fixed. That's before hole.
for (block_id = 0; block_id <=
On Mon, Jan 5, 2015 at 10:29 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Sun, Dec 28, 2014 at 10:57 PM, Michael Paquier wrote:
The patch 1 cannot be applied to the master successfully because of
recent change.
Yes, that's caused by ccb161b. Attached are rebased versions.
- The real stuff
On Wed, Dec 24, 2014 at 9:03 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
Returning only a boolean is fine for me (that's what my first patch
did), especially if we add at some point hooks for compression and
decompression calls.
Here is a patch rebased on current HEAD (60838df) for the
On Fri, Dec 26, 2014 at 3:24 PM, Fujii Masao masao.fu...@gmail.com wrote:
pglz_compress() and pglz_decompress() still use PGLZ_Header, so the frontend
which uses those functions needs to handle PGLZ_Header. But it basically
should
be handled via the varlena macros. That is, the frontend still
On Wed, Dec 24, 2014 at 8:44 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Fri, Dec 19, 2014 at 12:19 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
Firstly I'm thinking to commit the
0001-Move-pg_lzcompress.c-to-src-common.patch.
pg_lzcompress.h still exists in include/utils, but it
On Thu, Dec 18, 2014 at 2:21 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Wed, Dec 17, 2014 at 11:33 PM, Rahila Syed rahilasye...@gmail.com
wrote:
I had a look at the code. I have a few minor points,
Thanks!
+ bkpb.fork_flags |= BKPBLOCK_HAS_IMAGE;
+
+ if
Isn't it better to allocate the memory for compression_scratch in
InitXLogInsert()
like hdr_scratch?
I think making compression_scratch a statically allocated global variable
is the result of the following discussion earlier,
On Thu, Dec 18, 2014 at 5:27 PM, Fujii Masao masao.fu...@gmail.com wrote:
Thanks!
Thanks for your input.
+else
+memcpy(compression_scratch, page, page_len);
I don't think the block image needs to be copied to scratch buffer here.
We can try to compress
Hello,
Patches as well as results are attached.
I had a look at the code. I have a few minor points,
+ bkpb.fork_flags |= BKPBLOCK_HAS_IMAGE;
+
+ if (is_compressed)
{
- rdt_datas_last->data = page;
- rdt_datas_last->len = BLCKSZ;
+
On Wed, Dec 17, 2014 at 1:34 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Wed, Dec 17, 2014 at 12:00 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
Actually, the original length of the compressed block is saved in
PGLZ_Header, so if we are fine to not check the size of the
On Thu, Dec 18, 2014 at 1:05 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Wed, Dec 17, 2014 at 1:34 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
I think that neither pg_control nor xl_parameter_change need to have the info
about WAL compression because each backup block has that
Michael Paquier wrote:
And here are attached fresh patches reducing the WAL record size to what it
is in head when the compression switch is off. Looking at the logic in
xlogrecord.h, the block header stores the hole length and hole offset. I
changed that a bit to store the length of raw
On Mon, Dec 15, 2014 at 5:37 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Tue, Dec 16, 2014 at 5:14 AM, Merlin Moncure mmonc...@gmail.com wrote:
OTOH, our built-in compressor, as we all know, is a complete dog in
terms of cpu when stacked up against some more modern implementations.
On Wed, Dec 17, 2014 at 12:12 AM, Merlin Moncure mmonc...@gmail.com wrote:
Compressing *one* file with lz4 and a quick-n-dirty pglz I hacked out
of the source (borrowing heavily from
https://github.com/maropu/pglz_bench/blob/master/pglz_bench.cpp), I
tested the results:
lz4 real time:
Note: this patch has been moved to CF 2014-12 and I marked myself as
an author if that's fine... I've finished by being really involved in
that.
--
Michael
On 13 December 2014 at 14:36, Michael Paquier michael.paqu...@gmail.com wrote:
Something to be aware of btw is that this patch introduces an
additional 8 bytes per block image in WAL as it contains additional
information to control the compression. In this case this is the
uint16 compress_len
On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us wrote:
compression = 'on' : 1838 secs
= 'off' : 1701 secs
Difference is around 140 secs.
OK, so the compression took 2x the cpu and was 8% slower. The only
benefit is WAL files are 35% smaller?
Compression didn't
Hello,
Well, the larger question is why wouldn't we just have the user compress
the entire WAL file before archiving --- why have each backend do it?
Is it the write volume we are saving?
IIUC, the idea here is to not only save the on-disk size of WAL but to
reduce the overhead of flushing WAL
On Wed, Dec 10, 2014 at 11:25 PM, Bruce Momjian br...@momjian.us wrote:
On Wed, Dec 10, 2014 at 07:40:46PM +0530, Rahila Syed wrote:
The tests ran for around 30 mins. A manual checkpoint was run before each
test.
Compression   WAL generated   %compression   Latency-avg   CPU usage
On 2014-12-12 23:50:43 +0900, Michael Paquier wrote:
I got curious to see how the compression of an entire record would perform
and how it compares for small WAL records. Here are some numbers based
on the attached patch, which compresses the whole record including the
block headers,
On Fri, Dec 12, 2014 at 10:04 AM, Andres Freund and...@anarazel.de wrote:
Note that autovacuum and fsync are off.
=# select phase, user_diff, system_diff,
pg_size_pretty(pre_update - pre_insert),
pg_size_pretty(post_update - pre_update) from results;
phase| user_diff |
On 2014-12-12 11:08:52 -0500, Robert Haas wrote:
Unless I'm missing something, this test is showing that FPW
compression saves 298MB of WAL for 17.3 seconds of CPU time, as
against master. And compressing the whole record saves a further 1MB
of WAL for a further 13.39 seconds of CPU time.
On 12 December 2014 at 18:04, Bruce Momjian br...@momjian.us wrote:
Well, it seems we need to see some actual cases where compression does
help before moving forward. I thought Amit had some amazing numbers for
WAL compression --- has that changed?
For background processes, like VACUUM, then
On Fri, Dec 12, 2014 at 1:51 PM, Simon Riggs si...@2ndquadrant.com wrote:
What I don't understand is why we aren't working on double buffering,
since that cost would be paid in a background process and would be
evenly spread out across a checkpoint. Plus we'd be able to remove
FPWs altogether,