Hello,

A bug was introduced in the latest versions of the patch: the order of the
parameters passed to pglz_decompress was wrong.
Please find attached a patch with the following correction.

Original code:
+               if (pglz_decompress(block_image, record->uncompressBuf,
+                                   bkpb->bkp_len, bkpb->bkp_uncompress_len) == 0)

Correction:
+               if (pglz_decompress(block_image, bkpb->bkp_len,
+                                   record->uncompressBuf, bkpb->bkp_uncompress_len) == 0)
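
For reference, the refactored pglz API in src/common/pg_lzcompress.h takes the
source buffer and its length first, then the destination buffer and the
expected uncompressed size; roughly the following, though please double-check
against the tree:

extern int32 pglz_decompress(const char *source, int32 slen,
                             char *dest, int32 rawsize);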


>For example, here you should not have a blank line between the function
>definition and the variable declarations:
>+{
>+
>+    int orig_len = BLCKSZ - hole_length;
>This is incorrect as well, in two places:
>if(hole_length != 0)
>There should be a space between the "if" and its condition in parentheses.

The code formatting mistakes noted above have also been corrected.

Thank you,
Rahila Syed

-----Original Message-----
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Syed, Rahila
Sent: Monday, February 09, 2015 6:58 PM
To: Michael Paquier; Fujii Masao
Cc: PostgreSQL mailing lists
Subject: Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

Hello,

>> Do we always need extra two bytes for compressed backup block?
>> ISTM that extra bytes are not necessary when the hole length is zero.
>> In this case the length of the original backup block (i.e.,
>> uncompressed) must be BLCKSZ, so we don't need to save the original 
>> size in the extra bytes.

>Yes, we would need an additional bit to identify that. We could steal it from
>the length field in XLogRecordBlockImageHeader.

This is implemented in the attached patch by dividing the length field as follows:
        uint16  length:15,
                with_hole:1;

>"2" should be replaced with the macro variable indicating the size of 
>extra header for compressed backup block.
The macro SizeOfXLogRecordBlockImageCompressionInfo is now used instead of the bare constant 2.
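
Roughly, the header layout then looks like this (the macro definition below is
only illustrative; see the attached patch for the exact code):

typedef struct XLogRecordBlockImageHeader
{
    uint16  length:15,          /* length of the (possibly compressed) image */
            with_hole:1;        /* 1 if the page image has a hole */
    uint16  hole_offset:15,     /* number of bytes before the hole */
            is_compressed:1;    /* 1 if the image is compressed */
} XLogRecordBlockImageHeader;

/* size of the extra bytes holding the uncompressed length of a compressed image */
#define SizeOfXLogRecordBlockImageCompressionInfo sizeof(uint16)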

>You can refactor XLogCompressBackupBlock() and move all the above code 
>to it for more simplicity
This is also implemented in the attached patch.
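
In outline, the refactored helper looks roughly like this (names and details
may differ slightly from the attached patch; compression_scratch is the static
scratch buffer used for hole removal):

static bool
XLogCompressBackupBlock(char *page, uint16 hole_offset, uint16 hole_length,
                        char *dest, uint16 *dlen)
{
    int32       orig_len = BLCKSZ - hole_length;
    int32       len;
    char       *source = page;

    if (hole_length != 0)
    {
        /* Copy the page into the scratch buffer, leaving out the hole */
        memcpy(compression_scratch, page, hole_offset);
        memcpy(compression_scratch + hole_offset,
               page + (hole_offset + hole_length),
               BLCKSZ - (hole_length + hole_offset));
        source = compression_scratch;
    }

    /* pglz_compress() returns -1 if the data could not be compressed */
    len = pglz_compress(source, orig_len, dest, PGLZ_strategy_default);
    if (len < 0)
        return false;

    *dlen = (uint16) len;
    return true;
}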

Thank you,
Rahila Syed


-----Original Message-----
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Friday, February 06, 2015 6:00 PM
To: Fujii Masao
Cc: Syed, Rahila; PostgreSQL mailing lists
Subject: Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

On Fri, Feb 6, 2015 at 3:03 PM, Fujii Masao wrote:
> Do we always need extra two bytes for compressed backup block?
> ISTM that extra bytes are not necessary when the hole length is zero.
> In this case the length of the original backup block (i.e.,
> uncompressed) must be BLCKSZ, so we don't need to save the original 
> size in the extra bytes.

Yes, we would need an additional bit to identify that. We could steal it from
the length field in XLogRecordBlockImageHeader.

> Furthermore, when fpw compression is disabled and the hole length is
> zero, we seem to be able to save one byte from the header of the backup
> block. Currently we use 4 bytes for the header: 2 bytes for the length
> of the backup block, 15 bits for the hole offset and 1 bit for the flag
> indicating whether the block is compressed or not. But in that case, the
> length of the backup block doesn't need to be stored because it must be
> BLCKSZ. Shouldn't we optimize the header in this way? Thoughts?

If we do that, it's something to tackle on HEAD even before this patch, because
you could use the 16th bit of the first 2 bytes of XLogRecordBlockImageHeader
for the necessary sanity checks, and actually reduce the record not by 1 byte
but by 2 bytes, since the hole-related data is not necessary either. I imagine
that a patch optimizing this wouldn't be that hard to write.
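
As a rough illustration of the idea (hypothetical flag name, not code from any
patch in this thread): with one spare bit marking "full block image, no hole",
neither the length nor the hole data needs to be stored for that case.

#define BKPIMAGE_FULL_NO_HOLE   0x8000  /* hypothetical "full block, no hole" bit */

static uint16
block_image_length(uint16 header_bits)
{
    /* With the flag set, the image length is implicitly BLCKSZ. */
    if (header_bits & BKPIMAGE_FULL_NO_HOLE)
        return BLCKSZ;
    return header_bits & 0x7FFF;        /* otherwise the length is stored */
}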

> +                int page_len = BLCKSZ - hole_length;
> +                char *scratch_buf;
> +                if (hole_length != 0)
> +                {
> +                    scratch_buf = compression_scratch;
> +                    memcpy(scratch_buf, page, hole_offset);
> +                    memcpy(scratch_buf + hole_offset,
> +                           page + (hole_offset + hole_length),
> +                           BLCKSZ - (hole_length + hole_offset));
> +                }
> +                else
> +                    scratch_buf = page;
> +
> +                /* Perform compression of block */
> +                if (XLogCompressBackupBlock(scratch_buf,
> +                                            page_len,
> +                                            regbuf->compressed_page,
> +                                            &compress_len))
> +                {
> +                    /* compression is done, add record */
> +                    is_compressed = true;
> +                }
>
> You can refactor XLogCompressBackupBlock() and move all the above code 
> to it for more simplicity.

Sure.
--
Michael


Attachment: Support-compression-for-full-page-writes-in-WAL_v17.patch
Description: Support-compression-for-full-page-writes-in-WAL_v17.patch

