On Thu, Jan 16, 2014 at 12:49 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Jan 15, 2014 at 7:28 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> Unpatched
>> -------------------
>>                  testname                 | wal_generated |     duration
>> -----------------------------------------+---------------+------------------
>>  one short and one long field, no change |    1054923224 |  33.101135969162
>>
>> After pgrb_delta_encoding_v4
>> ---------------------------------------------
>>                  testname                 | wal_generated |     duration
>> -----------------------------------------+---------------+------------------
>>  one short and one long field, no change |     877859144 | 30.6749138832092
>>
>> Temporary Changes
>> (Revert Max Chunksize = 4 and logic of finding longer match)
>> -------------------------------------------------------------
>>                  testname                 | wal_generated |     duration
>> -----------------------------------------+---------------+------------------
>>  one short and one long field, no change |     677337304 | 25.4048750400543
>
> Sure, but watch me not care.
>
> If we're interested in taking advantage of the internal
> compressibility of tuples, we can do a lot better than this patch.  We
> can compress the old tuple and the new tuple.  We can compress
> full-page images.  We can compress inserted tuples.  But that's not
> the point of this patch.
>
> The point of *this* patch is to exploit the fact that the old and new
> tuples are likely to be very similar, NOT to squeeze out every ounce
> of compression from other sources.

   Okay, got your point.
   Another minor thing: in the latest patch, which I sent yesterday, I
   modified chunk formation so that if the data at the end of the string
   contains no special pattern and is shorter than the max chunk size, we
   still treat it as a chunk. The reason is that, for example, a 104-byte
   string containing no special bit pattern would otherwise form just one
   64-byte chunk and leave the remaining 40 bytes out, missing the chance
   to compress that data.
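   To make that concrete, here is a minimal sketch of the tail-chunk
   handling in C. PGRB_MAX_CHUNK_SIZE, is_special_pattern, and
   form_chunks are illustrative placeholders, not the identifiers used
   in the patch:

#include <stdbool.h>

#define PGRB_MAX_CHUNK_SIZE 64        /* placeholder value */

/* Placeholder for the patch's special bit-pattern test. */
static bool
is_special_pattern(unsigned char c)
{
    return (c & 0x1f) == 0;           /* illustrative only */
}

/*
 * Split "len" bytes at "data" into chunk lengths.  A chunk normally
 * ends at a byte matching the special pattern or at the max chunk
 * size; the point above is that a trailing run which hits neither
 * boundary is still emitted as a (short) final chunk.
 */
static int
form_chunks(const unsigned char *data, int len, int *chunk_lens)
{
    int         nchunks = 0;
    int         start = 0;

    while (start < len)
    {
        int         end = start;

        /* scan forward until the special pattern or the size cap */
        while (end < len && end - start < PGRB_MAX_CHUNK_SIZE)
        {
            if (is_special_pattern(data[end]))
            {
                end++;                /* boundary byte ends the chunk */
                break;
            }
            end++;
        }

        /* emit the chunk, including a short tail with no boundary */
        chunk_lens[nchunks++] = end - start;
        start = end;
    }
    return nchunks;
}

   With this, 104 pattern-free bytes yield a 64-byte chunk followed by
   a 40-byte tail chunk, so the tail stays eligible for matching.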



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

