On Mon, Dec 2, 2013 at 7:40 PM, Haribabu kommi
<haribabu.ko...@huawei.com> wrote:
> On 29 November 2013 03:05 Robert Haas wrote:
>> On Wed, Nov 27, 2013 at 9:31 AM, Amit Kapila <amit.kapil...@gmail.com>
>> wrote:
>
> I tried modifying the existing patch to support dynamic rollup as follows:
> it applies for every 32 bytes of mismatch between the old and new tuple,
> and it resets back whenever a match is found.
>
> 1. pglz-with-micro-optimization-compress-using-newdata-5:
>
> Adds all of the old tuple data to the history and then checks for matches
> from the new tuple. For every 32 bytes of mismatch, it checks for a match
> only once every 2 bytes, and this repeats until a match is found or the
> end of the data is reached.
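
(A rough, self-contained C sketch of one possible reading of the dynamic
rollup described above; find_match(), the exact stride policy, and the
sample strings are hypothetical stand-ins, not the patch code.)

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical brute-force match finder: length of the longest match of
 * input[pos..] inside hist[], standing in for the real history lookup.
 */
static int
find_match(const char *hist, int hist_len,
           const char *input, int input_len, int pos, int *match_off)
{
    int     best = 0;

    for (int off = 0; off < hist_len; off++)
    {
        int     len = 0;

        while (off + len < hist_len && pos + len < input_len &&
               hist[off + len] == input[pos + len])
            len++;
        if (len > best)
        {
            best = len;
            *match_off = off;
        }
    }
    return best;
}

int
main(void)
{
    const char *oldt = "the quick brown fox jumps over the lazy dog";
    const char *newt = "a quick brown fox leaps over one lazy dog!!";
    int         new_len = (int) strlen(newt);
    int         pos = 0;
    int         miss_run = 0;   /* consecutive unmatched (literal) bytes */
    int         stride = 1;     /* probe the history once every 'stride' bytes */
    int         skip = 0;       /* bytes left to emit before the next probe */

    while (pos < new_len)
    {
        int     off = 0;
        int     len = 0;

        if (skip == 0)
            len = find_match(oldt, (int) strlen(oldt), newt, new_len, pos, &off);

        if (len >= 3)
        {
            printf("COPY off=%d len=%d\n", off, len);   /* copy from old data */
            pos += len;
            miss_run = 0;
            stride = 1;         /* a match resets the rollup */
            skip = 0;
        }
        else
        {
            printf("LIT  '%c'\n", newt[pos]);           /* unmatched byte */
            pos++;
            skip = (skip > 0) ? skip - 1 : stride - 1;
            if (++miss_run >= 32)
                stride = 2;     /* 32 bytes of mismatch: probe only every 2 bytes */
        }
    }
    return 0;
}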
>
> 2. pglz-with-micro-optimization-compress-using-newdata_snappy_hash-1:
>
> Adds only the first byte of the old tuple data to the history and then
> checks for a match from the new tuple. If a match is found, the next
> unmatched byte from the old tuple is added to the history and the process
> repeats.
>
> If no match is found, it adds the next byte of the old tuple to the
> history, followed by the unmatched byte from the new tuple data.
>
> In this case the performance is good, but if the new data contains any
> forward references into the old data then it will not compress the data.
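
(Again a rough, self-contained C sketch, this time of option 2's
incremental history population; hist_add(), hist_match(), and the sample
strings are hypothetical stand-ins, not the patch code. Because only the
old bytes fed in so far can be matched, a forward reference into old data
that has not yet been added to the history is simply missed.)

#include <stdio.h>
#include <string.h>

#define HIST_MAX 1024

static char hist[HIST_MAX];     /* bytes added to the history so far */
static int  hist_len = 0;

static void
hist_add(char c)
{
    if (hist_len < HIST_MAX)
        hist[hist_len++] = c;
}

/* Hypothetical matcher: longest match of input[pos..] within the history. */
static int
hist_match(const char *input, int input_len, int pos, int *match_off)
{
    int     best = 0;

    for (int off = 0; off < hist_len; off++)
    {
        int     len = 0;

        while (off + len < hist_len && pos + len < input_len &&
               hist[off + len] == input[pos + len])
            len++;
        if (len > best)
        {
            best = len;
            *match_off = off;
        }
    }
    return best;
}

int
main(void)
{
    const char *oldt = "the quick brown fox jumps over the lazy dog";
    const char *newt = "the quick red fox jumps over the lazy cat";
    int         old_len = (int) strlen(oldt);
    int         new_len = (int) strlen(newt);
    int         next_old = 0;   /* next old-tuple byte to feed into the history */
    int         pos = 0;

    hist_add(oldt[next_old++]); /* start with only the first old byte */

    while (pos < new_len)
    {
        int     off = 0;
        int     len = hist_match(newt, new_len, pos, &off);

        if (len >= 3)
        {
            printf("COPY off=%d len=%d\n", off, len);
            pos += len;
            if (next_old < old_len)     /* match: add the next old byte */
                hist_add(oldt[next_old++]);
        }
        else
        {
            printf("LIT  '%c'\n", newt[pos]);
            if (next_old < old_len)     /* no match: add the next old byte ... */
                hist_add(oldt[next_old++]);
            hist_add(newt[pos]);        /* ... followed by the unmatched new byte */
            pos++;
        }
    }
    return 0;
}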

The performance data still has the same problem: on fast disks (the tmpfs
data) it is low.
I am already working on a chunk-wise implementation to see if it can improve
the situation; please wait for that, and then we can decide on the best way
to proceed.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

