Hello, Eduardo.
> Why do you use dictionary compression and not zlib/lz4/bzip/any other?
Internally, PostgreSQL already has an LZ77-family algorithm - PGLZ. I didn't
try to replace it, only to supplement it. PGLZ compresses every piece of
data (each JSONB document, in this case) independently. What I did
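The difference from per-row compression alone can be sketched in a few lines. The sketch below is illustrative only: the two-byte code format and the dictionary contents are invented for the example, and zlib stands in for PGLZ; it is not ZSON's actual on-disk encoding.

```python
# Illustrative sketch of ZSON's idea: frequent strings (e.g. repeated JSONB
# keys) are replaced by short codes from a dictionary shared across ALL
# documents, and only then does the per-row compressor run. The code format
# here is made up for the example; it is not ZSON's actual encoding.
import zlib

dictionary = ["customer_id", "order_total", "shipping_address"]  # learned offline
code_of = {s: i for i, s in enumerate(dictionary)}

def encode(doc: str) -> bytes:
    # Substitute each dictionary string with a two-byte marker, then
    # compress the result (zlib stands in for PGLZ here).
    for s, i in code_of.items():
        doc = doc.replace(s, "\x01" + chr(i))
    return zlib.compress(doc.encode())

def decode(blob: bytes) -> str:
    doc = zlib.decompress(blob).decode()
    for s, i in code_of.items():
        doc = doc.replace("\x01" + chr(i), s)
    return doc

doc = '{"customer_id": 42, "order_total": 9.99, "shipping_address": "Main St 1"}'
assert decode(encode(doc)) == doc
```

Because every row is encoded against the same dictionary, commonality *between* rows is exploited even though each row is still compressed independently.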
On 5 October 2016 at 16:58, Aleksander Alekseev wrote:
> What about evolving schema of JSON/JSONB/XML? For instance,
> adding/removing keys in new versions of the application. UPDATE
> COMPRESSION DICTIONARY?
You can add to a dictionary, but not remove things. I'm not
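The reason removal is off the table can be shown with a toy example (plain Python lists, not ZSON's structures): stored rows reference dictionary entries by position, so appending is safe while removal would change what old codes mean.

```python
# Toy illustration of why dictionary entries can be added but never removed:
# compressed rows store codes that index into the dictionary, so removing or
# reordering entries would silently change what existing rows decode to.
dictionary = ["customer_id", "order_total"]
row = [0, 1]  # codes stored in an already-compressed row

dictionary.append("shipping_address")  # safe: existing codes keep their meaning
assert [dictionary[c] for c in row] == ["customer_id", "order_total"]

# By contrast, `del dictionary[0]` would shift every index and corrupt the
# interpretation of all previously stored rows.
```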
> > I could align ZSON to PostgreSQL code style. I only need to run pgindent
> > and write a few comments. Do you think the community would be interested in
> > adding it to /contrib/? I mean, doesn't ZSON solve too specific a problem
> > for this?
>
> CREATE COMPRESSION DICTIONARY
On 4 October 2016 at 16:34, Aleksander Alekseev wrote:
> Hello, Simon.
>
> Thanks for your interest in this project!
>
>> Will you be submitting this to core?
>
> I could align ZSON to PostgreSQL code style. I only need to run pgindent
> and write a few comments. Do you
> > I could align ZSON to PostgreSQL code style. I only need to run pgindent
> > and write a few comments. Do you think the community would be interested in
> > adding it to /contrib/? I mean, doesn't ZSON solve too specific a problem
> > for this?
>
> I find the references to pglz quite
> ~everyone wants lower data storage and wants some kind of compression.
> Can this be made to automatically retrain when analyzing (makes sense?)?
> And create a new dictionary only if it changes compared to the last one.
It's an interesting idea. However, I doubt it could be automated in
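For what it's worth, the "new dictionary only if it changed" part of the suggestion is cheap to express; a minimal sketch with the training step stubbed out (all names hypothetical, not ZSON's API):

```python
# Sketch of Dorian's suggestion: after retraining (e.g. during ANALYZE),
# register a new dictionary version only when the result differs from the
# latest one. Old versions are kept so existing rows remain decodable.
versions = []  # append-only history of dictionary versions

def register(new_dict):
    # Create a new version only when the trained dictionary differs
    # from the current one; return the version id rows would reference.
    if not versions or versions[-1] != new_dict:
        versions.append(list(new_dict))
    return len(versions) - 1

assert register(["a", "b"]) == 0
assert register(["a", "b"]) == 0  # nothing changed, so no new version
assert register(["a", "b", "c"]) == 1
```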
On 10/4/16, Dorian Hoxha wrote:
> On Tue, Oct 4, 2016 at 5:34 PM, Aleksander Alekseev wrote:
>> Hello, Simon.
>>
>> Thanks for your interest in this project!
>>
>> > Will you be submitting this to core?
>>
>> I could align ZSON to PostgreSQL
On Wed, Oct 5, 2016 at 12:34 AM, Aleksander Alekseev wrote:
> I could align ZSON to PostgreSQL code style. I only need to run pgindent
> and write a few comments. Do you think the community would be interested in
> adding it to /contrib/? I mean, doesn't ZSON solve a bit
@Aleksander
~everyone wants lower data storage and wants some kind of compression.
Can this be made to automatically retrain when analyzing (makes sense?)?
And create a new dictionary only if it changes compared to the last one.
On Tue, Oct 4, 2016 at 5:34 PM, Aleksander Alekseev wrote:
Hello, Simon.
Thanks for your interest in this project!
> Will you be submitting this to core?
I could align ZSON to PostgreSQL code style. I only need to run pgindent
and write a few comments. Do you think the community would be interested in
adding it to /contrib/? I mean, doesn't ZSON solve a bit
On Tue, Oct 4, 2016 at 4:20 PM, Simon Riggs wrote:
> On 30 September 2016 at 16:58, Aleksander Alekseev wrote:
>
> > I've just uploaded ZSON extension on GitHub:
> >
> > https://github.com/afiskon/zson
> >
> > ZSON learns from your common JSONB
On 30 September 2016 at 16:58, Aleksander Alekseev wrote:
> I've just uploaded ZSON extension on GitHub:
>
> https://github.com/afiskon/zson
>
> ZSON learns from your common JSONB documents and creates a dictionary
> with strings that are frequently used in all
I like this, seeing that the keys of JSON docs are replicated in every
record.
It makes my old-school DBA sense start to itch.
On Fri, Sep 30, 2016 at 8:58 AM, Aleksander Alekseev <a.aleks...@postgrespro.ru> wrote:
> Hello.
>
> I've just uploaded ZSON extension on GitHub:
>
>
Hello.
I've just uploaded ZSON extension on GitHub:
https://github.com/afiskon/zson
ZSON learns from your common JSONB documents and creates a dictionary
of strings that are frequently used across all documents. After that you
can use the ZSON type to compress documents using this dictionary. When
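The learning step can be pictured roughly as frequency counting over a sample of documents; the sketch below is purely illustrative (candidate selection and threshold are invented here, not ZSON's implementation).

```python
# Rough sketch of "learning": scan sample documents, count how often each
# string (keys and string values) occurs, and keep the frequent ones as
# dictionary entries. The threshold and candidate set are invented here.
from collections import Counter

def candidate_strings(doc):
    # Both keys and string values tend to repeat across documents.
    for k, v in doc.items():
        yield k
        if isinstance(v, str):
            yield v

docs = [
    {"status": "shipped", "warehouse": "east"},
    {"status": "pending", "warehouse": "east"},
    {"status": "shipped", "warehouse": "west"},
]

freq = Counter(s for d in docs for s in candidate_strings(d))
dictionary = [s for s, n in freq.most_common() if n >= 2]
assert "status" in dictionary and "pending" not in dictionary
```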