On Sep 4, 2014, at 7:26 PM, Jan Wieck <j...@wi3ck.info> wrote:

> This is only because the input data was exact copies of the same strings over 
> and over again. PGLZ can compress strings that are slightly less identical and 
> of varying lengths too. Not as well, but well enough. But I suspect such 
> input data would make it fail again, even with lengths.

We had a bit of discussion about JSONB compression at PDXPUG Day this morning. 
Josh polled the room, and about half thought we should apply the patch for 
better compression, while the other half seemed to want faster access 
operations. (Some folks no doubt voted for both.) But in the ensuing 
discussion, I started to think that maybe we should leave it as it is, for two 
reasons:

1. There has been a fair amount of discussion about ways to better deal with 
this in future releases, such as hints to TOAST about how to compress, or the 
application of different compression algorithms (or pluggable compression). I’m 
assuming that leaving it as-is does not remove those possibilities.

2. The major advantage of JSONB is fast access operations. If those are not as 
important for a given use case as storage space, there’s still the JSON type, 
which *does* compress reasonably well. IOW, we already have a JSON alternative 
that compresses well. So why make the same (or similar) trade-offs with JSONB?
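
For anyone who wants to measure that trade-off on their own data, here is a 
quick sketch (the temp table and the generated document are invented for 
illustration). pg_column_size() reports the stored size after any TOAST 
compression, so the values have to go through a real table first:

    CREATE TEMP TABLE doc_size_test (as_json json, as_jsonb jsonb);

    -- One document with a few thousand short elements, the shape whose
    -- header reportedly defeats pglz's quick compressibility check in
    -- the current jsonb format.
    INSERT INTO doc_size_test
    SELECT doc::json, doc::jsonb
    FROM (
        SELECT '[' || string_agg(format('"field%s"', i), ',') || ']' AS doc
        FROM generate_series(1, 2000) AS i
    ) AS s;

    SELECT pg_column_size(as_json)  AS json_bytes,
           pg_column_size(as_jsonb) AS jsonb_bytes
    FROM doc_size_test;

As I read the numbers posted earlier in this thread, json_bytes comes out far 
smaller than jsonb_bytes for input shaped like this, while documents with 
fewer, larger values compress about the same either way.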

Just my $0.02. I would like to see consensus on this soon, though, as I am 
eager to get 9.4 and JSONB regardless of the outcome!

Best,

David
