On Wed, Apr 29, 2015 at 9:12 PM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
The whole script (doing a lot of estimates) takes 1:50 with pglz and only
1:25 with lz4. That's ~25-30% improvement.
Still pretty good.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
Sure, it's not an ultimate solution, but it might help a bit. I do have
other ideas for how to optimize this, but in the planner every millisecond
counts. Looking at 'perf top' and seeing pglz_decompress() in the top 3.
I
On 30/04/15 00:44, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
Sure, it's not an ultimate solution, but it might help a bit. I do have
other ideas for how to optimize this, but in the planner every
Robert Haas robertmh...@gmail.com writes:
On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
Sure, it's not an ultimate solution, but it might help a bit. I do have
other ideas for how to optimize this, but in the planner every millisecond
counts. Looking at 'perf
Hi,
On 04/29/15 23:54, Robert Haas wrote:
On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
Sure, it's not an ultimate solution, but it might help a bit. I do have
other ideas for how to optimize this, but in the planner every millisecond
counts. Looking at 'perf
On Wed, Apr 29, 2015 at 6:55 PM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
I'm not convinced that not compressing the data is a good idea - I suspect it
would only move the time to TOAST, increase memory pressure (in general and
in shared buffers). But I think that using a more efficient
On 04/30/15 02:42, Robert Haas wrote:
On Wed, Apr 29, 2015 at 6:55 PM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
I'm not convinced that not compressing the data is a good idea - I suspect it
would only move the time to TOAST, increase memory pressure (in general and
in shared buffers). But
On 04/20/15 05:07, Andres Freund wrote:
Hi,
On 2015-04-19 22:51:53 +0200, Tomas Vondra wrote:
The reason why I'm asking about this is the multivariate statistics patch -
while optimizing the planning overhead, I realized that a considerable amount
of time is spent decompressing the statistics
On 04/19/2015 11:51 PM, Tomas Vondra wrote:
Hi there,
in the past we've repeatedly discussed the option of using a different
compression algorithm (e.g. lz4), but every time the discussion died off
because of fears of possible patent issues ([1], [2], and many other
threads). Have we decided it's
On Mon, Apr 20, 2015 at 5:51 AM, Tomas Vondra wrote:
I'm a bit confused though, because I've noticed various other FOSS projects
adopting lz4 over the past few years, and I've yet to find a project voicing
the same concerns about patents. So either they're reckless or we're
excessively paranoid.
Hi,
On 2015-04-19 22:51:53 +0200, Tomas Vondra wrote:
The reason why I'm asking about this is the multivariate statistics patch -
while optimizing the planning overhead, I realized that a considerable amount
of time is spent decompressing the statistics (serialized as bytea), and
using an
* Michael Paquier (michael.paqu...@gmail.com) wrote:
On Mon, Apr 20, 2015 at 5:51 AM, Tomas Vondra wrote:
I'm a bit confused though, because I've noticed various other FOSS projects
adopting lz4 over the past few years, and I've yet to find a project voicing
the same concerns about patents.
12 matches