On Wed, Apr 29, 2015 at 6:55 PM, Tomas Vondra <[email protected]> wrote:
> I'm not convinced not compressing the data is a good idea - I suspect it
> would only move the time to TOAST and increase memory pressure (in general
> and in shared buffers). But I think that using a more efficient compression
> algorithm would help a lot.
>
> For example, when profiling the multivariate stats patch (with multiple
> quite large histograms), pglz_decompress is #1 in the profile, occupying
> more than 30% of the time. After replacing it with lz4, the data are a bit
> larger, but it drops to ~0.25% in the profile and planning time drops
> proportionally.
That seems to imply a >100x improvement in decompression speed. Really???

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
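[Editor's note: the arithmetic behind the ">100x" reading can be sketched from the shares quoted in the profile. This is an illustration of the inference, not a measurement; the share values are the approximate figures from the message above.]

```python
# Rough arithmetic behind the ">100x" claim, using the profile shares quoted
# above (~30% for pglz_decompress, ~0.25% for lz4). Since total planning time
# also dropped, the implied per-call speedup is at least the ratio of shares.
pglz_share = 0.30   # fraction of planning time spent in pglz_decompress
lz4_share = 0.0025  # fraction of the (smaller) planning time spent in lz4

implied_speedup = pglz_share / lz4_share
print(f"implied decompression speedup: at least ~{implied_speedup:.0f}x")
```

Because the lz4 share is measured against a total that itself shrank, 120x is a lower bound under these numbers, consistent with the ">100x" reading.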
