On 04/20/15 05:07, Andres Freund wrote:
> Hi,
>
> On 2015-04-19 22:51:53 +0200, Tomas Vondra wrote:
>> The reason why I'm asking about this is the multivariate statistics
>> patch - while optimizing the planning overhead, I realized that a
>> considerable amount of time is spent decompressing the statistics
>> (serialized as bytea), and that using an algorithm with better
>> decompression performance (lz4 comes to mind) would help a lot. The
>> statistics may be a few tens or hundreds of kB, and in the planner
>> every millisecond counts.
>
> I've a hard time believing that a different compression algorithm is
> going to be the solution for that. Decompressing, quite possibly
> repeatedly, several hundreds of kB during planning isn't going to be
> fun. Even with a good, fast, compression algorithm.
Sure, it's not a complete solution, but it might help a bit. I do have
other ideas for how to optimize this, but in the planner every
millisecond counts. Looking at 'perf top', I see pglz_decompress() in
the top 3.
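
For the archives, here is a minimal standalone sketch of the lz4 calls
being discussed, using liblz4's LZ4_compress_default and
LZ4_decompress_safe on a stand-in buffer. The buffer contents and sizes
are made up for illustration; this is not code from the multivariate
statistics patch.

/* Illustrative sketch only; compile with: cc lz4_demo.c -llz4 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lz4.h>

int
main(void)
{
    /* stand-in for statistics serialized as bytea */
    const char *stats = "some serialized statistics bytes ...";
    int raw_len = (int) strlen(stats) + 1;
    int bound = LZ4_compressBound(raw_len);   /* worst-case size */
    char *compressed = malloc(bound);
    char *decompressed = malloc(raw_len);
    int comp_len, dec_len;

    comp_len = LZ4_compress_default(stats, compressed, raw_len, bound);
    if (comp_len <= 0)
    {
        fprintf(stderr, "compression failed\n");
        return 1;
    }

    /* the hot path under discussion: decompression at plan time */
    dec_len = LZ4_decompress_safe(compressed, decompressed,
                                  comp_len, raw_len);
    if (dec_len != raw_len)
    {
        fprintf(stderr, "decompression failed\n");
        return 1;
    }

    printf("raw %d bytes, compressed %d bytes\n", raw_len, comp_len);
    free(compressed);
    free(decompressed);
    return 0;
}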
regards
Tomas