On Fri, Sep 25, 2015 at 11:37 AM, Peter Geoghegan <p...@heroku.com> wrote:
>> So, as I understand it: if the system runs low on memory for an
>> extended period, and/or the file grows beyond 1GB (MaxAlloc), garbage
>> collection stops entirely, meaning it starts leaking disk space until
>> a manual intervention.
> I don't think that there is much more to discuss here: this is a bug.
> I will try and write a patch to fix it shortly.
I should add that it only leaks disk space at the rate at which new
queries are observed that are never stored within pg_stat_statements
proper (due to an error originating in the planner or something
similar -- they remain "sticky" entries). The reason we haven't heard
far more problem reports is that the leak usually doesn't get out of
hand in the first place.
Come to think of it, you'd have to repeatedly see new queries that
are never "unstickied"; if you run substantively the same query as an
error-during-planning "sticky" entry, it will still probably be able
to use that existing entry (it will become "unstickied" by this second
execution of what the fingerprinting logic considers to be the same
query).

In short, you have to have just the right workload to hit the bug.
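To make the "same query" point concrete, here is a toy sketch of the
idea behind query fingerprinting. This is NOT pg_stat_statements' real
jumbling algorithm -- the real one hashes the post-parse-analysis tree,
not the query text -- it just illustrates why two queries differing only
in constants collapse into one entry, so a later successful execution
can "unsticky" the entry left by an earlier failed one:

```python
import hashlib
import re

def toy_fingerprint(query: str) -> str:
    """Toy stand-in for query jumbling: normalize away constants,
    then hash.  (pg_stat_statements actually hashes the parse tree.)"""
    # Replace string and numeric literals with a placeholder, so
    # queries differing only in constants map to the same fingerprint.
    normalized = re.sub(r"'[^']*'", "?", query)
    normalized = re.sub(r"\b\d+\b", "?", normalized)
    normalized = " ".join(normalized.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

# Substantively the same query -> same entry (can be "unstickied"):
assert toy_fingerprint("SELECT * FROM t WHERE id = 1") == \
       toy_fingerprint("SELECT * FROM t WHERE id = 42")
# A structurally different query -> its own (possibly sticky) entry:
assert toy_fingerprint("SELECT * FROM t WHERE id = 1") != \
       toy_fingerprint("SELECT name FROM t WHERE id = 1")
```

So the workload that hits the bug is one where erroring queries keep
arriving with genuinely distinct fingerprints, never a matching
successful execution.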
Sent via pgsql-hackers mailing list