On 08/22/2015 06:25 AM, Tomas Vondra wrote:
> On 08/21/2015 08:37 PM, Tom Lane wrote:
>> Tomas Vondra writes:
>>> I also don't think logging just a subset of the stats is a lost cause.
>>> Sure, we can't know which of the lines are important, but for example
>>> logging just the top-level contexts with a summary of the child
>>> contexts would be OK.
Tomas Vondra writes:
> One question regarding the proposed patch though - if I get it right
> (just looking at the diff), it simply limits the output to the first 100
> child contexts at each level independently. So if each of those 100
> child contexts has >100 child contexts of its own, we get 1...
On 08/22/2015 06:06 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> Couldn't we make it a bit smarter to handle even cases like this? For
>> example, we might first count/sum the child contexts, and then print
>> either all child contexts (if there are only a few of them) or just
>> those that are >5% of the ...
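The ">5% of the total" idea suggested above can be sketched in C. This is an illustrative toy, not PostgreSQL source: the `Ctx` struct, the function name, and the small-list cutoff of 10 are all assumptions made for the example; the point is just "sum the children first, then print only the significant ones and fold the rest into one summary line."

```c
#include <stdio.h>

/* Toy stand-in for a memory context's per-child stats (hypothetical). */
typedef struct Ctx {
    const char *name;
    size_t total_bytes;
} Ctx;

/*
 * Print children holding more than `frac` of the combined size; fold the
 * rest into a single summary line. If there are only a few children
 * (here: <= 10, an arbitrary cutoff), print them all. Returns the number
 * of output lines, so the effect of the filtering is easy to see.
 */
static int print_significant_children(const Ctx *children, int n, double frac)
{
    size_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += children[i].total_bytes;

    int lines = 0;
    int skipped = 0;
    size_t skipped_bytes = 0;
    for (int i = 0; i < n; i++) {
        if (n <= 10 || (double) children[i].total_bytes > frac * (double) sum) {
            printf("  %s: %zu bytes\n", children[i].name, children[i].total_bytes);
            lines++;
        } else {
            skipped++;
            skipped_bytes += children[i].total_bytes;
        }
    }
    if (skipped > 0) {
        printf("  %d more child contexts containing %zu bytes\n",
               skipped, skipped_bytes);
        lines++;
    }
    return lines;
}
```

With, say, one 10 kB child and eleven 10-byte children at a 5% threshold, this emits two lines instead of twelve: the big child plus one summary line for the rest.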
Tomas Vondra writes:
> On 08/21/2015 08:37 PM, Tom Lane wrote:
>> ... But suppose we add a parameter to memory context stats
>> collection that is the maximum number of child contexts to print *per
>> parent context*. If there are more than that, summarize the rest as per
>> your suggestion.
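Tom Lane's per-parent cap can be sketched as a small recursive dump over a context tree. Again, this is a hedged illustration, not PostgreSQL's actual implementation: the `Node` struct and function names are invented for the example. The key behavior is that each parent prints at most `max_children` children in full and summarizes the remainder, so the cap applies at every level rather than only at the top.

```c
#include <stdio.h>

/* Toy context-tree node (hypothetical; not PostgreSQL's MemoryContext). */
typedef struct Node {
    const char *name;
    size_t bytes;
    struct Node *children;
    int nchildren;
} Node;

/* Total bytes in a node and everything beneath it. */
static size_t subtree_bytes(const Node *n)
{
    size_t total = n->bytes;
    for (int i = 0; i < n->nchildren; i++)
        total += subtree_bytes(&n->children[i]);
    return total;
}

/*
 * Recursively print stats, showing at most `max_children` children per
 * parent and summarizing the rest in one line. Returns the number of
 * lines emitted, which makes the bound on output size visible.
 */
static int dump_stats(const Node *n, int depth, int max_children)
{
    printf("%*s%s: %zu bytes\n", depth * 2, "", n->name, n->bytes);
    int lines = 1;

    int shown = n->nchildren < max_children ? n->nchildren : max_children;
    for (int i = 0; i < shown; i++)
        lines += dump_stats(&n->children[i], depth + 1, max_children);

    if (shown < n->nchildren) {
        size_t rest = 0;
        for (int i = shown; i < n->nchildren; i++)
            rest += subtree_bytes(&n->children[i]);
        printf("%*s%d more child contexts containing %zu total bytes\n",
               (depth + 1) * 2, "", n->nchildren - shown, rest);
        lines++;
    }
    return lines;
}
```

Because the limit is enforced per parent, a pathological tree of millions of sibling contexts (like the prepared-statement case that started this thread) produces at most `max_children + 1` lines per level instead of one line per context.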
On 08/21/2015 08:37 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> I also don't think logging just a subset of the stats is a lost cause.
>> Sure, we can't know which of the lines are important, but for example
>> logging just the top-level contexts with a summary of the child contexts
>> would be OK.
>
> I thought ...
I wrote:
> I thought a bit more about this. Generally, what you want to know about
> a given situation is which contexts have a whole lot of allocations
> and/or a whole lot of child contexts. What you suggest above won't work
> very well if the problem is buried more than about two levels down in ...
Tomas Vondra writes:
> On 08/20/2015 11:04 PM, Stefan Kaltenbrunner wrote:
>> On 08/20/2015 06:09 PM, Tom Lane wrote:
>>> (The reason I say "lobotomize" is that there's no particularly
>>> good reason to assume that the first N lines will tell you what you
>>> need to know. And the filter rule would ...
Hi,

On 08/20/2015 11:04 PM, Stefan Kaltenbrunner wrote:
> On 08/20/2015 06:09 PM, Tom Lane wrote:
>> Stefan Kaltenbrunner writes:
>>> I wonder if we should have a default of capping the dump to say 1k lines
>>> or such and only optionally do a full one.
>>
>> -1. It's worked like this for the last fifteen years or thereabouts, ...
On 08/20/2015 06:09 PM, Tom Lane wrote:
> Stefan Kaltenbrunner writes:
>> I wonder if we should have a default of capping the dump to say 1k lines
>> or such and only optionally do a full one.
>
> -1. It's worked like this for the last fifteen years or thereabouts,
> and you're the first one to complain. ...
Stefan Kaltenbrunner writes:
> I wonder if we should have a default of capping the dump to say 1k lines
> or such and only optionally do a full one.

-1. It's worked like this for the last fifteen years or thereabouts,
and you're the first one to complain. I suspect some weirdness in
your logging ...
On 08/20/2015 08:51 AM, Stefan Kaltenbrunner wrote:
> This is 9.1.14 on Debian Wheezy/amd64 fwiw - but I don't think we have
> made relevant changes in more recent versions.

It seems we may also want to consider a way to drop those prepared
queries after a period of non-use.

JD

regards ...
Hi all!

We just had a case of a very long-running process of ours that does a
lot of prepared statements through Perl's DBD::Pg, running into:

https://rt.cpan.org/Public/Bug/Display.html?id=88827

This resulted in millions of prepared statements created, but not
removed, in the affected ...