Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
> One question regarding the proposed patch though - if I get it right
> (just looking at the diff), it simply limits the output to the first
> 100 child contexts at each level independently.  So if each of those
> 100 child contexts has >100 child contexts of its own, we get 100x100
> lines.  And so on, if the hierarchy is deeper.  This is probably not
> addressable without introducing some global counter of printed
> contexts, and it may not be an issue at all (all the cases I can
> remember involved a single huge context or many sibling contexts).
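[For illustration, the per-level truncation under discussion might look like the sketch below.  The type and names (MemoryContextSketch, MAX_CHILDREN_PER_LEVEL, stats_sketch) are hypothetical simplifications, not PostgreSQL's actual MemoryContextStats code: each parent caps its own child list independently, so full fan-out at two levels can still emit on the order of 100x100 lines.]

```c
#include <stdio.h>

/* Cap on children printed per parent; the value and name here are
 * illustrative, standing in for the patch's per-level limit. */
#define MAX_CHILDREN_PER_LEVEL 100

typedef struct MemoryContextSketch
{
    const char *name;
    struct MemoryContextSketch **children;
    int         nchildren;
} MemoryContextSketch;

/* Recursively print the context tree, truncating each parent's child
 * list independently.  Returns the number of lines emitted, which makes
 * the multiplicative worst case easy to see. */
static int
stats_sketch(const MemoryContextSketch *ctx, int depth)
{
    int lines = 1;

    printf("%*s%s\n", depth * 2, "", ctx->name);
    for (int i = 0; i < ctx->nchildren; i++)
    {
        if (i >= MAX_CHILDREN_PER_LEVEL)
        {
            /* Summarize the remaining siblings in one line instead of
             * printing each child context. */
            printf("%*s%d more child contexts not printed\n",
                   (depth + 1) * 2, "", ctx->nchildren - i);
            lines++;
            break;
        }
        lines += stats_sketch(ctx->children[i], depth + 1);
    }
    return lines;
}
```

[A parent with 150 leaf children prints 102 lines under this scheme: its own header, 100 children, and one truncation summary; since the cap applies per parent, nothing bounds the total across levels.]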
Right.  The situation Stefan was complaining of almost certainly
involved a huge number of children of the same context.  This patch
would successfully abbreviate that case, no matter where in the context
tree it occurred.  In principle, if you were getting that sort of
expansion at multiple levels of the context tree concurrently, you
could still get a mighty long context dump ... but I really doubt that
would happen in practice.  (And if it did happen, an overall limit on
the number of contexts printed would hide the fact that it was
happening, which wouldn't be desirable.)

>> One thing we could consider doing to improve the odds that it's fine
>> would be to rearrange things so that child contexts of the same
>> parent are more likely to be "similar" --- for example, break out
>> all relcache entries to be children of a RelCacheContext instead of
>> the generic CacheMemoryContext, likewise for cached plans, etc.  But
>> I'm not yet convinced that'd be worth the trouble.

> That'd be nice, but I see that as an independent improvement - it
> might improve the odds for internal contexts, but what about contexts
> coming from user code (e.g. custom aggregates)?

Yeah, cases like custom aggregates would be hard to classify.

			regards, tom lane


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers