On Tue, Nov 13, 2012 at 05:50:08PM -0500, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
> > Yeah.  The thing that concerns me is that I think we have a pretty
> > decent number of memory contexts where the expected number of
> > allocations is very small ... and we have the context *just in case*
> > we do more than that in certain instances.  I've seen profiles where
> > the setup/teardown costs of memory contexts are significant ... which
> > doesn't mean that those examples would perform better with fewer
> > memory contexts, but it's enough to make me pause for thought.
> 
> So, for my 2c, I'm on the other side of this, personally.  We have
> memory contexts for more-or-less exactly this issue.  It's one of the
> great things about PG - it's resilient and very unlikely to have large or
> bad memory leaks in general, much of which can, imv, be attributed to
> our use of memory contexts.

If the problem is that we create memory context overhead which is not
necessary in many cases, perhaps we can reduce that overhead somehow.
IIRC we have separate functions for resetting a context and for freeing
it entirely.  If there were a quick test we could do such that resetting
a context did nothing unless at least (say) 16k had been allocated, that
might reduce the cost for many very small allocations.
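
To make the shape of that concrete, here is a standalone toy arena
(emphatically not the real aset.c structures; every name here, and the
fixed 32-entry spill array, is invented for illustration, and alignment
is ignored):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy arena; invented names only, nothing to do with aset.c. */
    typedef struct Arena
    {
        char   *keeper;         /* small block kept across resets */
        size_t  keeper_size;
        size_t  used;           /* bytes handed out from keeper */
        void   *spill[32];      /* overflow allocations from malloc */
        int     nspill;
    } Arena;

    #define CHEAP_RESET_LIMIT (16 * 1024)

    static Arena *
    arena_create(size_t keeper_size)
    {
        Arena  *a = malloc(sizeof(Arena));

        a->keeper = malloc(keeper_size);
        a->keeper_size = keeper_size;
        a->used = 0;
        a->nspill = 0;
        return a;
    }

    static void *
    arena_alloc(Arena *a, size_t size)
    {
        void   *p;

        if (a->used + size <= a->keeper_size)
        {
            p = a->keeper + a->used;    /* ignores alignment */
            a->used += size;
            return p;
        }
        assert(a->nspill < 32);
        p = malloc(size);
        a->spill[a->nspill++] = p;
        return p;
    }

    static void
    arena_reset(Arena *a)
    {
        /*
         * The quick test: if nothing spilled past the keeper block and
         * we stayed under the threshold, resetting is just rewinding a
         * pointer, with no free() traffic and no clobbering pass.
         */
        if (a->nspill == 0 && a->used < CHEAP_RESET_LIMIT)
        {
            a->used = 0;
            return;
        }

        /* Full reset: give back spill blocks and clobber the keeper
         * (standing in for the bookkeeping a real reset has to do). */
        for (int i = 0; i < a->nspill; i++)
            free(a->spill[i]);
        a->nspill = 0;
        memset(a->keeper, 0x7f, a->used);
        a->used = 0;
    }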

Of course, unless someone comes up with a way to measure the cost, this
is all handwaving, but it might be a nice project for someone interested
in learning to hack postgres.
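
For a first, very rough number, a driver like this around the toy above
would do (clock_gettime is POSIX; older glibc wants -lrt):

    #include <stdio.h>
    #include <time.h>

    /*
     * Crude timing driver: lots of alloc/reset cycles on a context
     * that stays small, so the cheap path above always fires.
     */
    int
    main(void)
    {
        Arena  *a = arena_create(8192);
        long    iters = 10 * 1000 * 1000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
        {
            (void) arena_alloc(a, 64);
            arena_reset(a);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%.1f ns per alloc+reset cycle\n",
               ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / iters);
        return 0;
    }

Run it once as-is and once with the quick test commented out, and you
get a first idea of what the test is worth; a real measurement would of
course have to profile inside the backend.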

Have a nice day,
-- 
Martijn van Oosterhout   <klep...@svana.org>   http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer
