"Fields, Zachary J. (MU-Student)" <[email protected]> writes:
> Thanks for getting back to me! I had already discovered freeing the 
> SPI_tuptable each time, and you are correct, it made a big difference. 
> However, I still was only able to achieve 140+ million before it crashed.

> My current working implementation is to reset the "current" memory context 
> after X number of iterations, which keeps memory in check. This seems like a 
> big hammer for the job, and I'm sure it is not optimal. Speed is very 
> important to my application, so I would prefer to use a scalpel instead of a 
> hatchet. If I am already freeing the SPI_tuptable created by the cursor, 
> where else should I be looking for memory leaks?

There are a lot of places that could be leaking memory: for instance, if
the arrays you're working with are large enough then they could be
toasted, and DatumGetArrayTypeP would then make a working copy that you'd
have to free.  I'm also not sure that you're never leaking "candidate"
structs.
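
As a hedged sketch of that first point (this fragment is mine, not from
the original mail; the function name is made up): DatumGetArrayTypeP may
palloc a detoasted copy, and the pointer comparison below — the same test
the PG_FREE_IF_COPY macro uses — tells you whether there's a copy to free:

    #include "postgres.h"
    #include "utils/array.h"

    static void
    process_array_datum(Datum d)
    {
        /* may palloc a detoasted working copy if d is toasted */
        ArrayType  *arr = DatumGetArrayTypeP(d);

        /* ... work with arr here ... */

        /* free only the detoasted copy, never the original datum */
        if ((Pointer) arr != DatumGetPointer(d))
            pfree(arr);
    }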

The usual theory in Postgres is that a memory context reset is cheaper,
as well as much less leak-prone, than trying to make sure you've pfree'd
each individual allocation.  So we tend to work with short-lived
contexts that can be reset at the end of each tuple cycle --- or in this
example, probably once per cursor fetch would be good.  The main problem
I'd have with what you're doing is that it's not very safe for a
function to reset the whole SPI Proc context: you might be clobbering
some storage that's still in use, eg related to the cursor you're using.
Instead create a context that's a child of that context, switch into
that to do your processing, and reset it every so often.
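
The pattern described above might look roughly like this (my sketch, not
code from the mail; "portal" and FETCH_SIZE are placeholders, and
ALLOCSET_DEFAULT_SIZES is the spelling in recent PostgreSQL versions):

    #include "postgres.h"
    #include "executor/spi.h"
    #include "utils/memutils.h"

    static void
    process_cursor(Portal portal)
    {
        /* child of the current (SPI Proc) context, reset per fetch */
        MemoryContext percall_ctx =
            AllocSetContextCreate(CurrentMemoryContext,
                                  "per-fetch work context",
                                  ALLOCSET_DEFAULT_SIZES);

        for (;;)
        {
            MemoryContext oldctx;

            SPI_cursor_fetch(portal, true, FETCH_SIZE);
            if (SPI_processed == 0)
                break;

            oldctx = MemoryContextSwitchTo(percall_ctx);
            /* ... examine SPI_tuptable, build candidates, etc. ... */
            MemoryContextSwitchTo(oldctx);

            SPI_freetuptable(SPI_tuptable);
            MemoryContextReset(percall_ctx);  /* drop per-fetch garbage */
        }

        MemoryContextDelete(percall_ctx);
    }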

                        regards, tom lane


-- 
Sent via pgsql-hackers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers