Hello everybody,

we've developed a function which reads a huge amount
of data from postgres and, being recursive, performs
several memory-intensive computations and writes the
results back to two postgres tables. No memory context
switch is done anywhere in our function.

Now we have to compare this function with another one
which performs the same computations but reads the
data from a binary file and stores the results in
another file.

Both of them work in exactly the same way (we simply
ported our postgres module to work in memory), but
we've noticed rather different memory usage in the two
cases. The in-memory function seems to have much more
memory to work with, while the postgres one fails with
an out-of-memory error as soon as the data size exceeds
a certain limit.

As far as we know, this could be due to the limited
size of the TopMemoryContext in which dynamically
loaded modules work.
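For reference, here is a minimal sketch of what we mean by "no memory context switch" and where one could go. The function name `process_data` and the allocation sizes are hypothetical, and this only compiles against PostgreSQL's server headers, not standalone; it just illustrates the standard palloc/MemoryContextSwitchTo pattern:

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/memutils.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(process_data);

Datum
process_data(PG_FUNCTION_ARGS)
{
    MemoryContext oldctx;
    char *scratch;

    /* palloc() here allocates in the current (short-lived,
     * per-call) context, which the executor resets for us. */
    scratch = palloc(1024 * 1024);  /* hypothetical work buffer */

    /* Only data that must outlive the call should go into a
     * long-lived context such as TopMemoryContext; switch back
     * immediately so intermediate results don't accumulate there. */
    oldctx = MemoryContextSwitchTo(TopMemoryContext);
    /* ... allocate only what must persist across calls ... */
    MemoryContextSwitchTo(oldctx);

    pfree(scratch);
    PG_RETURN_VOID();
}
```

In our actual function all allocations simply happen in whatever context is current when the function is entered, which is why we suspect the context's limits matter here.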

Is there a way to expand the size of memory available
to our function?

Thanks a lot!

alice and lorena

