On 8/3/07, Fred Moyer <[EMAIL PROTECTED]> wrote:
> Ben Tilly wrote:
> > I am having a problem with a SELECT query against a large dataset.
> > (Several million rows.)  Because DBD::Pg pre-fetches data before
> > allowing me to process it, I'm using up a very large amount of RAM.
> > Enough that I'm concerned about running out.
>
> I remember hearing about this problem a couple of years ago and asking
> about it on #postgresql on freenode.  The crux of the issue was that the
> underlying libpq functions were doing the prefetching.  Whether that is
> still accurate I don't know; I haven't followed development closely
> enough, and I've always managed to work with a data slice that fit
> within my memory limits.

I know that the Java driver has a way to turn prefetching off, though I
don't know how it is done.  That may be a recent improvement in
PostgreSQL.

> > Is there any way to tell it that I wish to process data incrementally
> > and do not want all of the data in RAM at once?
>
> You could use a cursor to fetch the data incrementally:
>
> http://groups.google.com/group/pgsql.general/browse_thread/thread/f38a21bd4fc765b/c9357adab6bbacac%23c9357adab6bbacac
>
Thanks, that looks like the solution that I want to use. :-)
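For the archives, here is a minimal sketch of that cursor approach with
DBD::Pg; the connection parameters, table, and column names below are
hypothetical:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection; AutoCommit is off because DECLARE CURSOR
    # must run inside a transaction.
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    $dbh->do('DECLARE csr CURSOR FOR SELECT id, payload FROM big_table');

    while (1) {
        # Pull the result down in manageable chunks instead of all at once.
        my $sth = $dbh->prepare('FETCH 1000 FROM csr');
        $sth->execute;
        last if $sth->rows == 0;    # no rows left
        while (my $row = $sth->fetchrow_arrayref) {
            # ... process one row at a time ...
        }
    }

    $dbh->do('CLOSE csr');
    $dbh->commit;
    $dbh->disconnect;

Only the current chunk of 1000 rows is held in memory at a time, so RAM
use stays bounded no matter how large the result set is.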

Ben
