On Thu, Jan 24, 2002 at 12:40:55PM -0600, James.FitzGibbon wrote:
> 
> My real issue is that apparently there are Pro*C programs here which
> fetch multiple rows of very large recordsets and process them in chunks.
> A million-row recordset might be processed as 1000 chunks of 1000 rows
> each for example.
> 
> I am unclear as to whether doing this results in the entire recordset
> getting returned at once, or whether Pro*C just does n fetches for an
> array with n elements.

The latter.

> I've been led to believe that the former is true.
> If not, this whole question becomes academic, because I can just do my
> own fetch-one-row-at-a-time-into-a-list logic.

> Presuming that it does fetch multiple rows at once, this is where things
> get interesting with the DBI.  I can get one row at a time using fetchrow,
> or I can get everything at once using fetchall, but there doesn't seem
> to be anything in between.  I am for obvious reasons concerned about
> memory usage on the client side for the above-cited million row example.

DBD::Oracle uses Oracle's row cache. See the $dbh->{RowCacheSize} attribute.

Enable trace (around level 3 or 4) to see what size row cache you're
getting by default. Set $dbh->{RowCacheSize} before prepare() to adjust it.
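For the "in between" James was after, the DBI already provides it:
fetchall_arrayref() takes an optional $max_rows argument, so you can walk a
large result set in fixed-size chunks while the row cache limits round trips.
A minimal sketch, assuming a reachable Oracle instance (the DSN, credentials,
table name, and process_rows() handler below are placeholders, not real names):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder DSN/credentials -- adjust for your site.
my $dbh = DBI->connect('dbi:Oracle:MYDB', 'scott', 'tiger',
                       { RaiseError => 1, AutoCommit => 0 });

# Hint to DBD::Oracle: cache roughly this many rows per round trip.
# Must be set before prepare() to take effect for this statement.
$dbh->{RowCacheSize} = 1000;

# Trace level 3 shows the row cache size actually negotiated.
DBI->trace(3, 'dbi-trace.log');

my $sth = $dbh->prepare('SELECT id, payload FROM big_table');
$sth->execute;

# Fetch in chunks of 1000 rows; client memory stays bounded no
# matter how large the full result set is.
while (my $chunk = $sth->fetchall_arrayref(undef, 1000)) {
    last unless @$chunk;   # empty chunk: result set exhausted
    process_rows($chunk);  # hypothetical per-chunk handler
}

$sth->finish;
$dbh->disconnect;
```

Note that when $max_rows is given, fetchall_arrayref() returns a reference to
an empty array (not undef) once the rows run out, hence the explicit
`last unless @$chunk` guard.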

Tim.
