On Mon, Jul 30, 2012 at 9:59 PM, Jan Wieck <janwi...@yahoo.com> wrote:

> On 7/30/2012 8:11 PM, Leon Smith wrote:
>> One other possibility,  Tom Lane fretted ever so slightly about the use
>> of malloc/free per row... what about instead of PQsetSingleRowMode,  you
>> have PQsetChunkedRowMode that takes a chunkSize parameter.   A chunkSize
>> <= 0 would be equivalent to what we have today,   a chunkSize of 1 means
>> you get what you have from PQsetSingleRowMode,  and larger chunkSizes
>> would wait until n rows have been received before returning them all in
>> a single result.      I don't know that this suggestion is all that
>> important, but it seems like an obvious generalization that might
>> possibly be useful.
> It is questionable whether that actually adds any useful functionality.

This is true; I'm not sure my suggestion is necessarily useful. I'm just
throwing it out there.

> Any "collecting" of multiple rows only risks stalling receipt of the
> following rows while this batch is processed. Processing each row as soon
> as it is available ensures the best use of the network buffers.

This is not necessarily true, on multiple levels. Some of the programs I
write are highly concurrent, and this form of batching would carry almost no
risk of stalling the network buffer. And the likely use case is very small
rows, where several rows would typically fit inside a single network packet
or network buffer.

> Collecting multiple rows, like in the FETCH command for cursors does,
> makes sense when each batch introduces a network round trip, like for the
> FETCH command. But does it make any sense for a true streaming mode, like
> what is discussed here?

Maybe? I anticipate that there are (probably) still use cases for FETCH,
even when the row-at-a-time interface is a viable option and the transport
between postgres and the client has reasonable flow control.

