Problem solved. I set the fetchSize on the PreparedStatement to a
reasonable value instead of the default of unlimited, and now the query
completes without a problem. After some searching it seems this is a
common problem; would it make sense to change the default value to
something other than 0 in the JDBC driver?
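For reference, the change amounts to something like the sketch below
(table name and connection details are made up for illustration). Note
that in the PostgreSQL JDBC driver the fetchSize hint only causes
cursor-based fetching when autocommit is off and the ResultSet is the
default forward-only type; otherwise the driver still buffers the
entire result set on the client.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StreamingQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");

        // Required: the driver only uses a server-side cursor when
        // autocommit is disabled.
        conn.setAutoCommit(false);

        PreparedStatement stmt =
                conn.prepareStatement("SELECT * FROM big_table");
        // 0 (the default) means "fetch all rows at once"; a positive
        // value makes the driver pull rows in batches of this size.
        stmt.setFetchSize(50);

        try (ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                // Process one batch of rows at a time instead of
                // holding the whole result set in client memory.
            }
        }
        conn.commit();
        conn.close();
    }
}
```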
If I get some extra time I'll look into libpq and see what is required
to fix the API. Most third-party programs and existing JDBC apps won't
work with the current paradigm when returning large result sets.
On Mon, 13 Sep 2004 21:49:14 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Stephen Crowley <[EMAIL PROTECTED]> writes:
> > On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> Stephen Crowley <[EMAIL PROTECTED]> writes:
> >>> Does postgres cache the entire result set before it begins returning
> >>> data to the client?
> >> The backend doesn't, but libpq does, and I think JDBC does too.
> > That is incredible. Why would libpq do such a thing?
> Because the API it presents doesn't allow for the possibility of query
> failure after having given you back a PGresult: either you have the
> whole result available with no further worries, or you don't.
> If you think it's "incredible", let's see you design an equally
> easy-to-use API that doesn't make this assumption.
> (Now having said that, I would have no objection to someone extending
> libpq to offer an alternative streaming API for query results. It
> hasn't got to the top of anyone's to-do list though ... and I'm
> unconvinced that psql could use it if it did exist.)