Tom, I agree this is entirely a client-side issue. Regardless, as Simon
says, it would be useful to have some documentation for client implementors.

Sehrope, thanks for the JDBC link! I was actually thinking of going about
it another way in Npgsql:

   1. Send messages normally until the first Execute message is sent.
   2. From that point on, socket writes are simply non-blocking. As long as
   buffers aren't full, there's no issue and we continue writing. The
   moment a non-blocking write exits because it would block, we transfer
   control to the user, who can now read data from queries (the ADO.NET
   API allows for multiple resultsets).
   3. When the user finishes processing the resultsets, control is
   transferred back to Npgsql, which continues sending messages (back to
   step 1).

This approach has the advantage of not depending on buffer sizes or on
guessing how many bytes the server will send: we simply write as much as
we can without blocking, switch to reading until we've exhausted the
outstanding data, and then go back to writing. The main issue I'm
concerned about is SSL/TLS, which is a layer on top of the socket and
might not work well with non-blocking sockets...
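
To make step 2 concrete, here's a minimal sketch of the write loop I have
in mind, using a raw non-blocking .NET socket. The class and member names
are illustrative only, not actual Npgsql internals:

using System.Collections.Generic;
using System.Net.Sockets;

class DeadlockFreePipeline  // hypothetical name, for illustration only
{
    readonly Socket _socket;                                // connection to the backend
    readonly Queue<byte[]> _pending = new Queue<byte[]>();  // serialized protocol messages
    int _offset;                                            // bytes of head message already sent

    internal DeadlockFreePipeline(Socket socket)
    {
        _socket = socket;
        _socket.Blocking = false;  // step 2: writes must never block
    }

    internal void Enqueue(byte[] message) => _pending.Enqueue(message);

    // Returns true when all pending messages were flushed (back to step 1),
    // false when a write would block and control should pass to the user
    // so they can start reading resultsets (step 2 -> step 3).
    internal bool TryFlush()
    {
        while (_pending.Count > 0)
        {
            var msg = _pending.Peek();
            try
            {
                _offset += _socket.Send(msg, _offset, msg.Length - _offset, SocketFlags.None);
            }
            catch (SocketException e) when (e.SocketErrorCode == SocketError.WouldBlock)
            {
                return false;  // kernel send buffer is full: go drain the read side
            }
            if (_offset == msg.Length)
            {
                _pending.Dequeue();
                _offset = 0;
            }
        }
        return true;
    }
}

Once TryFlush() returns false, the reader would hand resultsets to the
user until they're exhausted and then call TryFlush() again. It's also
exactly the spot where the SSL/TLS concern above bites, since SslStream
sits between code like this and the raw socket.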

Any comments?

Shay

On Sat, Jun 13, 2015 at 5:08 AM, Simon Riggs <si...@2ndquadrant.com> wrote:

> On 12 June 2015 at 20:06, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Simon Riggs <si...@2ndquadrant.com> writes:
>> > On 11 June 2015 at 22:12, Shay Rojansky <r...@roji.org> wrote:
>> >> Just in case it's interesting to you... The reason we implemented
>> >> things this way is in order to avoid a deadlock situation - if we send
>> >> two queries as P1/D1/B1/E1/P2/D2/B2/E2, and the first query has a
>> >> large resultset, PostgreSQL may block writing the resultset, since
>> >> Npgsql isn't reading it at that point. Npgsql on its part may get
>> >> stuck writing the second query (if it's big enough) since PostgreSQL
>> >> isn't reading on its end (thanks to Emil Lenngren for pointing this
>> >> out originally).
>>
>> > That part does sound like a problem that we have no good answer to.
>> > Sounds worth starting a new thread on that.
>>
>> I do not accept that the backend needs to deal with that; it's the
>> responsibility of the client side to manage buffering properly if it is
>> trying to overlap sending the next query with receipt of data from a
>> previous one.  See commit 2a3f6e368 for a related issue in libpq.
>>
>
> Then it's our responsibility to define what "manage buffering properly"
> means and document it.
>
> People should be able to talk to us without risk of deadlock.
>
> --
> Simon Riggs                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
