On 12 June 2015 at 20:06, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Simon Riggs <si...@2ndquadrant.com> writes:
> > On 11 June 2015 at 22:12, Shay Rojansky <r...@roji.org> wrote:
> >> Just in case it's interesting to you... The reason we implemented
> >> things this way is in order to avoid a deadlock situation - if we send
> >> two queries as P1/D1/B1/E1/P2/D2/B2/E2, and the first query has a large
> >> resultset, PostgreSQL may block writing the resultset, since Npgsql
> >> isn't reading it at that point. Npgsql on its part may get stuck
> >> writing the second query (if it's big enough) since PostgreSQL isn't
> >> reading on its end (thanks to Emil Lenngren for pointing this out
> >> originally).
>
> > That part does sound like a problem that we have no good answer to.
> > Sounds worth starting a new thread on that.
>
> I do not accept that the backend needs to deal with that; it's the
> responsibility of the client side to manage buffering properly if it is
> trying to overlap sending the next query with receipt of data from a
> previous one.  See commit 2a3f6e368 for a related issue in libpq.
>

Then it's our responsibility to define what "manage buffering properly"
means and document it.

People should be able to talk to us without risk of deadlock.
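To make the client-side obligation concrete, here is a minimal sketch (not Npgsql or libpq code; all names and buffer sizes are illustrative) of the "manage buffering properly" loop a driver needs: while sending the next pipelined query, never block on the socket write alone, but keep draining any result bytes the server is pushing back. The toy "backend" below just streams a large result and then reads the next query, which is enough to reproduce the mutual-blocking shape described above.

```python
# Sketch only: a select()-based send loop that drains incoming result data
# while writing the next query, so neither side can wedge on a full buffer.
# serve() is a stand-in for the backend, not the PostgreSQL protocol.
import select
import socket
import threading


def serve(sock, result_size, query_size):
    """Toy backend: stream a large result, then read the next query."""
    blob = b"r" * 65536
    sent = 0
    while sent < result_size:
        sent += sock.send(blob[: result_size - sent])
    got = 0
    while got < query_size:
        got += len(sock.recv(65536))
    sock.close()


def send_without_deadlock(sock, payload):
    """Send `payload`, reading result bytes whenever the send would block.

    A naive blocking sendall() here can deadlock once both kernel buffers
    fill; interleaving reads is the client's escape hatch.
    """
    sock.setblocking(False)
    received = bytearray()
    sent = 0
    while sent < len(payload):
        readable, writable, _ = select.select([sock], [sock], [])
        if readable:
            chunk = sock.recv(65536)
            if chunk:
                received.extend(chunk)
        if writable:
            try:
                sent += sock.send(payload[sent:sent + 65536])
            except BlockingIOError:
                pass  # send buffer filled between select() and send()
    # Query fully sent; drain the rest of the result until the peer closes.
    sock.setblocking(True)
    while True:
        chunk = sock.recv(65536)
        if not chunk:
            break
        received.extend(chunk)
    return bytes(received)


if __name__ == "__main__":
    client, backend = socket.socketpair()
    RESULT, QUERY = 1 << 20, 1 << 20  # both larger than typical buffers
    t = threading.Thread(target=serve, args=(backend, RESULT, QUERY))
    t.start()
    result = send_without_deadlock(client, b"q" * QUERY)
    t.join()
    print("received", len(result), "result bytes")
```

The same shape applies whichever side of the API you are on: the server is entitled to block writing a resultset, so the client has to be the one that refuses to block on writes while reads are pending.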

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
