On 14 October 2016 at 18:09, Shay Rojansky <r...@roji.org> wrote:
>> > It has recently come to my attention that this implementation is
>> > problematic
>> > because it forces the batch to occur within a transaction; in other
>> > words,
>> > there's no option for a non-transactional batch.
>> That's not strictly the case. If you explicitly BEGIN and COMMIT,
>> those operations are honoured within the batch.
> I wasn't precise in my formulation (although I think we understand each
> other)... The problem I'm trying to address here is the fact that in the
> "usual" batching implementation (i.e. where a single Sync message is sent at
> the end of the batch), there's no support for batches which have no
> transactions whatsoever (i.e. where each statement is auto-committed and
> errors in earlier statements don't trigger skipping of later statements).

Right, you can't use implicit transactions delimited by protocol
message boundaries when batching.
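To illustrate the distinction (a sketch, not libpq or Npgsql code; `batch_messages` is a hypothetical helper), the two batching styles differ only in where the Sync messages land in the extended-query protocol stream:

```python
# Hypothetical sketch: model which extended-query protocol messages a
# driver would put on the wire for a batch, depending on Sync placement.

def batch_messages(statements, sync_per_statement):
    """Return the ordered protocol messages for a batch.

    sync_per_statement=False: one trailing Sync, so the whole batch runs
    in a single implicit transaction and an error skips what follows.
    sync_per_statement=True: a Sync after each Execute, so each statement
    is auto-committed and later statements still run after an error.
    """
    msgs = []
    for stmt in statements:
        msgs += [f"Parse({stmt})", f"Bind({stmt})",
                 f"Describe({stmt})", f"Execute({stmt})"]
        if sync_per_statement:
            msgs.append("Sync")  # end of a 1-statement implicit transaction
    if not sync_per_statement:
        msgs.append("Sync")  # one implicit transaction for the whole batch
    return msgs
```

A series of 1-query batches, as described below, is exactly the `sync_per_statement=True` stream.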

>> The design I have in libpq allows for this by allowing the client to
>> delimit batches without ending batch mode, concurrently consuming a
>> stream of multiple batches. Each endbatch is a Sync. So a client that
>> wants autocommit-like behaviour can send a series of 1-query batches.
> Ah, I see. libpq's API is considerably more low-level than what Npgsql needs
> to provide. If I understand correctly, you allow users to specify exactly
> where to insert Sync messages (if at all)

They don't get total control, but they can cause a Sync to be emitted
after any given statement when in batch mode.

> so that any number of statements
> arbitrarily interleaved with Sync messages can be sent without starting to
> read any results.


> if so, then the user indeed has everything they need to
> control the exact transactional behavior they want (including full
> auto-commit) without compromising on performance in any way (i.e. by
> increasing roundtrips).


> The only minor problem I can see is that PQsendEndBatch not only adds a Sync
> message to the buffer, but also flushes it.

It only does a non-blocking flush.

> This means that you may be
> forcing users to needlessly flush the buffer just because they wanted to
> insert a Sync. In other words, users can't send the following messages in a
> single buffer/packet:
> Prepare1/Bind1/Describe1/Execute1/Sync1/Prepare2/Bind2/Describe2/Execute2/Sync2
> - they have to be split into different packets.

Oh, right. That's true, but I'm not really sure we care. I suspect
the OS will tend to coalesce them anyway, since we're not actually
waiting for the socket to send each message, at least when the socket
can't send as fast as input arrives. It might matter for local
performance in incredibly high-throughput applications, but I suspect
there will be a great many other things that come first.

Anyway, the client can already control this with TCP_CORK.

>  Of course, this is a
> relatively minor performance issue (especially when compared to the overall
> performance benefits provided by batching), and providing an API distinction
> between adding a Sync and flushing the buffer may over-complicate the API. I
> just thought I'd mention it.

Unnecessary IMO. If we really want to add it later we'd probably do so
by setting a "no flush on endbatch" mode and adding an explicit flush
call. But I expect TCP_CORK will satisfy all realistic needs.

> That's a good point. I definitely don't want to depend on client-side
> parsing of SQL in any way (in fact a planned feature is to allow using
> Npgsql without any sort of client-side parsing/manipulation of SQL).
> However, the fact that BEGIN/COMMIT can be sent in batches doesn't appear
> too problematic to me.
> When it's about to send a batch, Npgsql knows whether it's in an (explicit)
> transaction or not (by examining the transaction indicator on the last
> ReadyForQuery message it received). If it's not in an (explicit)
> transaction, it automatically inserts a Sync message after every Execute. If
> some statement happens to be a BEGIN, it will be executed as a normal
> statement and so on. The only issue is that if an error occurs after a
> sneaked-in BEGIN, all subsequent statements will fail even though they
> all have the Sync messages Npgsql inserted. The end result will be a
> series of errors that
> will be raised up to the user, but this isn't fundamentally different from
> the case of a simple auto-commit batch containing multiple failures (because
> of unique constraint violation or whatever) - multiple errors is something
> that will have to be handled in any case.

I'm a bit hesitant about how this will interact with multi-statements
containing an embedded BEGIN or COMMIT, where a single protocol message
contains multiple semicolon-delimited queries. But at least at this
time of night I can't give a concrete problematic example.
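The Sync-placement rule Shay describes could be sketched as follows (a hypothetical helper, not actual Npgsql code): the driver inspects the transaction indicator from the last ReadyForQuery and decides where Syncs go.

```python
# Hypothetical sketch of the rule described above: choose where to emit
# Sync based on the transaction indicator of the last ReadyForQuery.

def sync_positions(n_statements, txn_status):
    """Return the 0-based Execute indices after which a Sync is sent.

    txn_status is the ReadyForQuery indicator byte: 'I' = idle (no
    explicit transaction), 'T' = in a transaction, 'E' = failed
    transaction.
    """
    if txn_status == 'I':
        # Not in an explicit transaction: auto-commit each statement,
        # so a Sync follows every Execute.
        return list(range(n_statements))
    # Inside an explicit transaction: a single Sync ends the batch.
    return [n_statements - 1]
```

A batch whose first statement is BEGIN would still be planned with per-statement Syncs here (the driver was idle when it sent the batch), which is exactly the sneaked-in-BEGIN case discussed above.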

 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)