Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2021-03-18 Thread Matthieu Garrigues
Thanks a lot for the merge. I did some tests and the master branch
runs up to 15% faster than the last
patch I tried (v22). Amazing!

Cheers,
Matthieu Garrigues

On Tue, Mar 16, 2021 at 9:00 PM Andres Freund  wrote:
>
> Hi,
>
> On 2021-03-05 21:35:59 -0300, Alvaro Herrera wrote:
> > I'll take the weekend to think about the issue with conn->last_query and
> > conn->queryclass that I mentioned yesterday; other than that detail my
> > feeling is that this is committable, so I'll be looking at getting this
> > pushed early next weeks, barring opinions from others.
>
> It is *very* exciting to see this being merged. Thanks for all the work
> to all that contributed!
>
> Greetings,
>
> Andres Freund




Re: PATCH: Batch/pipelining support for libpq

2020-11-12 Thread Matthieu Garrigues
Hi David,

Thanks for the feedback. I did rework a bit the doc based on your
remarks. Here is the v24 patch.

Matthieu Garrigues

On Tue, Nov 3, 2020 at 6:21 PM David G. Johnston
 wrote:
>
> On Mon, Nov 2, 2020 at 8:58 AM Alvaro Herrera  wrote:
>>
>> On 2020-Nov-02, Alvaro Herrera wrote:
>>
>> > In v23 I've gone over docs; discovered that PQgetResults docs were
>> > missing the new values.  Added those.  No significant other changes yet.
>>
>
> Just reading the documentation of this patch, haven't been following the 
> longer thread:
>
> Given the caveats around blocking mode connections why not just require 
> non-blocking mode, in a similar fashion to how synchronous functions are 
> disallowed?
>
> "Batched operations will be executed by the server in the order the client
> sends them. The server will send the results in the order the statements
> executed."
>
> Maybe:
>
> "The server executes statements, and returns results, in the order the client 
> sends them."
>
> Using two sentences and relying on the user to mentally link the two "in the 
> order" descriptions together seems to add unnecessary cognitive load.
>
> + The client interleaves result
> + processing with sending batch queries, or for small batches may
> + process all results after sending the whole batch.
>
> Suggest: "The client may choose to interleave result processing with sending 
> batch queries, or wait until the complete batch has been sent."
>
> I would expect to process the results of a batch only after sending the 
> entire batch to the server.  That I don't have to is informative but knowing 
> when I should avoid doing so, and why, is informative as well.  To the 
> extreme while you can use batch mode and interleave if you just poll 
> getResult after every command you will make the whole batch thing pointless.  
> Directing the reader from here to the section "Interleaving Result Processing 
> and Query Dispatch" seems worth considering.  The dynamics of small sizes and 
> sockets remains a bit unclear as to what will break (if anything, or is it 
> just process memory on the server) if interleaving it not performed and sizes 
> are large.
>
> I would suggest placing commentary about "all transactions subsequent to a 
> failed transaction in a batch are ignored while previous completed 
> transactions are retained" in the "When to Use Batching".  Something like 
> "Batching is less useful, and more complex, when a single batch contains 
> multiple transactions (see Error Handling)."
>
> My imagined use case would be to open a batch, start a transaction, send all 
> of its components, end the transaction, end the batch, check for batch 
> failure and if it doesn't fail have the option to easily continue without 
> processing individual pgResults (or if it does fail, have the option to 
> extract the first error pgResult and continue, ignoring the rest, knowing 
> that the transaction as a whole was reverted and the batch unapplied).  I've 
> never interfaced with libpq directly.  Though given how the existing C API 
> works what is implemented here seems consistent.
>
> The "queueing up queries into a pipeline to be executed as a batch on the 
> server" can be read as a client-side behavior where nothing is sent to the 
> server until the batch has been completed.  Reading further it becomes clear 
> that all it basically is is a sever-side toggle that instructs the server to 
> continue processing incoming commands even while prior commands have their 
> results waiting to be ingested by the client.
>
> Batch seems like the user-visible term to describe this feature.  Pipeline 
> seems like an implementation detail that doesn't need to be mentioned in the 
> documentation - especially given that pipeline doesn't get a mentioned beyond 
> the first two paragraphs of the chapter and never without being linked 
> directly to "batch".  I would probably leave the indexterm and have a 
> paragraph describing that batching is implemented using a query pipeline so 
> that people with the implementation detail on their mind can find this 
> chapter, but the prose for the user should just stick to batching.
>
> Sorry, that all is a bit unfocused, but the documentation for the user of the 
> API could be cleaned up a bit and some more words spent on what trade-offs 
> are being made when using batching versus normal command-response processing. 
>  That said, while I don't see all of this purely a matter of style I'm also 
> not seeing anything demonstrably wrong with the documentation at the moment.  
> Hopefully my pers

Re: PATCH: Batch/pipelining support for libpq

2020-11-03 Thread Matthieu Garrigues
I implemented a C++ async HTTP server using this new batch mode and it
provides everything I needed to transparently batch sql requests.
It gives a performance boost  between x2 and x3 on this benchmark:
https://www.techempower.com/benchmarks/#section=test=3097dbae-5228-454c-ba2e-2055d3982790=ph=query=2=zik0zj-zik0zj-zik0zj-zik0zj-zieepr-zik0zj-zik0zj-zik0zj-zik0zj-zik0zj-zik0zj

I'll ask other users interested in this to review the API.

Matthieu Garrigues

On Tue, Nov 3, 2020 at 4:56 PM Dave Cramer  wrote:
>
>
>
> On Tue, 3 Nov 2020 at 08:42, Alvaro Herrera  wrote:
>>
>> Hi Dave,
>>
>> On 2020-Nov-03, Dave Cramer wrote:
>>
>> > On Mon, 2 Nov 2020 at 10:57, Alvaro Herrera  
>> > wrote:
>> >
>> > > On 2020-Nov-02, Alvaro Herrera wrote:
>> > >
>> > > > In v23 I've gone over docs; discovered that PQgetResults docs were
>> > > > missing the new values.  Added those.  No significant other changes 
>> > > > yet.
>> >
>> > Thanks for looking at this.
>> >
>> > What else does it need to get it in shape to apply?
>>
>> I want to go over the code in depth to grok the design more fully.
>>
>> It would definitely help if you (and others) could think about the API
>> being added: Does it fulfill the promises being made?  Does it offer the
>> guarantees that real-world apps want to have?  I'm not much of an
>> application writer myself -- particularly high-traffic apps that would
>> want to use this.  As a driver author I would welcome your insight in
>> these questions.
>>
>
> I'm sort of in the same boat as you. While I'm closer to the client. I don't 
> personally write that much client code.
>
> I'd really like to hear from the users here.
>
>
> Dave Cramer
> www.postgres.rocks




Re: PATCH: Batch/pipelining support for libpq

2020-10-01 Thread Matthieu Garrigues
This patch fixes compilation on windows and compilation of the documentation.

Matthieu Garrigues

On Thu, Oct 1, 2020 at 8:41 AM Matthieu Garrigues
 wrote:
>
> On Thu, Oct 1, 2020 at 6:35 AM Michael Paquier  wrote:
>
> > The documentation is failing to build, and the patch does not build
> > correctly on Windows.  Could you address that?
> > --
> > Michael
>
> Yes I'm on it.
>
> --
> Matthieu
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 92556c7ce0..932561d0c5 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -4829,6 +4829,465 @@ int PQflush(PGconn *conn);
 
  
 
+ 
+  Batch mode and query pipelining
+
+  
+   libpq
+   batch mode
+  
+
+  
+   libpq
+   pipelining
+  
+
+  
+   libpq supports queueing up queries into
+   a pipeline to be executed as a batch on the server. Batching queries allows
+   applications to avoid a client/server round-trip after each query to get
+   the results before issuing the next query.
+  
+
+  
+   When to use batching
+
+   
+Much like asynchronous query mode, there is no performance disadvantage to
+using batching and pipelining. It increases client application complexity
+and extra caution is required to prevent client/server deadlocks but
+can sometimes offer considerable performance improvements.
+   
+
+   
+Batching is most useful when the server is distant, i.e. network latency
+(ping time) is high, and when many small operations are being performed in
+rapid sequence. There is usually less benefit in using batches when each
+query takes many multiples of the client/server round-trip time to execute.
+A 100-statement operation run on a server 300ms round-trip-time away would take
+30 seconds in network latency alone without batching; with batching it may spend
+as little as 0.3s waiting for results from the server.
+   
+
+   
+Use batches when your application does lots of small
+INSERT, UPDATE and
+DELETE operations that can't easily be transformed into
+operations on sets or into a
+COPY operation.
+   
+
+   
+Batching is not useful when information from one operation is required by the
+client before it knows enough to send the next operation. The client must
+introduce a synchronisation point and wait for a full client/server
+round-trip to get the results it needs. However, it's often possible to
+adjust the client design to exchange the required information server-side.
+Read-modify-write cycles are especially good candidates; for example:
+
+ BEGIN;
+ SELECT x FROM mytable WHERE id = 42 FOR UPDATE;
+ -- result: x=2
+ -- client adds 1 to x:
+ UPDATE mytable SET x = 3 WHERE id = 42;
+ COMMIT;
+
+could be much more efficiently done with:
+
+ UPDATE mytable SET x = x + 1 WHERE id = 42;
+
+   
+
+   
+
+ The batch API was introduced in PostgreSQL 14.0, but clients using PostgresSQL 14.0 version of libpq can
+ use batches on server versions 7.4 and newer. Batching works on any server
+ that supports the v3 extended query protocol.
+
+   
+
+  
+
+  
+   Using batch mode
+
+   
+To issue batches the application must switch
+a connection into batch mode. Enter batch mode with PQenterBatchMode(conn) or test
+whether batch mode is active with PQbatchStatus(conn). In batch mode only asynchronous operations are permitted, and
+COPY is not recommended as it most likely will trigger failure in batch processing. 
+Using any synchronous command execution functions such as PQfn,
+PQexec or one of its sibling functions are error conditions.
+Functions allowed in batch mode are described in . 
+   
+
+   
+The client uses libpq's asynchronous query functions to dispatch work,
+marking the end of each batch with PQbatchSendQueue.
+And to get results, it uses PQgetResult. It may eventually exit
+batch mode with PQexitBatchMode once all results are
+processed.
+   
+
+   
+
+ It is best to use batch mode with libpq in
+ non-blocking mode. If used in
+ blocking mode it is possible for a client/server deadlock to occur. The
+ client will block trying to send queries to the server, but the server will
+ block trying to send results from queries it has already processed to the
+ client. This only occurs when the client sends enough queries to fill its
+ output buffer and the server's receive buffer before switching to
+ processing input from the server, but it's hard to predict exactly when
+ that'll happen so it's best to always use non-blocking mode.
+ Batch mode consumes more memory when send/recv is not done as required even in non-blocking mode.
+
+   
+
+   
+Issuing queries
+
+
+ After entering batch mode the application dispatches requests
+ using normal asynchronous libpq functions such as 
+ PQsendQuery

Re: PATCH: Batch/pipelining support for libpq

2020-10-01 Thread Matthieu Garrigues
On Thu, Oct 1, 2020 at 6:35 AM Michael Paquier  wrote:

> The documentation is failing to build, and the patch does not build
> correctly on Windows.  Could you address that?
> --
> Michael

Yes I'm on it.

-- 
Matthieu




Re: PATCH: Batch/pipelining support for libpq

2020-09-21 Thread Matthieu Garrigues
Hi Dave,
I merged PQbatchProcessQueue into PQgetResult.
One first init call to PQbatchProcessQueue was also required in
PQsendQueue to have
PQgetResult ready to read the first batch query.

Tests and documentation are updated accordingly.

Matthieu Garrigues

On Mon, Sep 21, 2020 at 3:39 PM Dave Cramer  wrote:
>
>
>
> On Mon, 21 Sep 2020 at 09:21, Matthieu Garrigues 
>  wrote:
>>
>> Matthieu Garrigues
>>
>> On Mon, Sep 21, 2020 at 3:09 PM Dave Cramer  
>> wrote:
>> >>
>> > There was a comment upthread a while back that people should look at the 
>> > comments made in 
>> > https://www.postgresql.org/message-id/20180322.211148.187821341.horiguchi.kyotaro%40lab.ntt.co.jp
>> >  by Horiguchi-San.
>> >
>> > From what I can tell this has not been addressed. The one big thing is the 
>> > use of PQbatchProcessQueue vs just putting it in PQgetResult.
>> >
>> > The argument is that adding PQbatchProcessQueue is unnecessary and just 
>> > adds another step. Looking at this, it seems like putting this inside 
>> > PQgetResult would get my vote as it leaves the interface unchanged.
>> >
>>
>> Ok. I'll merge PQbatchProcessQueue into PQgetResult. But just one
>> thing: I'll keep PQgetResult returning null between the result of two
>> batched query so the user
>> can know which result comes from which query.
>
>
> Fair enough.
>
> There may be other things in his comments that need to be addressed. That was 
> the big one that stuck out for me.
>
> Thanks for working on this!
>
>
> Dave Cramer
> www.postgres.rocks
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 92556c7ce0..15d0c03c89 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -4829,6 +4829,465 @@ int PQflush(PGconn *conn);
 
  
 
+ 
+  Batch mode and query pipelining
+
+  
+   libpq
+   batch mode
+  
+
+  
+   libpq
+   pipelining
+  
+
+  
+   libpq supports queueing up queries into
+   a pipeline to be executed as a batch on the server. Batching queries allows
+   applications to avoid a client/server round-trip after each query to get
+   the results before issuing the next query.
+  
+
+  
+   When to use batching
+
+   
+Much like asynchronous query mode, there is no performance disadvantage to
+using batching and pipelining. It increases client application complexity
+and extra caution is required to prevent client/server deadlocks but
+can sometimes offer considerable performance improvements.
+   
+
+   
+Batching is most useful when the server is distant, i.e. network latency
+(ping time) is high, and when many small operations are being performed in
+rapid sequence. There is usually less benefit in using batches when each
+query takes many multiples of the client/server round-trip time to execute.
+A 100-statement operation run on a server 300ms round-trip-time away would take
+30 seconds in network latency alone without batching; with batching it may spend
+as little as 0.3s waiting for results from the server.
+   
+
+   
+Use batches when your application does lots of small
+INSERT, UPDATE and
+DELETE operations that can't easily be transformed into
+operations on sets or into a
+COPY operation.
+   
+
+   
+Batching is not useful when information from one operation is required by the
+client before it knows enough to send the next operation. The client must
+introduce a synchronisation point and wait for a full client/server
+round-trip to get the results it needs. However, it's often possible to
+adjust the client design to exchange the required information server-side.
+Read-modify-write cycles are especially good candidates; for example:
+
+ BEGIN;
+ SELECT x FROM mytable WHERE id = 42 FOR UPDATE;
+ -- result: x=2
+ -- client adds 1 to x:
+ UPDATE mytable SET x = 3 WHERE id = 42;
+ COMMIT;
+
+could be much more efficiently done with:
+
+ UPDATE mytable SET x = x + 1 WHERE id = 42;
+
+   
+
+   
+
+ The batch API was introduced in PostgreSQL 10.0, but clients using PostgresSQL 10.0 version of libpq can
+ use batches on server versions 7.4 and newer. Batching works on any server
+ that supports the v3 extended query protocol.
+
+   
+
+  
+
+  
+   Using batch mode
+
+   
+To issue batches the application must switch
+a connection into batch mode. Enter batch mode with PQenterBatchMode(conn) or test
+whether batch mode is active with PQbatchStatus(conn). In batch mode only asynchronous operations are permitted, and
+COPY is not recommended as it most likely will trigger failure in batch processing. 
+Using any synchronous command execution functions such as PQfn,
+PQexec or one of its sibling fu

Re: PATCH: Batch/pipelining support for libpq

2020-09-21 Thread Matthieu Garrigues
On Mon, Sep 21, 2020 at 3:39 PM Dave Cramer  wrote:
>
>
>
> On Mon, 21 Sep 2020 at 09:21, Matthieu Garrigues 
>  wrote:
>>
>> Matthieu Garrigues
>>
>> On Mon, Sep 21, 2020 at 3:09 PM Dave Cramer  
>> wrote:
>> >>
>> > There was a comment upthread a while back that people should look at the 
>> > comments made in 
>> > https://www.postgresql.org/message-id/20180322.211148.187821341.horiguchi.kyotaro%40lab.ntt.co.jp
>> >  by Horiguchi-San.
>> >
>> > From what I can tell this has not been addressed. The one big thing is the 
>> > use of PQbatchProcessQueue vs just putting it in PQgetResult.
>> >
>> > The argument is that adding PQbatchProcessQueue is unnecessary and just 
>> > adds another step. Looking at this, it seems like putting this inside 
>> > PQgetResult would get my vote as it leaves the interface unchanged.
>> >
>>
>> Ok. I'll merge PQbatchProcessQueue into PQgetResult. But just one
>> thing: I'll keep PQgetResult returning null between the result of two
>> batched query so the user
>> can know which result comes from which query.
>
>
> Fair enough.
>
> There may be other things in his comments that need to be addressed. That was 
> the big one that stuck out for me.
>
> Thanks for working on this!
>

Yes I already addressed the other things in the v19 patch:
https://www.postgresql.org/message-id/flat/cajkzx4t5e-2cqe3dtv2r78dyfvz+in8py7a8marvlhs_pg7...@mail.gmail.com




Re: PATCH: Batch/pipelining support for libpq

2020-09-21 Thread Matthieu Garrigues
Matthieu Garrigues

On Mon, Sep 21, 2020 at 3:09 PM Dave Cramer  wrote:
>>
> There was a comment upthread a while back that people should look at the 
> comments made in 
> https://www.postgresql.org/message-id/20180322.211148.187821341.horiguchi.kyotaro%40lab.ntt.co.jp
>  by Horiguchi-San.
>
> From what I can tell this has not been addressed. The one big thing is the 
> use of PQbatchProcessQueue vs just putting it in PQgetResult.
>
> The argument is that adding PQbatchProcessQueue is unnecessary and just adds 
> another step. Looking at this, it seems like putting this inside PQgetResult 
> would get my vote as it leaves the interface unchanged.
>

Ok. I'll merge PQbatchProcessQueue into PQgetResult. But just one
thing: I'll keep PQgetResult returning null between the result of two
batched query so the user
can know which result comes from which query.




Re: PATCH: Batch/pipelining support for libpq

2020-08-31 Thread Matthieu Garrigues
Hi,

It seems like this patch is nearly finished. I fixed all the remaining
issues. I'm also asking
a confirmation of the test scenarios you want to see in the next
version of the patch.

> Hi,
>
> On 2020-07-10 19:01:49 -0400, Alvaro Herrera wrote:
> > Totally unasked for, here's a rebase of this patch series.  I didn't do
> > anything other than rebasing to current master, solving a couple of very
> > trivial conflicts, fixing some whitespace complaints by git apply, and
> > running tests to verify everthing works.
> >
> > I don't foresee working on this at all, so if anyone is interested in
> > seeing this feature in, I encourage them to read and address
> > Horiguchi-san's feedback.
>
> Nor am I planning to do so, but I do think its a pretty important
> improvement.
>
>
Fixed

>
>
> > +/*
> > + * PQrecyclePipelinedCommand
> > + * Push a command queue entry onto the freelist. It must be a dangling 
> > entry
> > + * with null next pointer and not referenced by any other entry's next 
> > pointer.
> > + */
>
> Dangling sounds a bit like it's already freed.
>
>
Fixed

>
>
> > +/*
> > + * PQbatchSendQueue
> > + * End a batch submission by sending a protocol sync. The connection will
> > + * remain in batch mode and unavailable for new synchronous command 
> > execution
> > + * functions until all results from the batch are processed by the client.
>
> I feel like the reference to the protocol sync is a bit too low level
> for an external API. It should first document what the function does
> from a user's POV.
>
> I think it'd also be good to document whether / whether not queries can
> already have been sent before PQbatchSendQueue is called or not.
>
Fixed

>
>
>
> > + if (conn->batch_status == PQBATCH_MODE_ABORTED && conn->queryclass != 
> > PGQUERY_SYNC)
> > + {
> > + /*
> > + * In an aborted batch we don't get anything from the server for each
> > + * result; we're just discarding input until we get to the next sync
> > + * from the server. The client needs to know its queries got aborted
> > + * so we create a fake PGresult to return immediately from
> > + * PQgetResult.
> > + */
> > + conn->result = PQmakeEmptyPGresult(conn,
> > +   PGRES_BATCH_ABORTED);
> > + if (!conn->result)
> > + {
> > + printfPQExpBuffer(>errorMessage,
> > +  libpq_gettext("out of memory"));
> > + pqSaveErrorResult(conn);
> > + return 0;
>
> Is there any way an application can recover at this point? ISTM we'd be
> stuck in the previous asyncStatus, no?
>

conn->result is null when malloc(sizeof(PGresult)) returns null. It's
very unlikely unless
the server machine is out of memory, so the server will probably be
unresponsive anyway.

I'm leaving this as it is but if anyone has a solution simple to
implement I'll fix it.

>
>
> > +/* pqBatchFlush
> > + * In batch mode, data will be flushed only when the out buffer reaches 
> > the threshold value.
> > + * In non-batch mode, data will be flushed all the time.
> > + */
> > +static int
> > +pqBatchFlush(PGconn *conn)
> > +{
> > + if ((conn->batch_status == PQBATCH_MODE_OFF)||(conn->outCount >= 
> > OUTBUFFER_THRESHOLD))
> > + return(pqFlush(conn));
> > + return 0; /* Just to keep compiler quiet */
> > +}
>
> unnecessarily long line.
>
Fixed

>
> > +/*
> > + * Connection's outbuffer threshold is set to 64k as it is safe
> > + * in Windows as per comments in pqSendSome() API.
> > + */
> > +#define OUTBUFFER_THRESHOLD 65536
>
> I don't think the comment explains much. It's fine to send more than 64k
> with pqSendSome(), they'll just be send with separate pgsecure_write()
> invocations. And only on windows.
>
> It clearly makes sense to start sending out data at a certain
> granularity to avoid needing unnecessary amounts of memory, and to make
> more efficient use of latency / serer side compute.
>
> It's not implausible that 64k is the right amount for that, I just don't
> think the explanation above is good.
>

Fixed

> > diff --git a/src/test/modules/test_libpq/testlibpqbatch.c 
> > b/src/test/modules/test_libpq/testlibpqbatch.c
> > new file mode 100644
> > index 00..4d6ba266e5
> > --- /dev/null
> > +++ b/src/test/modules/test_libpq/testlibpqbatch.c
> > @@ -0,0 +1,1456 @@
> > +/*
> > + * src/test/modules/test_libpq/testlibpqbatch.c
> > + *
> > + *
> > + * testlibpqbatch.c
> > + * Test of batch execution functionality
> > + */
> > +
> > +#ifdef WIN32
> > +#include 
> > +#endif
>
> ISTM that this shouldn't be needed in a test program like this?
> Shouldn't libpq abstract all of this away?
>

Fixed.

>
> > +static void
> > +simple_batch(PGconn *conn)
> > +{
>
> ISTM that all or at least several of these should include tests of
> transactional behaviour with pipelining (e.g. using both implicit and
> explicit transactions inside a single batch, using transactions across
> batches, multiple explicit transactions inside a batch).
>

@Andres, just to make sure I understood, here is the test scenarios I'll add:

Implicit and explicit multiple transactions:
   start batch:
 

Re: libpq: Request Pipelining/Batching status ?

2020-07-15 Thread Matthieu Garrigues
Did my message made it to the mailing list ? or not yet ?

Matthieu Garrigues


On Fri, Jul 10, 2020 at 5:08 PM Matthieu Garrigues <
matthieu.garrig...@gmail.com> wrote:

> Hi all,
>
> Do you know what is the status of Request Pipelining and/or Batching in
> libpq ?
>
> I could see that I'm not the first one to think about it, I see an item in
> the todolist:
>
> https://web.archive.org/web/20200125013930/https://wiki.postgresql.org/wiki/Todo
>
> And a thread here:
>
> https://www.postgresql-archive.org/PATCH-Batch-pipelining-support-for-libpq-td5904551i80.html
>
> And a patch here:
> https://2ndquadrant.github.io/postgres/libpq-batch-mode.html
>
> Seems like this boost performances a lot, drogon, a c++ framework
> outperform all
> other web framework thanks to this fork:
> https://www.techempower.com/benchmarks/#section=data-r19=ph=update
> https://github.com/TechEmpower/FrameworkBenchmarks/issues/5502
>
> It would be nice to have it it the official libpq so we don't have to use
> an outdated fork
> to have this feature.
> Is anybody working on it ? is there lots of work to finalize this patch ?
>
> Thanks in advance,
> Matthieu
>
>


libpq: Request Pipelining/Batching status ?

2020-07-10 Thread Matthieu Garrigues
Hi all,

Do you know what is the status of Request Pipelining and/or Batching in
libpq ?

I could see that I'm not the first one to think about it, I see an item in
the todolist:
https://web.archive.org/web/20200125013930/https://wiki.postgresql.org/wiki/Todo

And a thread here:
https://www.postgresql-archive.org/PATCH-Batch-pipelining-support-for-libpq-td5904551i80.html

And a patch here:
https://2ndquadrant.github.io/postgres/libpq-batch-mode.html

Seems like this boost performances a lot, drogon, a c++ framework
outperform all
other web framework thanks to this fork:
https://www.techempower.com/benchmarks/#section=data-r19=ph=update
https://github.com/TechEmpower/FrameworkBenchmarks/issues/5502

It would be nice to have it it the official libpq so we don't have to use
an outdated fork
to have this feature.
Is anybody working on it ? is there lots of work to finalize this patch ?

Thanks in advance,
Matthieu