On 19 May 2016 at 01:39, Michael Paquier <michael.paqu...@gmail.com> wrote:

> On Wed, May 18, 2016 at 12:27 PM, Craig Ringer <cr...@2ndquadrant.com>
> wrote:
> > On 18 May 2016 at 06:08, Michael Paquier <michael.paqu...@gmail.com>
> wrote:
> >> > Wouldn’t it make sense to do the insert batch-wise, e.g. 100 rows?
> >>
> >> Using a single query string with multiple VALUES, perhaps, but then
> >> the query string length limit comes into consideration, particularly
> >> for large text values... The query used for the insertion is a
> >> prepared statement, since writable queries have been supported since
> >> 9.3, which keeps the code quite simple actually.
> >
> > This should be done the way PgJDBC does batches. It'd require a libpq
> > enhancement, but it's one we IMO need anyway: allow pipelined query
> > execution from libpq.
> That's also something that would be useful for the ODBC driver. Since
> it uses libpq as a hard dependency and does not speak the protocol
> directly, it does additional round trips to the server for exactly
> this reason when preparing a statement.

Good to know. It'll hurt especially badly when statement-level rollback is
enabled, since psqlODBC issues savepoints then, and pipelining would let it
get rid of an extra pair of round trips per statement.

It looks like there's plenty of use for this. FDWs, psqlODBC, client
applications doing batches, and Postgres-XL would all benefit from it too.

 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
