I've finished the section on batch updates in the JDBC 2.0
compliance documentation on
http://lab.applinet.nl/postgresql-jdbc/ (see the quote of the
relevant part below).

In the short term I think two things need to be fixed:
1) don't begin, commit or rollback a transaction implicitly in
   executeBatch()
2) have executeBatch() throw a BatchUpdateException when it is
required to do so by the JDBC spec

If there are no objections from this list I intend to submit a
patch that fixes 1), and perhaps also 2). 

Note that this may cause backward compatibility issues with JDBC
applications that have come to rely on the incorrect behaviour.
OTOH, there have been complaints on this list before, and those
people would certainly be happy about the fix.

In the long run it would be nice if the backend would support
returning one update count (and perhaps an OID) per statement
sent in a semicolon-separated multi-statement call. Would this
be something for the backend TODO list? OTOH, I'm not sure if
this (small?) performance improvement is worth the trouble.

"Batch updates

The driver supports batch updates with the addBatch, clearBatch
and executeBatch methods of Statement, PreparedStatement and
CallableStatement. DatabaseMetaData.supportsBatchUpdates()
returns true.
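For reference, basic use of that batch API can be sketched as follows
(the "employees" table and an already-open Connection are assumptions
for illustration only):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchDemo {
    // Queue several updates and send them as one batch.
    // Assumes an open connection and a hypothetical "employees" table.
    static int[] runBatch(Connection con) throws SQLException {
        Statement st = con.createStatement();
        try {
            st.addBatch("INSERT INTO employees VALUES (1, 'Alice')");
            st.addBatch("INSERT INTO employees VALUES (2, 'Bob')");
            return st.executeBatch(); // one update count per command
        } finally {
            st.close();
        }
    }

    // Pure helper: true when no command reported a failure
    // (a negative entry in the update count array).
    static boolean allSucceeded(int[] counts) {
        for (int c : counts) {
            if (c < 0) {
                return false;
            }
        }
        return true;
    }
}
```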

However, executing statements in a batch does not provide a
performance improvement with PostgreSQL, since all statements
are internally sent to the backend and processed one by one.
That defeats the purpose of the batch methods. The intended
behaviour is to send a set of update/insert/delete/DDL
statements in one round trip to the database. Unfortunately,
this optional JDBC feature cannot be implemented correctly with
PostgreSQL, since the backend only returns the update count of
the last statement sent in one call with multiple statements.
JDBC requires it to return an array with the update counts of
all statements in the batch. Even though the batch processing
feature currently provides no performance improvement, it should
not be removed from the driver for reasons of backward
compatibility.

The current implementation of Statement.executeBatch() in
PostgreSQL starts a new transaction and commits or aborts it.
This is not in compliance with the JDBC specification, which
does not mention transactions in the description of
Statement.executeBatch() at all. The confusion is probably
caused by a JDBC tutorial from Sun with example code which
disables autocommit before calling executeBatch "so that the
transaction will not be automatically committed or rolled back
when the method executeBatch is called". This comment in the
tutorial appears to be a misunderstanding. A good reason to
disable autocommit before calling executeBatch() is to be able
to commit or rollback all statements in a batch as a unit. With
autocommit enabled, the application would not know which
statements had and had not been processed when an exception is
thrown. It is the responsibility of the application, however, to
disable autocommit and to commit or rollback a transaction. Note
that Oracle's implementation of executeBatch() also does not
commit or rollback a transaction implicitly. The implementation
of Statement.executeBatch() in PostgreSQL should be changed to
not begin, commit or rollback a transaction.
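With that change, transaction control would rest entirely with the
application, along these lines (a sketch; the "accounts" table and the
statements are made up for illustration):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchTxDemo {
    // The application, not the driver, decides the transaction
    // boundaries around a batch.
    static void runBatchAtomically(Connection con) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false); // group the batch into one transaction
        Statement st = con.createStatement();
        try {
            st.addBatch(
                "UPDATE accounts SET balance = balance - 100 WHERE id = 1");
            st.addBatch(
                "UPDATE accounts SET balance = balance + 100 WHERE id = 2");
            st.executeBatch();
            con.commit();             // all of the batch ...
        } catch (SQLException e) {
            con.rollback();           // ... or none of it
            throw e;
        } finally {
            st.close();
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```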

Support for BatchUpdateException is not yet implemented. The
implementation of executeBatch is incorrect, therefore, since it
is required to throw a BatchUpdateException if one of the
commands in the batch returns something other than an update
count."
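Once BatchUpdateException is supported, an application can find out how
far a batch got before it failed, roughly like this (a sketch; in
practice the exception is constructed by the driver, not by hand):

```java
import java.sql.BatchUpdateException;

public class BatchErrorDemo {
    // In drivers that stop at the first failing command,
    // getUpdateCounts() holds one count per command that completed
    // before the failure, so its length tells how far the batch got.
    static int processedCount(BatchUpdateException e) {
        int[] counts = e.getUpdateCounts();
        return counts == null ? 0 : counts.length;
    }
}
```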

René Pijlman
