Hi All

I understand your point; maybe I didn't make myself clear, or maybe we
didn't understand each other.

My concern is that one PostgreSQL feature is being emulated through a
different feature in Java (hence the subject: PostgreSQL::autocommit vs
JDBC::setAutoCommit). That is, PostgreSQL's "set autocommit to FALSE" is
currently not supported by the server, yet the driver implements
JDBC::setAutoCommit(false) anyway.

Say, in future, PostgreSQL comes with proper support for "set autocommit
to FALSE": will the JDBC team then change the code to use it? And what
about existing applications that depend on the current behavior?
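For context, the server-side behavior at issue can be reproduced in any
psql session (a minimal sketch; the table name t is hypothetical):

```sql
BEGIN;
INSERT INTO t VALUES (1);            -- succeeds
INSERT INTO no_such_table VALUES (1);-- fails with an ERROR
INSERT INTO t VALUES (2);            -- ERROR: current transaction is aborted,
                                     -- commands ignored until end of
                                     -- transaction block
COMMIT;                              -- effectively performs a ROLLBACK
```

This is the database's own behavior, independent of the JDBC driver.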

This could have been handled differently, e.g. by advising in blogs that
applications wrap their JDBC connection queries in BEGIN-END themselves,
with a warning.
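Beyond wrapping statements in BEGIN-END, PostgreSQL itself provides
savepoints as the standard way for a client to recover from a failed
statement without aborting the whole transaction; a sketch (the table t
and its unique constraint are assumed):

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT sp;
INSERT INTO t VALUES (1);      -- fails, e.g. duplicate key
ROLLBACK TO SAVEPOINT sp;      -- clears the error; transaction usable again
INSERT INTO t VALUES (2);
COMMIT;                        -- rows 1 and 2 are committed
```

A driver or application can issue these savepoint commands itself to get
the "continue after failure" behavior other databases show by default.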

Simply put: if the PostgreSQL database does not support it, then the
PostgreSQL JDBC driver should not either; and if the driver still wants
to support it, it needs to do so with the expected behavior only. How
did a different feature end up being used for this?

Basically, the decision/review seems to be wrong; maybe the bug is in
the decision itself.

And the reason we keep this thread going is:

1. Every application developer's expected behavior matches; only the
PostgreSQL JDBC team is not in sync.
2. Every organisation wants its applications to be multi-database
compatible; only the PostgreSQL JDBC team <don't know what to say>.

However, I am looping in -hackers and ending this thread.

Sorry for using hard words (if any), but as an open-source community we
need to see this through.


On Thu, Feb 18, 2016 at 11:03 PM, Kevin Wooten <kd...@me.com> wrote:

> Using ‘psql’ executing your example would yield the same result, a command
> error would cause a required rollback before proceeding.  This tells you
> that this is how PostgreSQL, the database, is designed to work. It has
> nothing to do with the Java driver implementation.
> You are asking the creators of a client driver implementation to change a
> fundamental behavior of the database.  Repeatedly people have suggested you
> take this up with those creating the actual database (that’s the request to
> move this to the ‘-hackers’ list); yet you persist.
> I’m only chiming in because it’s getting quite annoying to have you keep
> this thread alive when the situation has been made quite clear to you.
> On Feb 18, 2016, at 9:57 AM, Sridhar N Bamandlapally <
> sridhar....@gmail.com> wrote:
> There are many reasons why this is required:
> 1. The percentage of clients migrating to Postgres is high.
> 2. To application developers this looks like a bug in Postgres, as it
> throws an exception for the next statement even when the current
> exception was suppressed/handled.
> 3. Most non-financial and data-warehouse applications have a batch
> transaction process where successful transactions go into data tables
> and failed transactions go into error-log tables;
> this is a very common requirement.
> We cannot afford to give a client any reason to consider rolling back
> to the old database or to feel the requirements are not met -- please
> ignore otherwise.
> On Thu, Feb 18, 2016 at 7:06 PM, Mark Rotteveel <m...@lawinegevaar.nl>
> wrote:
>> On Thu, 18 Feb 2016 13:48:04 +0100 (CET), Andreas Joseph Krogh
>> <andr...@visena.com> wrote:
>> > I understand that, and indeed this isn't something that should be
>> > handled by the driver; however, some of the responses in this thread
>> > seem to think it is an absurd expectation from the OP that failure of
>> > one statement should still allow a commit. Which it isn't if you look
>> > at what other database systems do.
>> >
>> > Mark
>> >
>> > If that one failed statement doesn't raise an exception, how does
>> > the client (code) know that it failed? If it does raise an
>> > exception, then what standard specifies that that specific exception
>> > is to be treated as "don't rollback for this type of error"?
>> Of course an exception is raised, but the exact handling could then be
>> left to the client. For example the client could catch the exception,
>> decide based on the specific error to execute another statement to "fix"
>> the error condition and then commit. Think of INSERT, duplicate key, then
>> UPDATE before the existence of 'UPSERT'-like statements; if the occurrence
>> of duplicate key is rare it can be cheaper to do than to first SELECT to
>> check for existence and then INSERT or UPDATE, or to UPDATE, INSERT when
>> update count = 0. Another situation could be where the failure is not
>> important (eg it was only a log entry that is considered supporting, not
>> required), so the exception is ignored and the transaction as a whole is
>> committed.
>> Sure, in most cases it is abusing exceptions for flow control and likely
>> an example of bad design, but the point is that it is not outlandish to
>> allow execution of other statements and eventually a commit of a
>> transaction even if one or more statements failed in that transaction; as
>> demonstrated by systems that do allow this (for SQL Server you need to set
>> XACT_ABORT mode on to get similar behavior as PostgreSQL).
>> As to standards, for batch execution, JDBC expects that a driver either
>> process up to the first failure and raise a BatchUpdateException with the
>> update counts of the successfully executed statements, or continue
>> processing after failure(s) and only raise the exception after processing
>> the remainder of the batch (where the exception contains a mix of update
>> counts + failure indications). In both cases a commit for the statements
>> that were processed successfully would still be possible if the client so
>> wishes (see section 14.1.3 "Handling Failures during Execution" of JDBC
>> 4.2).
>> Mark
>> --
>> Sent via pgsql-jdbc mailing list (pgsql-j...@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-jdbc
