UPDATE statements are not completely atomic, in that index entries are updated
separately for each row. A query interchanging two values within a column
declared UNIQUE will therefore fail, because it temporarily attempts to insert
a duplicate. It seems like Postgres validates constraints on indexes each time
a row is updated, rather than once at the end of the statement.
> UPDATE sometable SET unique_col =
>     CASE WHEN unique_col = firstvalue THEN secondvalue
>          ELSE firstvalue
>     END
> WHERE unique_col = firstvalue
>    OR unique_col = secondvalue
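For what it is worth, the failure is easy to reproduce with any engine that checks uniqueness per row as it goes. Here is a minimal sketch using Python's sqlite3 (which also validates UNIQUE immediately per row); the table and column names come from the query above, the concrete values 1 and 2 are assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (unique_col INTEGER UNIQUE)")
conn.executemany("INSERT INTO sometable VALUES (?)", [(1,), (2,)])

# Swapping the two values in one statement trips the unique check,
# because the constraint is validated after each individual row update:
# the first row becomes 2 while the old 2 still exists.
try:
    conn.execute(
        "UPDATE sometable SET unique_col = "
        "CASE WHEN unique_col = 1 THEN 2 ELSE 1 END "
        "WHERE unique_col IN (1, 2)"
    )
    swap_succeeded = True
except sqlite3.IntegrityError:
    swap_succeeded = False

print(swap_succeeded)
```

The statement fails even though the table would satisfy the constraint after all rows were updated, which is exactly the complaint above.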
(See last comment)
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>
> UPDATE sometable ...
> The first is what you want. PostgreSQL needs some work in the
> evaluation of unique indexes to properly support it.
>
> Namely, when it sees a conflict when inserting into the index, it needs
> to record the fact, and revisit the conflict at the end of the command.
> Lots of work...
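Until that happens, the usual workaround is to route one of the rows through a temporary value so that no intermediate state ever duplicates an existing one. A sketch, again with Python's sqlite3; the sentinel value -1 and the id column are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sometable (id INTEGER PRIMARY KEY, unique_col INTEGER UNIQUE)"
)
conn.executemany("INSERT INTO sometable VALUES (?, ?)", [(10, 1), (20, 2)])

# Swap 1 and 2 in three steps via a sentinel (-1, assumed unused),
# so each individual row update passes the immediate unique check.
with conn:  # one transaction: a failure rolls all three steps back
    conn.execute("UPDATE sometable SET unique_col = -1 WHERE unique_col = 1")
    conn.execute("UPDATE sometable SET unique_col = 1 WHERE unique_col = 2")
    conn.execute("UPDATE sometable SET unique_col = 2 WHERE unique_col = -1")

rows = dict(conn.execute("SELECT id, unique_col FROM sometable"))
print(rows)
```

This is the "additional dummy update" mentioned below, which is why people keep looking for something cleaner.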
On Sat, 2003-03-08 at 16:48, daniel alvarez wrote:
> > The first is what you want. PostgreSQL needs some work in the
> > evaluation of unique indexes to properly support it.
> >
> > Namely, when it sees a conflict when inserting into the index, it needs
> > to record the fact, and revisit the conflict at the end of the command.
> There must be a better solution than the additional dummy update.
How about swapping all the other columns, and preserving the identity of
the primary key? Arguably, swapping primary keys is a violation of the
relational model to begin with.
regards, tom lane
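Tom's suggestion can be sketched the same way: leave the key column alone and exchange the non-key data between the two rows. The pk/payload column names are made up for the example; the point is only that the unique column never changes, so no conflict can arise:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (pk INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO sometable VALUES (?, ?)", [(1, "a"), (2, "b")])

# Read both payloads first, then write them back crosswise.
# The key column is never touched, so no unique check can fire.
a = conn.execute("SELECT payload FROM sometable WHERE pk = 1").fetchone()[0]
b = conn.execute("SELECT payload FROM sometable WHERE pk = 2").fetchone()[0]
with conn:
    conn.execute("UPDATE sometable SET payload = ? WHERE pk = 1", (b,))
    conn.execute("UPDATE sometable SET payload = ? WHERE pk = 2", (a,))

rows = dict(conn.execute("SELECT pk, payload FROM sometable"))
print(rows)
```

Whether this is acceptable depends on how many non-key columns there are and on whether anything external holds references to the key values.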
On Thu, Mar 06, 2003 at 08:31:56AM -0600, Mark Mitchell wrote:
> These are select queries that may have been keyed incorrectly that I
> need to cancel. So a transaction won't help in this case.
What about setting a maximum query execution time in the postgresql
configuration? There are some options that help with this.
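One such option is statement_timeout; assuming a server version that supports it, it can be set per session, with the value given in milliseconds (0 disables it):

```sql
SET statement_timeout = 60000;  -- cancel any statement running longer than 60s
```

The same parameter can be set in postgresql.conf to apply to all sessions, though a server-wide timeout will also cancel legitimately long-running queries.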