It's a standard (indeed, required) best practice of concurrent database programming across any brand of database to ensure that multi-row transactions always acquire the locks they use in a predictable order based on row identities, e.g. for the classic banking debit-credit pair, doing something like this (Java / JDBC, simplified for brevity and clarity):
    PreparedStatement debit = conn.prepareStatement(
        "update account set balance = balance - ? where acc_no = ? and balance > ?");
    debit.setLong(1, amount);
    debit.setLong(2, debit_acct);
    debit.setLong(3, amount);

    PreparedStatement credit = conn.prepareStatement(
        "update account set balance = balance + ? where acc_no = ?");
    credit.setLong(1, amount);
    credit.setLong(2, credit_acct);

    try {
        conn.setAutoCommit(false);
        // always acquire row locks in increasing account number order
        if (credit_acct < debit_acct) {
            credit.executeUpdate();
            if (debit.executeUpdate() < 1)
                throw new SQLException("Insufficient balance");
        } else {
            if (debit.executeUpdate() < 1)
                throw new SQLException("Insufficient balance");
            credit.executeUpdate();
        }
        conn.commit();
    } catch (SQLException e) {
        System.err.println("Oops, transaction failed: " + e.getMessage());
        conn.rollback();
    }

If you're doing straight SQL bulk updates then, as someone suggested, you could use an ORDER BY on a subquery, but I don't know that that is a guarantee: if you're not actually consuming the results in order, the database may technically be allowed to optimize the ordering out from underneath you. The only way to be sure is a cursor or a procedure.

In short, this boils down to learning more about database programming. PG is performing as it should.

Cheers
Dave

On Mon, Jun 14, 2010 at 10:36 AM, Elias Ghanem <e.gha...@acteos.com> wrote:
> Hi,
> Actually I guess the problem is related to the way PG acquires locks
> on the rows that will be updated.
> Suppose the update query will affect 5 rows: A, B, C, D and E.
> Apparently the following scenario is happening:
> 1- Transaction1 locks row A
> 2- Transaction2 locks row B
> 3- Transaction1 updates row A
> 4- Transaction2 updates row B
> 5- Transaction1 *tries* to acquire a lock on row B (and fails because row
> B is still locked by Transaction2)
> 6- Transaction2 *tries* to acquire a lock on row A (and fails because row
> A is still locked by Transaction1)
> Hence the deadlock.
> Is this a plausible explanation of what is going on?
> If yes, what can be done to avoid the deadlock?
> Thanks again.
>
> -------- Original Message --------
> Subject: Dead lock
> Date: Mon, 14 Jun 2010 14:50:43 +0300
> From: Elias Ghanem <e.gha...@acteos.com>
> To: pgsql-performance@postgresql.org
>
> Hi all,
> I have two databases trying to perform an update query at the same time on the
> same table in a third database, using dblink.
> I'm getting a deadlock exception:
> ERROR: deadlock detected
> DETAIL: Process 27305 waits for ShareLock on transaction 55575; blocked by
> process 27304.
> Process 27304 waits for ShareLock on transaction 55576; blocked by process
> 27305.
> HINT: See server log for query details.
> Actually the following function is installed on 2 DBs, DB1 and DB2. This
> function issues an update query on DB3.
> When this function is running simultaneously on DB1 and DB2, it produces a
> deadlock, making one of the functions (on DB1 or DB2) stop with the above
> exception.
> Is this normal? Shouldn't Postgres be able to handle such situations, for
> example: let one transaction wait until the other commits or rolls back, then
> continue with the first transaction?
> Is there a parameter that should be set in postgresql.conf to allow
> handling of concurrent transactions...?
>
> CREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)
> RETURNS VOID AS'
> DECLARE
>     C INTEGER;
>     P ALIAS FOR $1;
>     DUMMY VARCHAR;
> BEGIN
>     C := 0;
>     LOOP
>         EXIT WHEN C > 15;
>         C := C + 1;
>         SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3'', ''UPDATE
>             IN_FICHE_PRODUIT SET VALIDE = 1'');
>         RAISE NOTICE ''%, %'', C, P;
>     END LOOP;
> END;'
> LANGUAGE 'plpgsql';
>
> Thanks for your time.
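
P.S. For the bulk-update case I mentioned above, one way to make the lock order explicit, rather than hoping the planner honors an ORDER BY buried in a subquery, is to take the row locks yourself with SELECT ... FOR UPDATE before updating. A sketch (the account table and the WHERE condition are illustrative, not from the original query):

```sql
BEGIN;

-- FOR UPDATE locks each row as it is returned, so with ORDER BY every
-- transaction that uses this same pattern acquires the row locks in the
-- same acc_no order, which removes the lock-ordering deadlock.
SELECT acc_no
FROM account
WHERE balance > 100
ORDER BY acc_no
FOR UPDATE;

-- The target rows are now locked by this transaction, so the update
-- itself cannot deadlock against a peer following the same pattern.
UPDATE account
SET balance = balance - 100
WHERE balance > 100;

COMMIT;
```

The same idea works inside a plpgsql function: lock first in a fixed key order, then update.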