On Tue, Jun 1, 2010 at 7:10 AM, Hardik Belani <hardikbel...@gmail.com> wrote:
> If we introduce postgres triggers (trigger functions) on some of the
> tables to track insert/update/delete operations (each trigger function
> increments a counter field for every insert, update, and delete
> performed on a set of tables), one or more threads get stuck waiting
> on a lock while executing the query, to the point that sometimes the
> tables cannot be updated even from pgadmin.

Well, if you have a lot of concurrent backends trying to update the
same counters, it is pretty reasonable to expect some lock contention.
There are a variety of ways to work around this: insert rows (instead
of updating) and summarize them later, write the data to a flat file
somewhere and summarize it later, use the built-in statistics
mechanisms, etc.  A sketch of the first approach follows.
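As a rough sketch of the insert-and-summarize approach (the table
names my_table and counter_events here are just illustrative, not
anything from the original report):

CREATE TABLE counter_events (
    table_name text NOT NULL,
    op         text NOT NULL,          -- 'INSERT', 'UPDATE', or 'DELETE'
    occurred   timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_counter_event() RETURNS trigger AS $$
BEGIN
    -- Appending a row creates no contention with other writers,
    -- unlike UPDATE counters SET n = n + 1, which serializes every
    -- transaction on the single counter row.
    INSERT INTO counter_events (table_name, op)
    VALUES (TG_TABLE_NAME, TG_OP);
    RETURN NULL;    -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_counter
    AFTER INSERT OR UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE log_counter_event();

-- A periodic job can then roll the log up and truncate it, e.g.:
-- SELECT table_name, op, count(*) FROM counter_events GROUP BY 1, 2;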

Fundamentally the problem is that if transaction A is adding 1 to a
counter and transaction B is adding 1 to the same counter, one must
wait for the other to commit or abort.  If A gets there first and
updates the counter from, say, 37 to 38, B has to wait to see whether
A commits or aborts.  If A aborts, B must add 1 to 37; if A commits, B
must add 1 to 38.  Until A commits or aborts, B doesn't know what
value it's adding 1 *to*.  So every transaction updating that counter
serializes on the counter itself.
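
You can watch this happen with two psql sessions (again, the counters
table here is hypothetical):

-- Session A
BEGIN;
UPDATE counters SET n = n + 1 WHERE id = 1;  -- takes a row lock

-- Session B, concurrently
UPDATE counters SET n = n + 1 WHERE id = 1;  -- blocks on A's row lock

-- Session A
COMMIT;  -- B unblocks and applies its increment to the new value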

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
