> I’m having this very disturbing problem. I got a table with about
> 100,000 rows in it. Our software deletes the majority of these rows
> and then bulk loads another 100,000 rows into the same table. All this
> is happening within a single transaction. I then perform a simple
> “select count(*) from …” statement that never returns. In the
> meantime, the backend Postgres process is taking close to 100% of the
> CPU. The hang-up does not always happen on the same statement, but
> eventually it happens 2 out of 3 times. If I dump and then restore the
> schema where this table resides, the problem is gone until the next
> time we run through the whole process of deleting, loading and
> querying the table.
> There is no other activity in the database. All requested locks are
> granted.

COUNT(*) is always slow in Postgres, since it has to scan the whole table to check row visibility. But either way, if the process is deleting and then adding records, can't you just keep track of how many records you loaded (aka count++) rather than turning around and asking the database before any statistics have had a chance to be updated?
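For what it's worth, here is a minimal sketch of that bookkeeping, assuming Python with psycopg2 and a hypothetical items table with made-up columns; the delete count comes straight from cursor.rowcount and the load count from the loop itself, so there is never a need to turn around and ask COUNT(*):

import psycopg2

# Hypothetical connection string, table, and data -- adjust to taste.
conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

old_batch = 41                            # batch being replaced (made up)
new_rows = [(42, "alpha"), (42, "beta")]  # stand-in for the real bulk data

# Delete the old rows inside the transaction; rowcount says how many went.
cur.execute("DELETE FROM items WHERE batch_id = %s", (old_batch,))
deleted = cur.rowcount

# Bulk load the new rows, counting as we go (the count++ idea).
loaded = 0
for row in new_rows:
    cur.execute("INSERT INTO items (batch_id, payload) VALUES (%s, %s)", row)
    loaded += 1

conn.commit()
print("deleted %d, loaded %d" % (deleted, loaded))

And once a bulk load like that has committed, a plain ANALYZE (or VACUUM ANALYZE) on the table gives the planner fresh statistics; if an approximate count is good enough afterwards, the planner's own estimate in pg_class.reltuples can be read essentially for free.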