Ben wrote:
> My transaction calls the same stored procedure many times over. Over the lifetime of the transaction, that stored procedure slows down by roughly 2 orders of magnitude. The procedure itself tries to look up several strings in dictionary tables, and if the strings aren't there (most of them will be) it inserts them. All those dictionary tables have indexes. After it has converted most of the strings into ids, it does another lookup on a table and if it finds a matching row (which should be the common case) it updates a timestamp column of that row; otherwise, it inserts a new row.
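For readers following along, the pattern Ben describes might look roughly like this. This is only a sketch; the table and column names (`dictionary`, `facts`, `last_seen`, and so on) are invented for illustration:

```sql
-- Hypothetical dictionary lookup-or-insert, as described above.
-- Common case: the string already exists and we just return its id.
CREATE OR REPLACE FUNCTION get_dict_id(p_word text) RETURNS integer AS $$
DECLARE
    v_id integer;
BEGIN
    SELECT id INTO v_id FROM dictionary WHERE word = p_word;
    IF FOUND THEN
        RETURN v_id;
    END IF;
    INSERT INTO dictionary(word) VALUES (p_word);
    RETURN currval('dictionary_id_seq');
END;
$$ LANGUAGE plpgsql;

-- Then the main lookup: touch the timestamp if the row exists,
-- otherwise insert it.  Each UPDATE leaves behind a dead row version.
UPDATE facts SET last_seen = now() WHERE key_id = get_dict_id('somestring');
```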

Which would suggest Heikki's guess was pretty much right and it's dead rows that are causing the problem.

Assuming most updates are to this timestamp, could you try a test case that does everything *except* update the timestamp? If that runs blazingly fast, then we've found the problem.

If that is the problem, there are three areas to look at:
1. Avoid updating the same timestamp more than once (if that's happening)
2. Update timestamps in one go at the end of the transaction (perhaps by loading updates into a temp table).
3. Split the transaction in smaller chunks of activity.
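Option 2 could be sketched like this (again only illustrative; `facts`, `last_seen`, and `pending_touches` are assumed names):

```sql
-- Collect the keys to touch in a temp table instead of updating
-- facts.last_seen row by row throughout the transaction.
CREATE TEMP TABLE pending_touches (key_id integer) ON COMMIT DROP;

-- Inside the procedure, replace the per-row UPDATE with:
INSERT INTO pending_touches (key_id) VALUES (v_key_id);

-- At the end of the transaction, do one set-based update, so each
-- row accumulates at most one dead version instead of many:
UPDATE facts SET last_seen = now()
 WHERE key_id IN (SELECT DISTINCT key_id FROM pending_touches);
```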

> So.... there isn't much table size changing, but there are a lot of updates. Based on pg_stat_user_tables I suspect that the procedure is using indexes more than table scans. Is there a better way to know?

Not really. You can check the plans of queries within the function, but there's no way to capture query plans of running functions.
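One way to compare index use against sequential scans is to read the counters in pg_stat_user_tables directly (note the statistics collector only reflects activity after the transaction reports it, so check after the run):

```sql
-- Per-table scan and write counters from the statistics views.
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
       n_tup_ins, n_tup_upd
  FROM pg_stat_user_tables
 ORDER BY idx_scan DESC;
```

A high `idx_scan` count with low `seq_scan` on the dictionary tables would confirm the lookups are going through the indexes; a climbing `n_tup_upd` on the main table points at the dead-row buildup discussed above.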

--
  Richard Huxton
  Archonet Ltd
