[EMAIL PROTECTED] (Alison Winters) writes:
>>> Our application requires a number of processes to select and update rows
>>> from a very small (<10 rows) Postgres table on a regular and frequent
>>> basis. These processes often run for weeks at a time, but over the
>>> space of a few days we find that updates start getting painfully slow.
No wonder, considering that your "less than 10 rows" table contains
something upwards of 100000 tuples:
> INFO: --Relation public.plc_fldio--
> INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac
> 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using:
> Free/Avail. Space 445176/371836; EndEmpty/Avail. Pages 0/256.
> CPU 0.04s/0.14u sec elapsed 0.18 sec.
What you need to do is find out why VACUUM is unable to reclaim all
those dead row versions. The reason is likely that some process is
sitting on an open transaction for days at a time.
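A minimal psql sketch of the failure mode (the column name "value" is
hypothetical; the table name comes from the VACUUM output above):

```sql
-- Session 1: open a transaction and then go idle
BEGIN;
SELECT 1;
-- (no COMMIT; this backend now shows as "idle in transaction")

-- Session 2: the application keeps updating the small table
UPDATE plc_fldio SET value = value + 1;   -- repeated frequently

-- Session 2: VACUUM cannot reclaim the dead row versions, because
-- they are still visible to Session 1's old snapshot -- they show
-- up in the "Keep" count instead of being freed
VACUUM VERBOSE plc_fldio;
```

Once Session 1 finally commits or rolls back, a subsequent VACUUM can
reclaim those tuples.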
> Isn't it normal to have processes that keep a single database
> connection open for days at a time?
Database connection, sure. Single transaction, no.
> Regarding the question another poster asked: all the transactions are
> very short.
Somewhere you have one that isn't. Try watching the backends with ps,
or look at the pg_stat_activity view if your version of PG has it,
to see which sessions are staying "idle in transaction" indefinitely.
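For the pg_stat_activity route, a query along these lines should do it
(column names vary across PG versions; the ones shown here are from the
7.x/8.0 era, where an idle transaction reports the literal string
'<IDLE> in transaction'):

```sql
-- List backends that are sitting idle inside an open transaction,
-- oldest first -- the top entry is your likely culprit
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```

Note that stats_command_string must be enabled for current_query to be
populated at all.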
regards, tom lane