I am using a simple PostgreSQL 7.3 database in a soft real-time
application.

I have a problem where an update on a record within a (fully indexed)
table containing fewer than ten records needs to occur as quickly as
possible.

Immediately after performing a vacuum, updates take up to 50 milliseconds
to complete. However, update performance degrades over time, such that
after a few hours of continuous updates, each update takes about half a
second. Regular vacuuming improves the performance temporarily, but
during the vacuum operation (which takes up to 2 minutes), performance of
concurrent updates falls below an acceptable level (sometimes > 2
seconds per update).

According to the documentation, PostgreSQL keeps the old versions of
tuples in case they are still needed by other transactions (i.e. each
update actually extends the table). I believe this behaviour is what is
causing my performance problem.
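To illustrate, here is a simplified sketch of the kind of table and update I am dealing with (the table and column names are made up for the example, not my real schema):

```sql
-- A small, fully indexed table similar to mine
CREATE TABLE status (
    id    integer PRIMARY KEY,
    value integer NOT NULL
);

-- As I understand it, each update like this marks the old row
-- version dead and appends a new version, so the table file
-- keeps growing until it is vacuumed:
UPDATE status SET value = value + 1 WHERE id = 1;

-- VACUUM VERBOSE on the table reports how many dead tuples
-- have accumulated since the last vacuum:
VACUUM VERBOSE status;
```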

Is there a way to disable this behaviour, so that an update operation
overwrites the current record in place rather than generating an outdated
tuple each time? (My application does not need transactional support.)
 
I believe this would give me the performance gain I need, and would
also eliminate the need for regular vacuuming.
 
Thanks in advance,

Neil Cooper.

