> Immediately after performing a vacuum, updates take up to 50
> milliseconds to complete; however, update performance
> degrades over time, such that after a few hours of continuous
> updates, each update takes about half a second. Regular
> vacuuming improves the performance temporarily, but during
> the vacuum operation (which takes up to 2 minutes), the
> performance of concurrent updates falls below an acceptable
> level (sometimes > 2 seconds per update).
You must be doing an enormous number of updates! You can vacuum as often as
you like, and should usually do so at least as often as the time it takes
for 'all' tuples to be updated. So, in your case, every 10 updates? OK,
that seems excessive; how about every 100 updates?

> According to the documentation, PostgreSQL keeps the old
> versions of the tuples in case of use by other transactions
> (i.e. each update is actually extending the table). I believe
> this behaviour is what is causing my performance problem.

Yes, it probably is.

> Is there a way to disable this behaviour such that an update
> operation would overwrite the current record and does not
> generate an outdated tuple each time? (My application does
> not need transactional support.)

No, I don't believe there is. If you really don't need transaction support,
then you might want to reconsider whether Postgres is really the right tool.

M

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
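(A sketch of what "vacuum every so often" could look like in practice: a
cron entry that vacuums just the heavily-updated table. The table name
`mytable` and database `mydb` are made-up examples, and the five-minute
interval is a guess — tune it to roughly how long it takes your workload
to update every row. This is not from the original thread.)

    # crontab entry: vacuum only the hot table, every five minutes
    */5 * * * *  vacuumdb --table=mytable mydb

A plain VACUUM (as opposed to VACUUM FULL) reclaims the dead tuples that
each update leaves behind, so running it frequently keeps the table from
growing between passes.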