[EMAIL PROTECTED] wrote:


I have been talking about two types of problems which are both based on PostgreSQL's behavior with frequently updated tables.

Summary table: in the single-row summary table scheme, you have to vacuum
very frequently, and this affects performance.

Frequently updated tables: think about the session table for a website.
Each new user gets a new session row. Every time they refresh or act on the
site, the row is updated. When they leave or their session times out, the
row is deleted. I wrote a RAM-only session manager for PHP because
PostgreSQL couldn't handle the volume (2000 hits a second).
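For concreteness, the session pattern described above might look like this
(table and column names are illustrative, not from the original site):

```sql
-- One row per active session. In PostgreSQL's non-overwriting storage,
-- every UPDATE leaves a dead tuple behind that only VACUUM can reclaim.
CREATE TABLE sessions (
    session_id  text PRIMARY KEY,
    user_data   text,
    last_seen   timestamp NOT NULL DEFAULT now()
);

-- New visitor arrives:
INSERT INTO sessions (session_id, user_data) VALUES ('abc123', '...');

-- Every refresh or action (at 2000 hits/s, that is ~2000 dead tuples/s):
UPDATE sessions SET last_seen = now() WHERE session_id = 'abc123';

-- Session ends or times out:
DELETE FROM sessions WHERE session_id = 'abc123';
```

The trouble is that the dead tuples accumulate between vacuums, so the
table and its indexes bloat even though the live row count stays small.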



It would be interesting to see if the vacuum delay patch, fsm tuning and vacuum scheduling could have changed this situation. Clearly there is an issue here (hence a patch...), but ISTM that just as significant is how difficult it is to know how to configure the various bits and pieces, and to know whether it has been done optimally.
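As a sketch, the knobs involved are roughly these (parameter names are
from an 8.0-era postgresql.conf; the values shown are assumptions that
would need workload-specific tuning, which is exactly the difficulty):

```
# Cost-based vacuum delay (the vacuum delay patch): throttle vacuum's
# I/O so it can run frequently without flattening foreground queries.
vacuum_cost_delay = 10          # ms to sleep once the cost limit is hit
vacuum_cost_limit = 200         # accumulated page-cost before sleeping

# Free space map: must be large enough to remember all the dead space
# found between vacuums, or the table bloats regardless of how often
# vacuum runs.
max_fsm_pages = 200000
max_fsm_relations = 1000
```

Getting these three pieces (delay, fsm sizing, schedule) consistent with
each other is the part that currently requires the magic wand.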

If you have an active site, with hundreds or thousands of hits a second,
vacuuming the table constantly is not practical.

I don't think anyone who has seriously looked at these issues has
concluded that PostgreSQL works fine in these cases. The question is what,
if anything, can be done? The frequent-update issue really hurts
PostgreSQL's acceptance in web applications, an area where MySQL seems to
do a better job.
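"Vacuuming the table constantly" in practice means something like a
tight cron schedule against just the hot table (database and table names
here are hypothetical):

```
# Vacuum the session table every minute; even this may not keep up
# with thousands of updates per second.
* * * * *  vacuumdb --quiet --table=sessions mydb
```

Even at one-minute granularity, a 2000-update/s table accumulates on the
order of 100k dead tuples between runs, which is why this is not really
practical as stated above.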




As an aside, I have had similar issues with DB2 and high-update tables - lock escalations (locklist tuning needed). It is not just non-overwriting storage managers that need the magic tuning wand :-)

regards

Mark
