> On Wed, Jun 09, 2004 at 13:41:27 -0400,
> [EMAIL PROTECTED] wrote:
>>
>> Sigh, because vacuums take away from performance. Imagine a table that has
>> to be updated on the order of a few thousand times a minute. Think about
>> the drop in performance during the vacuum.
>>
>> On a one row table, vacuum is not so bad, but try some benchmarks on a
>> table with a goodly number of rows.
>
> But you only need to rapidly vacuum the one table that is keeping your
> totals record. This isn't going to be a big hit in performance relative
> to the updates that are going on. You don't need to vacuum the tables
> you are doing the inserts or updates to at that same rate.
>
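To make the single-row case in the quote concrete, here is a minimal sketch (the table and column names are hypothetical, not from the thread). Under MVCC, every UPDATE leaves the old row version behind as a dead tuple, so a counter updated a few thousand times a minute needs that one table vacuumed very aggressively just to keep it from bloating:

    -- Hypothetical single-row totals table.
    CREATE TABLE totals (
        name  text PRIMARY KEY,
        value bigint NOT NULL
    );
    INSERT INTO totals VALUES ('hits', 0);

    -- Each update leaves a dead row version behind until vacuum reclaims it.
    UPDATE totals SET value = value + 1 WHERE name = 'hits';

    -- So this one table has to be vacuumed constantly, e.g. from cron:
    VACUUM totals;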
I have been talking about two types of problems, both rooted in PostgreSQL's behavior with frequently updated tables.

Summary table: in the single-row table approach, you have to vacuum very frequently, and this affects performance.

Frequently updated tables: think about the session table for a website. Each new user gets a new session row. Every time they refresh or act on the site, the row is updated. When they leave or their session times out, the row is deleted. I wrote a RAM-only session manager for PHP because PostgreSQL couldn't handle the volume (2000 hits a second). If you have an active site with hundreds or thousands of hits a second, vacuuming the table constantly is not practical. (See the sketch at the end of this message.)

I don't think anyone who has seriously looked at these issues has concluded that PostgreSQL works fine in these cases. The question is: what, if anything, can be done?

The frequent-update issue really hurts PostgreSQL's acceptance in web applications, an area where MySQL seems to do a better job. IMHO, this issue, a two-phase-commit based replication system, and a real, usable setup/configuration system are all that stand between PostgreSQL and serious enterprise deployment.
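For illustration only (the table and column names below are assumptions, not from the original post), the session pattern looks roughly like this. Every page hit issues an UPDATE, each UPDATE creates a new row version and leaves the old one as a dead tuple, so at hundreds or thousands of hits a second the physical table grows far faster than its live row count unless it is vacuumed continuously:

    -- Hypothetical web session table.
    CREATE TABLE sessions (
        session_id text PRIMARY KEY,
        user_id    integer,
        data       text,
        last_seen  timestamptz NOT NULL DEFAULT now()
    );

    -- New visitor arrives:
    INSERT INTO sessions (session_id, user_id, data)
        VALUES ('abc123', 42, 'serialized state');

    -- Every refresh or action on the site (thousands per second site-wide):
    UPDATE sessions
       SET last_seen = now(), data = 'serialized state'
     WHERE session_id = 'abc123';

    -- Logout or timeout:
    DELETE FROM sessions WHERE session_id = 'abc123';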