Josh Berkus wrote:

> 800MB for sort mem? Are you sure you typed that correctly? You must be counting on not having a lot of concurrent queries. It sure will speed up index updating, though!

800MB is correct, yes... There are usually only 10-30 postgres processes active (imagine 5-10 people working on the web front-end while cron jobs access the db occasionally). Very few queries can use such large amounts of memory for sorting, but they do exist.
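For reference, in the 7.x series sort_mem is given in kilobytes and is allocated per sort (or hash) operation, not per connection, so a setting like the one discussed above would look roughly like this (illustrative values only):

```
# postgresql.conf -- illustrative sketch, not a recommendation
# sort_mem is in KB and applies per sort/hash operation;
# 800 MB = 819200 KB. Several concurrent sorts can each
# claim this much, so total use can exceed physical RAM
# unless concurrency is known to be low, as it is here.
sort_mem = 819200
```

Because a single complex query can run several sorts at once, this only stays safe when, as above, you know few such queries run concurrently.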

> I think you might do well to experiment with the commit_delay and commit_siblings settings in order to get more efficient batch processing of updates while selects are going on.

I thought that could improve only concurrent transactions...
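That is correct: commit_delay/commit_siblings implement group commit, so they only pay off when several transactions are committing at roughly the same time. A hedged sketch of the settings (values are hypothetical, not tuned for this workload):

```
# postgresql.conf -- hypothetical group-commit tuning sketch
# commit_delay: microseconds to pause before flushing WAL,
# hoping other transactions can share the same flush.
commit_delay = 10000      # 10 ms
# commit_siblings: minimum number of other concurrently
# active transactions before the delay is applied at all.
commit_siblings = 5
```

With mostly single-writer batch jobs the delay rarely triggers, which is why the benefit is limited to genuinely concurrent write traffic.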

> What have you set max_fsm_relations and max_fsm_pages to? The latter should be very high for you, like 10,000,000.

Good guess ;-) The former is set to 10,000 (I'm not sure how useful that is for those temporary tables).
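Put together, the free-space-map settings from this exchange would read as follows; note that each tracked page consumes a small amount of shared memory at server start, so very large values should be sized deliberately:

```
# postgresql.conf -- values taken from the discussion above
max_fsm_pages = 10000000    # pages with free space tracked cluster-wide
max_fsm_relations = 10000   # distinct relations tracked in the FSM
```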

> For that matter, what *version* of PostgreSQL are you running?


> Also, make sure that your tables get vacuumed regularly.

There is a noticeable difference between a properly vacuumed db (nightly "vacuum full") and a non-vacuumed one and people will start complaining immediately if something goes wrong there...
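A minimal sketch of such a nightly job, assuming a cron-capable host and the standard vacuumdb utility (the database name `mydb` and log path are hypothetical):

```
# crontab entry: nightly VACUUM FULL + ANALYZE at 03:00
0 3 * * * vacuumdb --full --analyze mydb >> /var/log/vacuum.log 2>&1
```

Note that VACUUM FULL takes exclusive locks on the tables it processes, so it belongs in an off-peak window like the one shown.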

> Well, a battery-backed RAID controller with a fast cache would certainly help.
(RAID-5 with 9 15k rpm drives; in hindsight, perhaps we should have tried RAID 0+1.)

> You'll also be glad to know that a *lot* of the improvements in the upcoming PostgreSQL 7.5 are aimed at giving better performance on large, high-activity databases like yours.

That's good to hear...

