Tom Lane wrote:
> The thought behind my suggestion was that the current max_fsm_pages default of 20000 pages is enough to track free space in a database of maybe a few hundred megabytes. The other defaults are sized appropriately for machines with about that much in main memory. This doesn't seem to add up :-(. The default max_fsm_pages probably should be about ten times bigger just to bring it in balance with the other defaults ... after that we could talk about increasing the defaults across-the-board.
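(To put numbers on "a few hundred megabytes", assuming the standard 8 kB block size: 20000 pages * 8 kB/page is about 160 MB of heap whose free space can be tracked, so the suggested tenfold increase to 200000 pages would cover roughly 1.6 GB.)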
Ok, how about this? I based the numbers on your 10*current suggestion and some linear scaling:
Currently, when we test n connections we use shared buffers of n*5. We could add in a setting of max_fsm_pages = n * 1000 in line with that arithmetic - not sure if it's worth it.
When we test n shared buffers, let's add in a max_fsm_pages setting of n * 200.
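Just to spell out the arithmetic (the concrete value of n below is mine, purely for illustration): both rules preserve the same 200:1 ratio of fsm pages to buffers, since 200 * (n * 5) = n * 1000. So if the probe settled on n = 100 connections, initdb would end up writing something like:

    max_connections = 100
    shared_buffers = 500        # n * 5
    max_fsm_pages = 100000      # n * 1000, i.e. 200 per buffer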
Another alternative, which might be better: instead of fixing the default max_fsm_pages at 20000, set it at a fixed ratio (say 200:1) to shared_buffers. Not sure how easy that is to do via the GUC mechanism.
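If wiring a ratio into the GUC machinery turns out to be painful, a cheap approximation would be for initdb to derive the value once, after the probe has picked shared_buffers. A minimal sketch only (the variable names are made up, and appending to postgresql.conf is just the simplest way to show the idea):

    # hypothetical initdb fragment: derive max_fsm_pages from the probed
    # shared_buffers value instead of hard-wiring 20000
    FSM_RATIO=200
    FSM_PAGES=`expr $BUFFERS \* $FSM_RATIO`
    echo "max_fsm_pages = $FSM_PAGES" >> "$PGDATA/postgresql.conf"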
Lastly, I would suggest that we increase the limits we try modestly - adding in 400, 350, 300, 250, 200, and 150 to the number of connections tried, and perhaps 3000, 2500, 2000 and 1500 to the number of buffers tried.
These numbers aren't entirely plucked out of the air. The number of connections is picked to match the number of clients a default apache setup can have under a hybrid MPM, and the number of shared buffers is picked to be somewhat less than the 10% of memory on a modest machine that Peter thought would be too much.
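To make that concrete, here's a schematic of what the two probe loops might look like with those candidates added. The exact postgres invocation and the pre-existing candidate values are assumptions on my part, so treat this as a sketch rather than a patch:

    # first find the largest workable max_connections, then the largest
    # workable shared_buffers at that connection count
    for nconns in 400 350 300 250 200 150 100 50 40 30 20 10 ; do
        nbuffs=`expr $nconns \* 5`
        nfsm=`expr $nconns \* 1000`      # keeps 200 fsm pages per buffer
        if "$PGPATH"/postgres -boot -x 0 -F \
            -c max_connections=$nconns -c shared_buffers=$nbuffs \
            -c max_fsm_pages=$nfsm template1 </dev/null >/dev/null 2>&1
        then
            break
        fi
    done

    for nbuffs in 3000 2500 2000 1500 1000 900 800 700 600 500 400 300 200 100 50 ; do
        nfsm=`expr $nbuffs \* 200`
        if "$PGPATH"/postgres -boot -x 0 -F \
            -c max_connections=$nconns -c shared_buffers=$nbuffs \
            -c max_fsm_pages=$nfsm template1 </dev/null >/dev/null 2>&1
        then
            break
        fi
    done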
cheers

andrew