I wrote:

Tom Lane said:
I think this probably needs to be more aggressive though. In a situation of limited SHMMAX it's probably more important to keep shared_buffers as high as we can than to get a high max_connections. We could think about increasing the 5x multiplier, adding Min and/or Max limits, or some combination.


Yes. If we were to base it on the current maxima (1000/100), we could use a factor of 10, or, if on the maxima I am now proposing (4000/250), a factor of 16. Something in that range is about right, I suspect.

In experimenting I needed to set this at 20 for it to bite much. If we wanted to fine-tune it, I'd be inclined to say that we want 20*connections buffers for the first, say, 50 or 100 connections, and 10 or 16 times that for each connection over that. But that might be getting a little too clever - something we should leave to a specialised tuning tool. After all, we try these in fairly discrete jumps anyway. Maybe a simple factor of around 20 would be sufficient.
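
Purely for illustration, here is a rough sketch in C of the tiered rule I have in mind (not a patch - the 20/16/100 constants are placeholders, not settled values):

/*
 * Sketch only: minimum buffers for a given number of connections,
 * using a higher multiplier for the first 100 connections and a
 * lower one for each connection beyond that.
 */
static int
min_buffers_for_connections(int n_connections)
{
    int     threshold = 100;    /* connections covered by the 20x rate */

    if (n_connections <= threshold)
        return 20 * n_connections;

    return 20 * threshold + 16 * (n_connections - threshold);
}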

Leaving aside the question of max_connections, which seems to be the most controversial, is there any objection to the proposal to increase the settings tried for shared_buffers (up to 4000) and max_fsm_pages (up to 200000)? If not, I'll apply a patch for those changes shortly.
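
To make "the settings tried" concrete, here is a sketch of the sort of probing I mean - the candidate list and the test_postmaster_start() helper are hypothetical stand-ins, not what the patch will actually contain:

/*
 * Sketch only: try shared_buffers candidates in discrete jumps,
 * largest first, and keep the first one the system will accept.
 * test_postmaster_start() is a hypothetical helper standing in for
 * whatever check initdb really performs.
 */
extern int test_postmaster_start(int buffers, int connections);    /* hypothetical */

static const int buffer_trials[] = {4000, 3000, 2000, 1000, 500, 200, 100, 50};

static int
probe_shared_buffers(int n_connections)
{
    int     i;

    for (i = 0; i < (int) (sizeof(buffer_trials) / sizeof(buffer_trials[0])); i++)
    {
        if (test_postmaster_start(buffer_trials[i], n_connections))
            return buffer_trials[i];
    }

    return 50;      /* conservative fallback */
}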

cheers

andrew
