"scott.marlowe" <[EMAIL PROTECTED]> writes:
> ... The original choice of 32 was made because the original 
> choice of 64 shared memory buffers was the most we could hope for on common 
> OS installs.  Now that we're looking at cranking that up to 1000, 
> shouldn't max_connections get a look too?

Actually I think max_connections at 32 was set because of SEMMAX limits,
and had only the most marginal connection to shared_buffers (anyone care
to troll the archives to check?).  But sure, let's take another look at
the realistic limits today.

> ... If he starts running out of semaphores, that's a 
> problem he can address while his database is still up and running in most 
> operating systems, at least in the ones I use.

Back in the day, this took a kernel rebuild and system reboot to fix.
If this has changed, great ... but on exactly which Unixen can you
alter SEMMAX on the fly?
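
For what it's worth, on modern Linux the System V semaphore limits are
exposed via sysctl and can be inspected, and raised, at runtime without a
reboot.  A minimal sketch (the values in the commented write are purely
illustrative, not recommendations, and the write itself requires root):

```shell
# The four fields of kernel.sem are: SEMMSL SEMMNS SEMOPM SEMMNI
# (max semas per set, max semas system-wide, max ops per semop call,
#  max number of semaphore sets).
cat /proc/sys/kernel/sem

# To raise the limits on a running system (root required; illustrative values):
# sysctl -w kernel.sem="250 32000 32 128"
```

Whether the BSDs or commercial Unixen of the day allow the same trick is
exactly the open question above.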

> So, my main point is that any setting that requires you to shut down 
> postgresql to make the change, we should pick a compromise value that 
> means you never likely will have to shut down the database once you've 
> started it up and it's under load.

When I started using Postgres, it did not allocate the max number of
semas it might need at startup, but was instead prone to fail when you
tried to open the 17th or 33rd or so connection.  It was universally
agreed to be an improvement to refuse to start at all if we could not
meet the specified max_connections setting.  I don't want to backtrack
from that.  If we can up the default max_connections setting, great ...
but let's not increase the odds of failing under load.

                        regards, tom lane
