On Tue, 2003-02-11 at 13:01, Tom Lane wrote:
> "Jon Griffin" <[EMAIL PROTECTED]> writes:
> > So it appears that linux at least is way above your 8 meg point, unless I
> > am missing something.
> 
> Yeah, AFAIK all recent Linuxen are well above the range of parameters
> that I was suggesting (and even if they weren't, Linux is particularly
> easy to change the SHMMAX setting on).  It's other Unixoid platforms
> that are likely to have a problem.  Particularly the ones where you
> have to rebuild the kernel to change SHMMAX; people may be afraid to
> do that.

The issue as I see it is:
better-performing vs. more-compatible out-of-the-box defaults.

Perhaps a compromise (hack?):
Set shared_buffers to a default value that performs well, one we all
agree is not too big (16MB? 32MB?). On startup, if the OS can't give us
what we want, instead of failing, retry with a smaller amount, perhaps
half the default; if that fails, halve again, until we reach some bottom
threshold (1MB?).

The argument against this might be: when I set shared_buffers=X, I want
X shared buffers; I don't want it to silently give me less than what I
need / want.  To address this we might add a GUC option that controls
the behavior. So we ship postgresql.conf with 32MB of shared memory and
auto_shared_mem_reduction = true, with a comment that the administrator
may want to turn this off for production.
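The shipped postgresql.conf fragment might then look something like this (auto_shared_mem_reduction is the name proposed above, not an existing GUC):

```
shared_buffers = 4096            # 32MB at 8KB per buffer

# If the OS refuses the shared memory request at startup, repeatedly
# halve it (down to a 1MB floor) instead of failing. Administrators
# who need exactly the configured amount may want to turn this off
# in production.
auto_shared_mem_reduction = true
```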

Thoughts?  

I think this will let most uninformed users get decent-performing
defaults, since most systems will accommodate this larger value.

