Re: [HACKERS] [PATCHES] default resource limits

2006-01-02 Thread Andrew Dunstan



Tom Lane wrote:

> Andrew Dunstan [EMAIL PROTECTED] writes:
>> That's easily fixed, I think. We just need to remember what we have
>> proved works.
>>
>> I can apply the attached patch if you think that's worth doing.
>
> If you like; but if so, remove the comment saying that there's a
> connection between the required list entries.

done.

cheers

andrew



Re: [HACKERS] [PATCHES] default resource limits

2006-01-01 Thread Andrew Dunstan



Tom Lane wrote:

> Andrew Dunstan [EMAIL PROTECTED] writes:
>> In experimenting I needed to set this at 20 for it to bite much. If we
>> wanted to fine tune it I'd be inclined to say that we wanted
>> 20*connections buffers for the first, say, 50 or 100 connections and 10
>> or 16 times for each connection over that. But that might be getting a
>> little too clever - something we should leave to a specialised tuning
>> tool. After all, we try these in fairly discrete jumps anyway. Maybe a
>> simple factor around 20 would be sufficient.
>
> I think 10 is probably a good compromise value.  If we set it to 20
> we'll find make check failing on Darwin because Apple's standard
> SHMMAX value doesn't allow more than about 300 buffers ... and the
> regression tests want max_connections to be at least 20.

Well, we could do something like:

#define MIN_BUFS_FOR_CONNS(nconns) ((nconns) <= 20 ? (nconns) * 10 : 200 + (((nconns) - 20) * 20))
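
Just to make the shape of that concrete, here is a standalone check of what
the formula asks for at a few connection counts (an illustration only, not
part of any posted patch):

#include <stdio.h>

#define MIN_BUFS_FOR_CONNS(nconns) \
	((nconns) <= 20 ? (nconns) * 10 : 200 + (((nconns) - 20) * 20))

int
main(void)
{
	int		conns[] = {10, 20, 50, 100, 200};
	int		i;

	/* prints 100, 200, 800, 1800, 3800 buffers respectively */
	for (i = 0; i < 5; i++)
		printf("%d connections -> %d buffers\n",
			   conns[i], MIN_BUFS_FOR_CONNS(conns[i]));
	return 0;
}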


But I'm happy just to live with 10 :-)



> I noticed while fooling with this on my laptop that initdb was selecting
> a shared_buffers value less than the value it had just proved would work
> :-(.  This is because the list of shared_buffers values it was probing
> did not include all the values corresponding to values tried during the
> max_connections scan.  I've added documentation about that gotcha.

That's easily fixed, I think. We just need to remember what we have 
proved works.


I can apply the attached patch if you think that's worth doing.

Thanks for cleaning this up. The remaining question is whether to leave 
the max connections tried at 100 on all platforms or bump it for those 
that won't hurt from extra semaphore use. I can see arguments both ways 
- I'm less concerned about this than the shared buffers numbers.


cheers

andrew

Index: initdb.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/initdb/initdb.c,v
retrieving revision 1.103
diff -c -r1.103 initdb.c
*** initdb.c	31 Dec 2005 23:50:59 -0000	1.103
--- initdb.c	2 Jan 2006 00:52:58 -0000
***************
*** 1122,1128 ****
  				status,
  				test_conns,
  				test_buffs,
! 				test_max_fsm;
  
  	printf(_("selecting default max_connections ... "));
  	fflush(stdout);
--- 1122,1130 ----
  				status,
  				test_conns,
  				test_buffs,
! 				test_max_fsm,
! 				ok_buffers = 0;
! 	  
  
  	printf(_("selecting default max_connections ... "));
  	fflush(stdout);
***************
*** 1144,1150 ****
--- 1146,1155 ----
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
  		if (status == 0)
+ 		{
+ 			ok_buffers = test_buffs;
  			break;
+ 		}
  	}
  	if (i >= connslen)
  		i = connslen - 1;
***************
*** 1158,1163 ****
--- 1163,1173 ----
  	for (i = 0; i < bufslen; i++)
  	{
  		test_buffs = trial_bufs[i];
+ 		if (test_buffs <= ok_buffers)
+ 		{
+ 			test_buffs = ok_buffers;
+ 			break;
+ 		}
  		test_max_fsm = FSM_FOR_BUFS(test_buffs);
  
  		snprintf(cmd, sizeof(cmd),
***************
*** 1173,1181 ****
  		if (status == 0)
  			break;
  	}
! 	if (i >= bufslen)
! 		i = bufslen - 1;
! 	n_buffers = trial_bufs[i];
  	n_fsm_pages = FSM_FOR_BUFS(n_buffers);
  
  	printf("%d/%d\n", n_buffers, n_fsm_pages);
--- 1183,1189 ----
  		if (status == 0)
  			break;
  	}
! 	n_buffers = test_buffs;
  	n_fsm_pages = FSM_FOR_BUFS(n_buffers);
  
  	printf("%d/%d\n", n_buffers, n_fsm_pages);



Re: [HACKERS] [PATCHES] default resource limits

2006-01-01 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
> That's easily fixed, I think. We just need to remember what we have
> proved works.

> I can apply the attached patch if you think that's worth doing.

If you like; but if so, remove the comment saying that there's a
connection between the required list entries.

regards, tom lane



Re: [HACKERS] [PATCHES] default resource limits

2005-12-31 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
> In experimenting I needed to set this at 20 for it to bite much. If we
> wanted to fine tune it I'd be inclined to say that we wanted
> 20*connections buffers for the first, say, 50 or 100 connections and 10
> or 16 times for each connection over that. But that might be getting a
> little too clever - something we should leave to a specialised tuning
> tool. After all, we try these in fairly discrete jumps anyway. Maybe a
> simple factor around 20 would be sufficient.

I think 10 is probably a good compromise value.  If we set it to 20
we'll find make check failing on Darwin because Apple's standard
SHMMAX value doesn't allow more than about 300 buffers ... and the
regression tests want max_connections to be at least 20.
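
Spelling the arithmetic out (a standalone sketch using only the figures above;
the ~300-buffer ceiling is the rough number quoted here, not something
re-measured):

#include <stdio.h>

int
main(void)
{
	int		conns = 20;			/* regression tests need at least this many */
	int		darwin_cap = 300;	/* approx. buffers that fit under Apple's default SHMMAX */

	/* factor 20 -> 400 buffers, over the cap, so make check would fail;
	 * factor 10 -> 200 buffers, which still fits */
	printf("factor 20: %d buffers vs cap %d\n", conns * 20, darwin_cap);
	printf("factor 10: %d buffers vs cap %d\n", conns * 10, darwin_cap);
	return 0;
}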

I noticed while fooling with this on my laptop that initdb was selecting
a shared_buffers value less than the value it had just proved would work
:-(.  This is because the list of shared_buffers values it was probing
did not include all the values corresponding to values tried during the
max_connections scan.  I've added documentation about that gotcha.

regards, tom lane



Re: [HACKERS] [PATCHES] default resource limits

2005-12-27 Thread Andrew Dunstan



I wrote:

>> You probably need to fix the max-connections pass so that it applies the
>> same changes to max_fsm_pages as the second pass does --- otherwise, its
>> assumption that shared_buffers can really be set that way will be wrong.
>> Other than that I didn't see any problem with the shared_buffers part of
>> the patch.
>
> revised patch attached, leaving max_connections alone except as above.

committed, along with minor docs change.

The open question is whether to try more connections, on some or all
platforms.


cheers

andrew



Re: [HACKERS] [PATCHES] default resource limits

2005-12-26 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
> Tom Lane said:
>> The existing initdb code actually does try to scale them in sync to
>> some extent ---

> Yes, I know. What I meant was that we could try using one phase
> rather than two. But that's only one possible approach.

I think that's a bad idea, mainly because max_connections is constrained
by more things than just SHMMAX.  In a scenario where the number of
semaphores constrains max_connections, you'd probably end up failing to
push shared_buffers up as high as it could be.

regards, tom lane



Re: [HACKERS] [PATCHES] default resource limits

2005-12-26 Thread Andrew Dunstan


I wrote:

> Tom Lane said:
>> I think this probably needs to be more aggressive though.  In a
>> situation of limited SHMMAX it's probably more important to keep
>> shared_buffers as high as we can than to get a high max_connections. We
>> could think about increasing the 5x multiplier, adding Min and/or Max
>> limits, or some combination.
>
> Yes. If we were to base it on the current maxima (1000/100), we could use a
> factor of 10, or if on the maxima I am now proposing (4000/250) a factor of
> 16. Something in that range is about right I suspect.

In experimenting I needed to set this at 20 for it to bite much. If we
wanted to fine tune it I'd be inclined to say that we wanted
20*connections buffers for the first, say, 50 or 100 connections and 10
or 16 times for each connection over that. But that might be getting a
little too clever - something we should leave to a specialised tuning
tool. After all, we try these in fairly discrete jumps anyway. Maybe a
simple factor around 20 would be sufficient.


Leaving aside the question of max_connections, which seems to be the
most controversial, is there any objection to the proposal to increase
the settings tried for shared_buffers (up to 4000) and max_fsm_pages (up
to 200000)? If not, I'll apply a patch for those changes shortly.


cheers

andrew



Re: [HACKERS] [PATCHES] default resource limits

2005-12-26 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
> In experimenting I needed to set this at 20 for it to bite much. If we
> wanted to fine tune it I'd be inclined to say that we wanted
> 20*connections buffers for the first, say, 50 or 100 connections and 10
> or 16 times for each connection over that. But that might be getting a
> little too clever - something we should leave to a specialised tuning
> tool. After all, we try these in fairly discrete jumps anyway. Maybe a
> simple factor around 20 would be sufficient.

I was thinking of a linear factor plus clamps to minimum and maximum
values --- does that make it work any better?

> Leaving aside the question of max_connections, which seems to be the
> most controversial, is there any objection to the proposal to increase
> the settings tried for shared_buffers (up to 4000) and max_fsm_pages (up
> to 200000)? If not, I'll apply a patch for those changes shortly.

You probably need to fix the max-connections pass so that it applies the
same changes to max_fsm_pages as the second pass does --- otherwise, its
assumption that shared_buffers can really be set that way will be wrong.
Other than that I didn't see any problem with the shared_buffers part of
the patch.

regards, tom lane



Re: [HACKERS] [PATCHES] default resource limits

2005-12-26 Thread Andrew Dunstan



Tom Lane wrote:

> I was thinking of a linear factor plus clamps to minimum and maximum
> values --- does that make it work any better?

Can you suggest some factor/clamp values? Obviously it would be
reasonable to set the max clamp at the max shared_buffers size we would
test in the next step, but I'm not sure I see a need for a minimum - all
the factors I'm thinking of (or any factor above 10) would make us
exceed our current minimum (100) in all cases anyway.
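
For the sake of discussion, here is the kind of thing "a linear factor plus
clamps" could look like, using only numbers already on the table (factor 10,
the current minimum of 100, the proposed 4000 maximum). This is a sketch, not
a worked-out proposal; the helper macros are spelled out so the snippet stands
alone, though in initdb.c one would presumably just use the Min/Max macros
from c.h:

#define BUF_MAX(x, y)	((x) > (y) ? (x) : (y))
#define BUF_MIN(x, y)	((x) < (y) ? (x) : (y))

/* e.g. 10 conns -> 100, 100 conns -> 1000, 400 or more conns -> clamped at 4000 */
#define BUFS_FOR_CONNS(nconns) \
	BUF_MIN(4000, BUF_MAX(100, (nconns) * 10))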



> You probably need to fix the max-connections pass so that it applies the
> same changes to max_fsm_pages as the second pass does --- otherwise, its
> assumption that shared_buffers can really be set that way will be wrong.
> Other than that I didn't see any problem with the shared_buffers part of
> the patch.

OK, will do.

cheers

andrew



Re: [HACKERS] [PATCHES] default resource limits

2005-12-26 Thread Andrew Dunstan



Tom Lane wrote:

>> Leaving aside the question of max_connections, which seems to be the
>> most controversial, is there any objection to the proposal to increase
>> the settings tried for shared_buffers (up to 4000) and max_fsm_pages (up
>> to 200000)? If not, I'll apply a patch for those changes shortly.
>
> You probably need to fix the max-connections pass so that it applies the
> same changes to max_fsm_pages as the second pass does --- otherwise, its
> assumption that shared_buffers can really be set that way will be wrong.
> Other than that I didn't see any problem with the shared_buffers part of
> the patch.

revised patch attached, leaving max_connections alone except as above.

I'll apply this in a day or two, barring objection.

cheers

andrew
Index: src/bin/initdb/initdb.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/initdb/initdb.c,v
retrieving revision 1.101
diff -c -r1.101 initdb.c
*** src/bin/initdb/initdb.c	9 Dec 2005 15:51:14 -0000	1.101
--- src/bin/initdb/initdb.c	26 Dec 2005 22:44:09 -0000
***************
*** 120,125 ****
--- 120,126 ----
  /* defaults */
  static int	n_connections = 10;
  static int	n_buffers = 50;
+ static int	n_fsm_pages = 20000;
  
  /*
   * Warning messages for authentication methods
***************
*** 1084,1089 ****
--- 1085,1097 ----
  }
  
  /*
+  * max_fsm_pages setting used in both the shared_buffers and max_connections
+  * tests.
+  */
+ 
+ #define TEST_FSM(x) ( (x) > 1000 ? 50 * (x) : 20000 )
+ 
+ /*
   * check how many connections we can sustain
   */
  static void
***************
*** 1100,1111 ****
  
  	for (i = 0; i < len; i++)
  	{
  		snprintf(cmd, sizeof(cmd),
  				 "%s\"%s\" -boot -x0 %s "
  				 "-c shared_buffers=%d -c max_connections=%d template1 "
  				 "< \"%s\" > \"%s\" 2>&1%s",
  				 SYSTEMQUOTE, backend_exec, boot_options,
! 				 conns[i] * 5, conns[i],
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
  		if (status == 0)
--- 1108,1124 ----
  
  	for (i = 0; i < len; i++)
  	{
+ 		int test_buffs = conns[i] * 5;
+ 		int test_max_fsm = TEST_FSM(test_buffs);
+ 
  		snprintf(cmd, sizeof(cmd),
  				 "%s\"%s\" -boot -x0 %s "
+ 				 "-c max_fsm_pages=%d "
  				 "-c shared_buffers=%d -c max_connections=%d template1 "
  				 "< \"%s\" > \"%s\" 2>&1%s",
  				 SYSTEMQUOTE, backend_exec, boot_options,
! 				 test_max_fsm,
! 				 test_buffs, conns[i],
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
  		if (status == 0)
***************
*** 1125,1146 ****
  test_buffers(void)
  {
  	char		cmd[MAXPGPATH];
! 	static const int bufs[] = {1000, 900, 800, 700, 600, 500,
! 	400, 300, 200, 100, 50};
  	static const int len = sizeof(bufs) / sizeof(int);
  	int			i,
! 				status;
  
! 	printf(_("selecting default shared_buffers ... "));
  	fflush(stdout);
  
  	for (i = 0; i < len; i++)
  	{
  		snprintf(cmd, sizeof(cmd),
  				 "%s\"%s\" -boot -x0 %s "
  				 "-c shared_buffers=%d -c max_connections=%d template1 "
  				 "< \"%s\" > \"%s\" 2>&1%s",
  				 SYSTEMQUOTE, backend_exec, boot_options,
  				 bufs[i], n_connections,
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
--- 1138,1167 ----
  test_buffers(void)
  {
  	char		cmd[MAXPGPATH];
! 	static const int bufs[] = {
! 	  4000, 3500, 3000, 2500, 2000, 1500,
! 	  1000, 900, 800, 700, 600, 500,
! 	  400, 300, 200, 100, 50
! 	};
  	static const int len = sizeof(bufs) / sizeof(int);
  	int			i,
! 				status,
! 				test_max_fsm_pages;
  
! 	printf(_("selecting default shared_buffers/max_fsm_pages ... "));
  	fflush(stdout);
  
  	for (i = 0; i < len; i++)
  	{
+ 		test_max_fsm_pages = TEST_FSM(bufs[i]);
+ 
  		snprintf(cmd, sizeof(cmd),
  				 "%s\"%s\" -boot -x0 %s "
+ 				 "-c max_fsm_pages=%d "
  				 "-c shared_buffers=%d -c max_connections=%d template1 "
  				 "< \"%s\" > \"%s\" 2>&1%s",
  				 SYSTEMQUOTE, backend_exec, boot_options,
+ 				 test_max_fsm_pages,
  				 bufs[i], n_connections,
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
***************
*** 1150,1157 ****
  	if (i >= len)
  		i = len - 1;
  	n_buffers = bufs[i];
  
! 	printf("%d\n", n_buffers);
  }
  
  /*
--- 1171,1179 ----
  	if (i >= len)
  		i = len - 1;
  	n_buffers = bufs[i];
+ 	n_fsm_pages = test_max_fsm_pages;
  
! 	printf("%d/%d\n", n_buffers, n_fsm_pages);
  }
  
  /*
***************
*** 1177,1182 ****
--- 1199,1207 ----
  	snprintf(repltok, sizeof(repltok), "shared_buffers = %d", n_buffers);
  	conflines = replace_token(conflines, "#shared_buffers = 1000", repltok);
  
+ 	snprintf(repltok, sizeof(repltok), "max_fsm_pages = %d", n_fsm_pages);
+ 	conflines = replace_token(conflines, "#max_fsm_pages = 20000", repltok);
+ 
  #if DEF_PGPORT != 5432
  	snprintf(repltok, sizeof(repltok), "#port = %d", DEF_PGPORT);
  	conflines = replace_token(conflines, "#port = 5432", repltok);

Re: [HACKERS] [PATCHES] default resource limits

2005-12-25 Thread Andrew Dunstan
Tom Lane said:
> Andrew Dunstan [EMAIL PROTECTED] writes:
>> Maybe we need to split this into two pieces, given Tom's legitimate
>> concern about semaphore use. How about we increase the allowed range
>> for shared_buffers and max_fsm_pages, as proposed in my patch, and
>> leave the max_connections issue on the table? I also wondered if
>> instead of first setting max_connections and then
>> shared_buffers/max_fsm_pages, we should try to scale them in synch
>> somehow.
>
> The existing initdb code actually does try to scale them in sync to
> some extent --- take a closer look at the arguments being passed during
> the max-connections test phase. It won't choose a large
> max_connections unless it can simultaneously get 5 times that many
> shared_buffers.

Yes, I know. What I meant was that we could try using one phase
rather than two. But that's only one possible approach.

> I think this probably needs to be more aggressive though.  In a
> situation of limited SHMMAX it's probably more important to keep
> shared_buffers as high as we can than to get a high max_connections. We
> could think about increasing the 5x multiplier, adding Min and/or Max
> limits, or some combination.

Yes. If we were to base it on the current maxima (1000/100), we could use a
factor of 10, or if on the maxima I am now proposing (4000/250) a factor of
16. Something in that range is about right I suspect.

cheers

andrew






Re: [HACKERS] [PATCHES] default resource limits

2005-12-24 Thread Andrew Dunstan


[moving to -hackers]

Peter Eisentraut wrote:

> On Saturday 24 December 2005 00:20, Andrew Dunstan wrote:
>> The rationale is one connection per apache thread (which on Windows
>> defaults to 400). If people think this is too many I could live with
>> winding it back a bit - the default number of apache workers on Unix is
>> 250, IIRC.
>
> It's 150.  I don't mind increasing the current 100 to 150, although I find
> tying this to apache pretty bogus.

According to
http://httpd.apache.org/docs/2.0/mod/mpm_common.html#maxclients the
default for the prefork MPM, which is the default on Unix, is 256. 400
appears to be what is used for hybrid MPMs like worker, which is not the
default for any platform. The default Windows MPM (mpm_winnt) is
apparently governed by the ThreadsPerChild setting, which defaults to
64, not 400 as I previously stated.

> I really don't like the prospect of making the defaults platform specific,
> especially if the only rationale for that would be "apache does it".  Why
> does apache allocate more connections on Windows anyway?

It uses a *very* different engine.

Maybe referring to apache is not ideal, although playing nicely with a
very common client doesn't strike me as totally bogus either.

But what is the rationale for the current settings, or for anything else
that might be proposed? I have yet to hear any. Is there anyone who
thinks that 1000/20000 for shared_buffers/max_fsm_pages is a good set of
defaults?

Maybe we need to split this into two pieces, given Tom's legitimate
concern about semaphore use. How about we increase the allowed range for
shared_buffers and max_fsm_pages, as proposed in my patch, and leave the
max_connections issue on the table? I also wondered if instead of first
setting max_connections and then shared_buffers/max_fsm_pages, we should
try to scale them in synch somehow.


cheers

andrew







Re: [HACKERS] [PATCHES] default resource limits

2005-12-24 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
> Maybe we need to split this into two pieces, given Tom's legitimate
> concern about semaphore use. How about we increase the allowed range for
> shared_buffers and max_fsm_pages, as proposed in my patch, and leave the
> max_connections issue on the table? I also wondered if instead of first
> setting max_connections and then shared_buffers/max_fsm_pages, we should
> try to scale them in synch somehow.

The existing initdb code actually does try to scale them in sync to some
extent --- take a closer look at the arguments being passed during the
max-connections test phase.  It won't choose a large max_connections
unless it can simultaneously get 5 times that many shared_buffers.
I think this probably needs to be more aggressive though.  In a
situation of limited SHMMAX it's probably more important to keep
shared_buffers as high as we can than to get a high max_connections.
We could think about increasing the 5x multiplier, adding Min and/or Max
limits, or some combination.

BTW, I fat-fingered the calculations I was doing last night --- the
actual shmem consumption in CVS tip seems to be more like 17K per
max_connection increment, assuming max_locks_per_connection = 64.

regards, tom lane



Re: [HACKERS] [PATCHES] default resource limits

2005-12-24 Thread Andrew Dunstan



Tom Lane wrote:

> BTW, I fat-fingered the calculations I was doing last night --- the
> actual shmem consumption in CVS tip seems to be more like 17K per
> max_connection increment, assuming max_locks_per_connection = 64.

ITYM max_locks_per_transaction (which as the docs say is confusingly named).

So if we went to 256, say, as an upper limit on max_connections, that
would account for an extra 2.6MB of memory use - a pretty modest
increase, really.
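
The back-of-the-envelope version of that, using the rough 17K-per-slot figure
quoted above (approximate numbers only):

#include <stdio.h>

int
main(void)
{
	int		per_conn_kb = 17;	/* approx. shmem per max_connections slot, per the figure above */
	int		cur_max = 100;
	int		new_max = 256;

	/* (256 - 100) * 17kB = 2652kB, i.e. roughly 2.6MB of extra shared memory */
	printf("extra shared memory: %d kB\n", (new_max - cur_max) * per_conn_kb);
	return 0;
}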



cheers

andrew



Re: [HACKERS] [PATCHES] default resource limits

2005-12-24 Thread Andrew Dunstan



Robert Treat wrote:

> Maybe we should write something in to check if apache is installed if we're so
> concerned about that usage...

Er, yeah, I'll get right on that. (Don't hold your breath.)

> I already know that I set the connection limits
> lower on most of the installations I do (given that most installations are
> not production webservers).

So do I. In fact, even on production web servers I usually use
connection pooling and can rarely get an app to approach saturating a
pool size of around 20 let alone 100. But then you and I know something
about tuning Postgres.  What I am aiming for is something that is closer
to the norm on out of the box configuration.

> There is also the argument to be made that just
> because systems these days have more memory doesn't mean we have to use it.

Just because we can run with very little memory doesn't mean we have to.
What is the point of having lots of memory if you don't use it? We are
talking defaults here. initdb will still scale down on resource-starved
machines.


cheers

andrew



Re: [HACKERS] [PATCHES] default resource limits

2005-12-24 Thread Andrew Dunstan

[moved to -hackers]

Petr Jelinek said:
> Andrew Dunstan wrote:
>> Just because we can run with very little memory doesn't mean we have to.
>> What is the point of having lots of memory if you don't use it? We are
>> talking defaults here. initdb will still scale down on
>> resource-starved machines.
>
> Why not just give user few configs tuned to different things like mysql
> does ? Or better, make it initdb option so it will try to set
> higher/lower limits depending on type of config.



And what settings will be tried by distros that automatically run initdb on
first startup?

I don't mind having an initdb option that tunes the settings tried, but that
doesn't remove the need to choose some defaults.

I'm not sure that I think mysql's setup is a good example to follow.

cheers

andrew






Re: [HACKERS] [PATCHES] default resource limits

2005-12-23 Thread Andrew Dunstan



Tom Lane wrote:

> daveg [EMAIL PROTECTED] writes:
>> I don't understand the motivation for so many connections by default, it
>> seems wasteful in most cases.
>
> I think Andrew is thinking about database-backed Apache servers ...
>
> Some quick checks say that CVS tip's demand for shared memory increases
> by about 26kB per max_connection slot. (Almost all of this is lock table
> space; very possibly we could afford to decrease max_locks_per_connection
> when max_connections is large, to buy back some of that.)  So boosting
> the default from 100 to 400 would eat an additional 7.8MB of shared
> memory if we don't do anything with max_locks_per_connection.  This is
> probably not a lot on modern machines.
>
> A bigger concern is the increase in semaphores or whatever the local
> platform uses instead.  I'd be *real* strongly tempted to bound the
> default at 100 on Darwin, for example, because on that platform each
> semaphore is an open file that has to be passed down to every backend.
> Uselessly large max_connections therefore slows backend launch and
> risks running the whole machine out of filetable slots.  I don't know
> what the story is on Windows but it might have some problems with large
> numbers of semas too --- anyone know?
>
> Also, some more thought needs to be given to the tradeoff between
> shared_buffers and max_connections.  Given a constrained SHMMAX value,
> I think the patch as-is will expend too much space on connections and
> not enough on buffers --- the * 5 in test_connections() probably needs
> a second look.

All very good points. I didn't try to change the existing logic much.

I think we need to take this back to -hackers to get discussion on 
tuning the way initdb selects the defaults.


Here are some questions:
. Do we want/need to make the behaviour platform specific?
. The patch put max_fsm_pages into the mix, and above it's suggested we 
look at max_locks_per_connection. Is there anything else we should add?


My goal here is to pick reasonable defaults, not to tune the 
installation highly. I think we tend to err on the side of being too 
conservative today, but certainly it's possible to err the other way too.


cheers

andrew

