Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-17 Thread Nathan Myers

On Thu, Jul 12, 2001 at 11:08:34PM +0200, Peter Eisentraut wrote:
 Nathan Myers writes:
 
  When the system is too heavily loaded (however measured), any further
  login attempts will fail.  What I suggested is, instead of the
  postmaster accept()ing the connection, why not leave the connection
  attempt in the queue until we can afford a back end to handle it?
 
 Because the new connection might be a cancel request.

Supporting cancel requests seems like a poor reason to ignore what
load-shedding support operating systems provide.  

To support cancel requests, it would suffice for PG to listen at 
another socket dedicated to administrative requests.  (It might 
even ignore MaxBackends for connections on that socket.)
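
For illustration, a minimal sketch of that two-socket arrangement -- not
PostgreSQL source; the admin port number and the at_capacity()/handle_*()
helpers are invented stubs:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

static int
make_listener(unsigned short port, int backlog)
{
    int         fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (fd < 0 ||
        bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
        listen(fd, backlog) < 0)
    {
        perror("make_listener");
        exit(1);
    }
    return fd;
}

static int at_capacity(void)      { return 0; }                /* stub: true when MaxBackends reached */
static void handle_admin(int fd)  { if (fd >= 0) close(fd); }  /* stub: process cancel request */
static void handle_client(int fd) { if (fd >= 0) close(fd); }  /* stub: fork a backend, etc. */

int
main(void)
{
    int         regular = make_listener(5432, 128);    /* ordinary client connections */
    int         admin = make_listener(5433, 16);       /* hypothetical admin/cancel socket */

    for (;;)
    {
        fd_set      rfds;
        int         maxfd = (regular > admin) ? regular : admin;

        FD_ZERO(&rfds);
        FD_SET(admin, &rfds);
        if (!at_capacity())         /* when at capacity, leave ordinary connects queued */
            FD_SET(regular, &rfds);
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;
        if (FD_ISSET(admin, &rfds))
            handle_admin(accept(admin, NULL, NULL));
        if (FD_ISSET(regular, &rfds))
            handle_client(accept(regular, NULL, NULL));
    }
}

Leaving the regular socket out of the select() set while at capacity is what
lets the kernel's listen queue (and ultimately the clients' own connect
timeouts) do the load shedding.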

Nathan Myers
[EMAIL PROTECTED]




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-16 Thread Tom Lane

[EMAIL PROTECTED] (Nathan Myers) writes:
 Considering the Apache comment about some systems truncating instead
 of limiting... 10000 & 0xff is 16.  Maybe 10239 would be a better choice, 
 or 16383.  

Hmm.  If the Apache comment is real, then that would not help on those
systems.  Remember that the actual listen request is going to be
2*MaxBackends in practically all cases.  The only thing that would save
you from getting an unexpectedly small backlog parameter in such a case
is to set PG_SOMAXCONN to 255.

Perhaps we should just do that and not worry about whether the Apache
info is accurate or not.  But I'd kind of like to see chapter and verse,
ie, at least one specific system that demonstrably fails to perform the
clamp-to-255 for itself, before we lobotomize the code that way.  ISTM a
conformant implementation of listen() would limit the given value to 255
before storing it into an 8-bit field, not just lose high order bits.
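
(For concreteness, the arithmetic at stake -- a throwaway test program, not
anything from the tree:)

#include <stdio.h>

/* Compare losing the high-order bits with clamping to 255. */
int
main(void)
{
    int         values[] = {5, 255, 511, 512, 10000, 10239, 16383};
    int         i;

    for (i = 0; i < 7; i++)
    {
        int         v = values[i];

        printf("%6d -> truncated to 8 bits: %3d, clamped: %3d\n",
               v, v & 0xff, v > 255 ? 255 : v);
    }
    return 0;
}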


 After doing some more reading, I find that most OSes do not reject
 connect requests that would exceed the specified backlog; instead,
 they ignore the connection request and assume the client will retry 
later.  Therefore, it appears we cannot use a small backlog to shed load 
 unless we assume that clients will time out quickly by themselves.

Hm.  newgate is a machine on my local net that's not currently up.

$ time psql -h newgate postgres
psql: could not connect to server: Connection timed out
Is the server running on host newgate and accepting
TCP/IP connections on port 5432?

real    1m13.33s
user    0m0.02s
sys     0m0.01s
$

That's on HPUX 10.20.  On an old Linux distro, the same timeout
seems to be about 21 seconds, which is still pretty long by some
standards.  Do the TCP specs recommend anything particular about
no-response-to-SYN timeouts?

regards, tom lane




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-16 Thread Nathan Myers

On Sat, Jul 14, 2001 at 11:38:51AM -0400, Tom Lane wrote:
 
 The state of affairs in current sources is that the listen queue
 parameter is MIN(MaxBackends * 2, PG_SOMAXCONN), where PG_SOMAXCONN
 is a constant defined in config.h --- it's 10000, hence a non-factor,
 by default, but could be reduced if you have a kernel that doesn't
 cope well with large listen-queue requests.  We probably won't know
 if there are any such systems until we get some field experience with
 the new code, but we could have configure select a platform-dependent
 value if we find such problems.

Considering the Apache comment about some systems truncating instead
of limiting... 10000 & 0xff is 16.  Maybe 10239 would be a better choice, 
or 16383.  

 So, having thought that through, I'm still of the opinion that holding
 off accept is of little or no benefit to us.  But it's not as simple
 as it looks at first glance.  Anyone have a different take on what the
 behavior is likely to be?

After doing some more reading, I find that most OSes do not reject
connect requests that would exceed the specified backlog; instead,
they ignore the connection request and assume the client will retry 
later.  Therefore, it appears we cannot use a small backlog to shed load 
unless we assume that clients will time out quickly by themselves.

OTOH, maybe it's reasonable to assume that clients will time out,
and that in the normal case authentication happens quickly.

Then we can use a small listen() backlog, and never accept() if we
have more than MaxBackend back ends.  The OS will keep a small queue
corresponding to our small backlog, and the clients will do our load 
shedding for us.

Nathan Myers
[EMAIL PROTECTED]




[HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-14 Thread Tom Lane

mlw [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Passing listen(5) would probably be sufficient for Postgres.
 
 It demonstrably is not sufficient.  Set it that way in pqcomm.c
 and run the parallel regression tests.  Watch them fail.

 That's interesting; I would not have guessed that. I have written a number of
 server applications that handle, literally, over a thousand
 connections/operations a second using only listen(5).

The problem should be considerably reduced in latest sources, since
as of a week or three ago, the top postmaster process' outer loop is
basically just accept() and fork() --- client authentication is now
handled after the fork, instead of before.  Still, we now know that
(a) SOMAXCONN is a lie on many systems, and (b) values as small as 5
are pushing our luck, even though it might not fail so easily anymore.

The state of affairs in current sources is that the listen queue
parameter is MIN(MaxBackends * 2, PG_SOMAXCONN), where PG_SOMAXCONN
is a constant defined in config.h --- it's 10000, hence a non-factor,
by default, but could be reduced if you have a kernel that doesn't
cope well with large listen-queue requests.  We probably won't know
if there are any such systems until we get some field experience with
the new code, but we could have configure select a platform-dependent
value if we find such problems.
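
Spelled out, the computation amounts to roughly this (a paraphrase, not the
literal pqcomm.c source):

#define PG_SOMAXCONN 10000          /* config.h default; lower it for a fussy kernel */

#define Min(a, b)  ((a) < (b) ? (a) : (b))

int
compute_backlog(int MaxBackends)
{
    return Min(MaxBackends * 2, PG_SOMAXCONN);
}

With the 10000 default, the MaxBackends * 2 term is what actually reaches
listen() in practice.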

I believe that this is fine and doesn't need any further tweaking,
pending field experience.  What's still open for discussion is Nathan's
thought that the postmaster ought to stop issuing accept() calls once
it has so many children that it will refuse to fork any more.  I was
initially against that, but on further reflection I think it might be
a good idea after all, because of another recent change related to the
authenticate-after-fork change.  Since the top postmaster doesn't really
know which children have become working backends and which are still
engaged in authentication dialogs, it cannot enforce the MaxBackends
limit directly.  Instead, MaxBackends is checked when the child process
is done with authentication and is trying to join the PROC pool in
shared memory.  The postmaster will spawn up to 2 * MaxBackends child
processes before refusing to spawn more --- this allows there to be
up to MaxBackends children engaged in auth dialog but not yet working
backends.  (It's reasonable to allow extra children since some may fail
the auth dialog, or an extant backend may have quit by the time they
finish auth dialog.  Whether 2*MaxBackends is the best choice is
debatable, but that's what we're using at the moment.)
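
In outline, that bookkeeping looks something like the following -- a rough
sketch of the scheme just described, not the real postmaster code:

/*
 * The postmaster only counts live children; the hard MaxBackends limit is
 * enforced later, when an authenticated child tries to claim a PROC slot
 * in shared memory.
 */
static int  MaxBackends;            /* fixed at postmaster start */
static int  n_children;             /* live children, working or still in auth */

int
postmaster_may_fork(void)
{
    /* allow up to MaxBackends extra children still in the auth dialog */
    return n_children < 2 * MaxBackends;
}

int
child_may_become_backend(int n_active_procs)
{
    /* called in the child once authentication has succeeded */
    if (n_active_procs >= MaxBackends)
        return 0;                   /* report "too many clients" and exit */
    return 1;                       /* claim a PROC slot, become a working backend */
}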

Furthermore, we intend to install a pretty tight timeout on the overall
time spent in auth phase (a few seconds I imagine, although we haven't
yet discussed that number either).

Given this setup, if the postmaster has reached its max-children limit
then it can be quite certain that at least some of those children will
quit within approximately the auth timeout interval.  Therefore, not
accept()ing is a state that will probably *not* persist for long enough
to cause the new clients to timeout.  By not accept()ing at a time when
we wouldn't fork, we can convert the behavior clients see at peak load
from quick rejection into a short delay before authentication dialog.

Of course, if you are at MaxBackends working backends, then the new
client is still going to get a "too many clients" error; all we have
accomplished with the change is to expend a fork() and an authentication
cycle before issuing the error.  So if the intent is to reduce overall
system load then this isn't necessarily an improvement.

IIRC, the rationale for using 2*MaxBackends as the maximum child count
was to make it unlikely that the postmaster would refuse to fork; given
a short auth timeout it's unlikely that as many as MaxBackends clients
will be engaged in auth dialog at any instant.  So unless we tighten
that max child count considerably, holding off accept() at max child
count is unlikely to change the behavior under any but worst-case
scenarios anyway.  And in a worst-case scenario, shedding load by
rejecting connections quickly is probably just what you want to do.

So, having thought that through, I'm still of the opinion that holding
off accept is of little or no benefit to us.  But it's not as simple
as it looks at first glance.  Anyone have a different take on what the
behavior is likely to be?

regards, tom lane




[HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-14 Thread mlw

Tom Lane wrote:
 
 mlw [EMAIL PROTECTED] writes:
  Nathan Myers wrote:
  But using SOMAXCONN blindly is always wrong; that is often 5, which
  is demonstrably too small.
 
  It is rumored that many BSD versions are limited to 5.
 
 BSD systems tend to claim SOMAXCONN = 5 in the header files, but *not*
 to have such a small limit in the kernel.  The real step forward that
 we have made in this discussion is to realize that we cannot trust
 sys/socket.h to tell us what the kernel limit actually is.
 
  Passing listen(5) would probably be sufficient for Postgres.
 
 It demonstrably is not sufficient.  Set it that way in pqcomm.c
 and run the parallel regression tests.  Watch them fail.


That's interesting; I would not have guessed that. I have written a number of
server applications that handle, literally, over a thousand
connections/operations a second using only listen(5).  (I do have it as a
configuration parameter, but have never seen a time when I have had to change
it.)

I figured the closest one could come to an expert in all things socket related
would have to be the Apache web server source. They have a different take on
the listen() parameter:

 from httpd.h 
/* The maximum length of the queue of pending connections, as defined
 * by listen(2).  Under some systems, it should be increased if you
 * are experiencing a heavy TCP SYN flood attack.
 *
 * It defaults to 511 instead of 512 because some systems store it
 * as an 8-bit datatype; 512 truncated to 8-bits is 0, while 511 is
 * 255 when truncated.
 */

#ifndef DEFAULT_LISTENBACKLOG
#define DEFAULT_LISTENBACKLOG 511
#endif


I have not found any other location in which DEFAULT_LISTENBACKLOG is defined,
but it is a configuration parameter, and here is what the Apache docs claim:

 http://httpd.apache.org/docs/mod/core.html 

ListenBacklog directive

Syntax: ListenBacklog backlog
Default: ListenBacklog 511
Context: server config
Status: Core
Compatibility: ListenBacklog is only available in Apache versions after 1.2.0. 

The maximum length of the queue of pending connections. Generally no tuning is
needed or desired, however on some systems it is desirable to increase this
when under a TCP SYN flood attack. See the backlog parameter to the listen(2)
system call. 

This will often be limited to a smaller number by the operating system. This
varies from OS to OS. Also note that many OSes do not use exactly what is
specified as the backlog, but use a number based on (but normally larger than)
what is set.
 


Anyway, why not just do what Apache does: set it to some extreme default
which, even when truncated, is still pretty big, and allow the end user to
change this value in postgresql.conf?
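
Concretely, something along these lines -- a sketch only; "listen_backlog"
is an invented name, not an existing postgresql.conf setting:

#include <sys/socket.h>

#define DEFAULT_LISTEN_BACKLOG 511  /* Apache-style default: truncates to 255, not 0 */

/*
 * Use the configured value if the (hypothetical) listen_backlog setting was
 * given, else the big default; the kernel silently clamps whatever we pass
 * to its own limit (often 128).
 */
int
listen_with_config(int sock, int listen_backlog)
{
    int         backlog = (listen_backlog > 0) ? listen_backlog
                                               : DEFAULT_LISTEN_BACKLOG;

    return listen(sock, backlog);
}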




[HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-13 Thread Nathan Myers

On Fri, Jul 13, 2001 at 07:53:02AM -0400, mlw wrote:
 Zeugswetter Andreas SB wrote:
  I liked the idea of min(MaxBackends, PG_SOMAXCONN), since there is no use in
  accepting more than your total allowed connections concurrently.
 
 I have been following this thread and I am confused why the queue
 argument to listen() has anything to do with Max backends. All the
 parameter to listen does is specify how long a list of sockets open
 and waiting for connection can be. It has nothing to do with the
 number of back end sockets which are open.

Correct.

 If you have a limit of 128 back end connections, and you have 127
 of them open, a listen with queue size of 128 will still allow 128
 sockets to wait for connection before turning others away.

Correct.

 It should be a parameter based on the time out of a socket connection
 vs the ability to answer connection requests within that period of
 time.

It's not really meaningful at all, at present.

 There are two ways to think about this. Either you make this parameter
 tunable to give a proper estimate of the usability of the system, i.e.
 tailor the listen queue parameter to reject sockets when some number
 of sockets are waiting, or you say no one should ever be denied,
 accept everyone and let them time out if we are not fast enough.

 This debate could go on; why not make it a parameter in the config
 file that defaults to some system variable, i.e. SOMAXCONN?

With postmaster's current behavior there is no benefit in setting
the listen() argument to anything less than 1000.  With a small
change in postmaster behavior, a tunable system variable becomes
useful.

But using SOMAXCONN blindly is always wrong; that is often 5, which
is demonstrably too small.

 BTW: on linux, the backlog queue parameter is silently truncated to
 128 anyway.

The 128 limit is common, applied on BSD and Solaris as well.
It will probably increase in future releases.

Nathan Myers
[EMAIL PROTECTED]




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-13 Thread Peter Eisentraut

Nathan Myers writes:

 When the system is too heavily loaded (however measured), any further
 login attempts will fail.  What I suggested is, instead of the
 postmaster accept()ing the connection, why not leave the connection
 attempt in the queue until we can afford a back end to handle it?

Because the new connection might be a cancel request.

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter





AW: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-13 Thread Zeugswetter Andreas SB


 When the system is too heavily loaded (however measured), any further 
 login attempts will fail.  What I suggested is, instead of the 
 postmaster accept()ing the connection, why not leave the connection 
 attempt in the queue until we can afford a back end to handle it?  

Because the clients would time out?

 Then, the argument to listen() will determine how many attempts can 
 be in the queue before the network stack itself rejects them without 
 the postmaster involved.

You cannot change the argument to listen() at runtime, or are you suggesting
to close and reopen the socket when maxbackends is reached?  I think
that would be nonsense.

I liked the idea of min(MaxBackends, PG_SOMAXCONN), since there is no use in 
accepting more than your total allowed connections concurrently.

Andreas




[HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-13 Thread mlw

Nathan Myers wrote:
  There are two ways to think about this. Either you make this parameter
  tunable to give a proper estimate of the usability of the system, i.e.
  tailor the listen queue parameter to reject sockets when some number
  of sockets are waiting, or you say no one should ever be denied,
  accept everyone and let them time out if we are not fast enough.
 
  This debate could go on; why not make it a parameter in the config
  file that defaults to some system variable, i.e. SOMAXCONN?
 
 With postmaster's current behavior there is no benefit in setting
 the listen() argument to anything less than 1000.  With a small
 change in postmaster behavior, a tunable system variable becomes
 useful.
 
 But using SOMAXCONN blindly is always wrong; that is often 5, which
 is demonstrably too small.

It is rumored that many BSD versions are limited to 5.
 
  BTW: on linux, the backlog queue parameter is silently truncated to
  128 anyway.
 
 The 128 limit is common, applied on BSD and Solaris as well.
 It will probably increase in future releases.

The point I am trying to make is that the parameter passed to listen() is
OS-dependent, both in what it means and in its defaults. Trying to tie this to
maxbackends is not the right thought process. It has nothing to do, at all,
with maxbackends.

Passing listen(5) would probably be sufficient for Postgres. Will there ever be
5 sockets in the listen() queue prior to accept()? Probably not.  SOMAXCONN
is a system limit; a listen() value greater than this is probably
silently adjusted down to the defined SOMAXCONN.

Making it a parameter that defaults to SOMAXCONN allows the maximum number of
pending connections the system can handle, while still letting the DBA
fine-tune connection behavior on high-load systems.




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-13 Thread Tom Lane

mlw [EMAIL PROTECTED] writes:
 Nathan Myers wrote:
 But using SOMAXCONN blindly is always wrong; that is often 5, which
 is demonstrably too small.

 It is rumored that many BSD versions are limited to 5.

BSD systems tend to claim SOMAXCONN = 5 in the header files, but *not*
to have such a small limit in the kernel.  The real step forward that
we have made in this discussion is to realize that we cannot trust
sys/socket.h to tell us what the kernel limit actually is.

 Passing listen(5) would probably be sufficient for Postgres.

It demonstrably is not sufficient.  Set it that way in pqcomm.c
and run the parallel regression tests.  Watch them fail.

regards, tom lane




AW: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-12 Thread Zeugswetter Andreas SB


 The question is really whether you ever want a client to get a
 rejected result from an open attempt, or whether you'd rather they 
 got a report from the back end telling them they can't log in.  The 
 second is more polite but a lot more expensive.  That expense might 
 really matter if you have MaxBackends already running.

One of us has probably misunderstood the listen parameter.
It only limits the number of clients that can connect concurrently.
It has nothing to do with the number of clients that are already connected.
It sort of resembles a maximum queue size for the accept loop.
Incoming connections fill the queue; accept() drains it by handing the
connection to a newly forked backend.
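
In other words, the usual accept-and-fork skeleton -- a generic sketch, not
PostgreSQL's actual loop, with error handling trimmed:

#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

void
serve(int listen_fd)
{
    for (;;)
    {
        int         conn = accept(listen_fd, NULL, NULL);   /* dequeue one pending connection */

        if (conn < 0)
            continue;
        if (fork() == 0)
        {
            /* child: this is where a backend would authenticate and do its work */
            close(listen_fd);
            close(conn);
            _exit(0);
        }
        close(conn);                /* parent keeps only the listening socket */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;                       /* reap finished children */
    }
}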

Andreas




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Tom Lane

Ian Lance Taylor [EMAIL PROTECTED] writes:
 But I wouldn't worry about it, and I wouldn't worry about Nathan's
 suggestion for making the limit configurable, because Postgres
 connections don't spend time on the queue.  The postgres server will
 be picking them off as fast as it can.  If the server can't pick
 processes off fast enough, then your system has other problems;

Right.  Okay, it seems like just making it a hand-configurable entry
in config.h.in is good enough for now.  When and if we find that
that's inadequate in a real-world situation, we can improve on it...

regards, tom lane




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Peter Eisentraut

Tom Lane writes:

 The other concern I had could be addressed by making the listen
 parameter be MIN(MaxBackends, PG_SOMAXCONN) where PG_SOMAXCONN
 is set in config.h --- but now we could make the default value
 really large, say 10000.  The only reason to change it would be
 if you had a kernel that barfed on large listen() parameters.

We'll never find that out if we don't try it.  If you're concerned about
cooperating with other listen()ing processes, set it to MaxBackends * 2;
if you're not, set it to INT_MAX and watch.

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter





Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Nathan Myers

On Wed, Jul 11, 2001 at 12:26:43PM -0400, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  Tom Lane writes:
  Right.  Okay, it seems like just making it a hand-configurable entry
  in config.h.in is good enough for now.  When and if we find that
  that's inadequate in a real-world situation, we can improve on it...
 
  Would anything computed from the maximum number of allowed connections
  make sense?
 
 [ looks at code ... ]  Hmm, MaxBackends is indeed set before we arrive
 at the listen(), so it'd be possible to use MaxBackends to compute the
 parameter.  Offhand I would think that MaxBackends or at most
 2*MaxBackends would be a reasonable value.

 Question, though: is this better than having a hardwired constant?
 The only case I can think of where it might not be is if some platform
 out there throws an error from listen() when the parameter is too large
 for it, rather than silently reducing the value to what it can handle.
 A value set in config.h.in would be simpler to adapt for such a platform.

The question is really whether you ever want a client to get a
rejected result from an open attempt, or whether you'd rather they 
got a report from the back end telling them they can't log in.  The 
second is more polite but a lot more expensive.  That expense might 
really matter if you have MaxBackends already running.

I doubt most clients have tested either failure case more thoroughly 
than the other (or at all), but the lower-level code is more likely 
to have been cut-and-pasted from well-tested code. :-)

Maybe PG should avoid accept()ing connections once it has MaxBackends
back ends already running (as hinted at by Ian), so that the listen()
parameter actually has some meaningful effect, and excess connections 
can be rejected more cheaply.  That might also make it easier to respond 
more adaptively to true load than we do now.

 BTW, while I'm thinking about it: why doesn't pqcomm.c test for a
 failure return from the listen() call?  Is this just an oversight,
 or is there a good reason to ignore errors?

The failure of listen() seems impossible.  In the Linux, NetBSD, and 
Solaris man pages, none of the error returns mentioned are possible 
with PG's current use of the function.  It seems as if the most that 
might be needed now would be to add a comment to the call to socket() 
noting that if any other address families are supported (besides 
AF_INET and AF_LOCAL aka AF_UNIX), the call to listen() might need to 
be looked at.  AF_INET6 (which PG will need to support someday)
doesn't seem to change matters.

Probably if listen() did fail, then one or other of bind(), accept(),
and read() would fail too.
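
Even so, the check asked about above would only cost a couple of lines; a
sketch (not the actual pqcomm.c code):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/socket.h>

int
listen_or_complain(int fd, int backlog)
{
    if (listen(fd, backlog) < 0)
    {
        fprintf(stderr, "listen(%d, %d) failed: %s\n",
                fd, backlog, strerror(errno));
        return -1;
    }
    return 0;
}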

Nathan Myers
[EMAIL PROTECTED]




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 Tom Lane writes:
 Right.  Okay, it seems like just making it a hand-configurable entry
 in config.h.in is good enough for now.  When and if we find that
 that's inadequate in a real-world situation, we can improve on it...

 Would anything computed from the maximum number of allowed connections
 make sense?

[ looks at code ... ]  Hmm, MaxBackends is indeed set before we arrive
at the listen(), so it'd be possible to use MaxBackends to compute the
parameter.  Offhand I would think that MaxBackends or at most
2*MaxBackends would be a reasonable value.

Question, though: is this better than having a hardwired constant?
The only case I can think of where it might not be is if some platform
out there throws an error from listen() when the parameter is too large
for it, rather than silently reducing the value to what it can handle.
A value set in config.h.in would be simpler to adapt for such a platform.

BTW, while I'm thinking about it: why doesn't pqcomm.c test for a
failure return from the listen() call?  Is this just an oversight,
or is there a good reason to ignore errors?

regards, tom lane




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Bruce Momjian

 Peter Eisentraut [EMAIL PROTECTED] writes:
  Tom Lane writes:
  Right.  Okay, it seems like just making it a hand-configurable entry
  in config.h.in is good enough for now.  When and if we find that
  that's inadequate in a real-world situation, we can improve on it...
 
  Would anything computed from the maximum number of allowed connections
  make sense?
 
 [ looks at code ... ]  Hmm, MaxBackends is indeed set before we arrive
 at the listen(), so it'd be possible to use MaxBackends to compute the
 parameter.  Offhand I would think that MaxBackends or at most
 2*MaxBackends would be a reasonable value.

Don't we have maxbackends configurable at runtime?  If so, any constant
we put in config.h will be inaccurate.  Seems we have to track
maxbackends.


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Peter Eisentraut

Tom Lane writes:

 Right.  Okay, it seems like just making it a hand-configurable entry
 in config.h.in is good enough for now.  When and if we find that
 that's inadequate in a real-world situation, we can improve on it...

Would anything computed from the maximum number of allowed connections
make sense?

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter





Re: [HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-11 Thread Bruce Momjian

 Bruce Momjian [EMAIL PROTECTED] writes:
  Don't we have maxbackends configurable at runtime?
 
 Not after postmaster start, so passing it to the initial listen()
 shouldn't be a problem.
 
 The other concern I had could be addressed by making the listen
 parameter be MIN(MaxBackends, PG_SOMAXCONN) where PG_SOMAXCONN
 is set in config.h --- but now we could make the default value
 really large, say 10000.  The only reason to change it would be
 if you had a kernel that barfed on large listen() parameters.

Sounds good to me.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




[HACKERS] Re: SOMAXCONN (was Re: Solaris source code)

2001-07-10 Thread Ian Lance Taylor

Tom Lane [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] (Nathan Myers) writes:
  If you want to make it more complicated, it would be more useful to 
  be able to set the value lower for runtime environments where PG is 
  competing for OS resources with another daemon that deserves higher 
  priority.
 
 Hmm, good point.  Does anyone have a feeling for the amount of kernel
 resources that are actually sucked up by an accept-queue entry?  If 128
 is the customary limit, is it actually worth worrying about whether
 we are setting it to 128 vs. something smaller?

Not much in the way of kernel resources is required by an entry on the
accept queue.  Basically a socket structure and maybe a couple of
addresses, typically about 200 bytes or so.

But I wouldn't worry about it, and I wouldn't worry about Nathan's
suggestion for making the limit configurable, because Postgres
connections don't spend time on the queue.  The postgres server will
be picking them off as fast as it can.  If the server can't pick
processes off fast enough, then your system has other problems;
reducing the size of the queue won't help those problems.  A large
queue will help when a large number of connections arrive
simultaneously--it will permit Postgres to deal with them appropriately,
rather than causing the system to discard them on its terms.

(Matters might be different if the Postgres server were written to not
call accept when it had the maximum number of connections active, and
to just leave connections on the queue in that case.  But that's not
how it works today.)

Ian

