On Mon, Jul 29, 2024 at 1:24 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
> > I wonder how this works right now. Is there something that limits the
> > number of authentication requests that can be in flight concurrently,
> > or is it completely uncapped (except by machine resources)?
>
> The former. IIRC, the postmaster won't spawn more than 2X max_connections
> subprocesses (don't recall the exact limit, but it's around there).
Hmm. Not to sidetrack this thread too much, but multiplying by two doesn't really sound like the right idea to me. The basic idea articulated in the comment for canAcceptConnections() makes sense: some backends might fail authentication, or might be about to exit, so it makes sense to allow for some slop. But 2X is a lot of slop even on a machine with the default max_connections=100, and people with connection management problems are likely to be running with max_connections=500 or max_connections=900 or even (insanely) max_connections=2000. Continuing with a connection attempt because we think that hundreds or thousands of connections ahead of us in the queue might clear out of the way before we need a PGPROC is not a good bet.

I wonder if we ought to restrict this to a small, flat value, say 50, or to a new GUC defaulting to such a value if a constant seems problematic. Maybe it doesn't really matter; I'm not sure how much work we'd save by booting out the doomed connection attempt earlier. But the unlimited number of dead-end backends doesn't sound too great either.

I don't have another idea, but presumably resisting DDoS attacks and/or preserving resources for things that still have a chance of working ought to take priority over printing a nicer error message from a connection that's doomed to fail anyway.

-- 
Robert Haas
EDB: http://www.enterprisedb.com