On Wed, Aug 24, 2022 at 3:06 PM Tom Lane wrote:
> Thomas Munro writes:
> > Oh, one comment there is actually obsolete now AFAIK. Unless there is
> > some reason to think personality(ADDR_NO_RANDOMIZE) might not work in
> > some case where sysctl -w kernel.randomize_va_space=0 will, I think we
>
Thomas Munro writes:
> Oh, one comment there is actually obsolete now AFAIK. Unless there is
> some reason to think personality(ADDR_NO_RANDOMIZE) might not work in
> some case where sysctl -w kernel.randomize_va_space=0 will, I think we
> can just remove that.
AFAICS, f3e78069db7 silently does
On Tue, Aug 23, 2022 at 2:42 PM Thomas Munro wrote:
> > 0002 isn't quite related, but while writing 0001 I noticed a nearby
> > use of /proc/sys/... which I thought should be converted to sysctl.
> > IMO /proc/sys pretty much sucks, at least for documentation purposes,
> > for multiple reasons:
On Tue, Aug 23, 2022 at 3:53 PM Tom Lane wrote:
> Thomas Munro writes:
> > On Tue, Aug 23, 2022 at 4:57 AM Tom Lane wrote:
> > +service the requests, with those clients receiving unhelpful
> > +connection failure errors such as Resource temporarily
> > +unavailable.
>
> > LGTM but I
Thomas Munro writes:
> On Tue, Aug 23, 2022 at 4:57 AM Tom Lane wrote:
> +service the requests, with those clients receiving unhelpful
> +connection failure errors such as Resource temporarily
> +unavailable.
> LGTM but I guess I would add "... or Connection refused"?
Is that the
Ok, thanks for the clarification.
On Tue, Aug 23, 2022 at 11:37 AM Tom Lane wrote:
>
> Junwang Zhao writes:
> > Just curious, *backlog* defines the maximum pending connections,
> > why do we need to double the MaxConnections as the queue size?
>
> The postmaster allows up to twice
Junwang Zhao writes:
> Just curious, *backlog* defines the maximum pending connections,
> why do we need to double the MaxConnections as the queue size?
The postmaster allows up to twice MaxConnections child processes
to exist, per the comment in canAcceptConnections:
* We allow more
Just curious, *backlog* defines the maximum pending connections,
why do we need to double the MaxConnections as the queue size?
It seems *listen* with a larger *backlog* will tell the OS to maintain a
larger buffer?
- maxconn = MaxBackends * 2;
- if (maxconn > PG_SOMAXCONN)
- maxconn = PG_SOMAXCONN;
+
On Tue, Aug 23, 2022 at 4:57 AM Tom Lane wrote:
> 0001 adds a para about how to raise the listen queue length.
+service the requests, with those clients receiving unhelpful
+connection failure errors such as Resource temporarily
+unavailable.
LGTM but I guess I would add "... or
Thanks for your input, everyone! I wanted to confirm that increasing
somaxconn also fixed the issue for me.
Kevin
> $ cat /proc/sys/net/core/somaxconn
> 128
>
> by default, which is right about where the problem starts. After
>
> $ sudo sh -c 'echo 1000 >/proc/sys/net/core/somaxconn'
>
>
OK, here's some proposed patches.
0001 adds a para about how to raise the listen queue length.
0002 isn't quite related, but while writing 0001 I noticed a nearby
use of /proc/sys/... which I thought should be converted to sysctl.
IMO /proc/sys pretty much sucks, at least for documentation
Thomas Munro writes:
> Cool. BTW small correction to something I said about FreeBSD: it'd be
> better to document the new name kern.ipc.soacceptqueue (see listen(2)
> HISTORY) even though the old name still works and matches OpenBSD and
> macOS.
Thanks. Sounds like we get to document at least
On Mon, Aug 22, 2022 at 2:18 PM Tom Lane wrote:
> Thomas Munro writes:
> > On Mon, Aug 22, 2022 at 12:20 PM Tom Lane wrote:
> >> Hmm. It'll be awhile till the 128 default disappears entirely
> >> though, especially if assorted BSDen use that too. Probably
> >> worth the trouble to document.
>
Thomas Munro writes:
> On Mon, Aug 22, 2022 at 12:20 PM Tom Lane wrote:
>> Hmm. It'll be awhile till the 128 default disappears entirely
>> though, especially if assorted BSDen use that too. Probably
>> worth the trouble to document.
> I could try to write a doc patch if you aren't already on
On Mon, Aug 22, 2022 at 12:20 PM Tom Lane wrote:
> Thomas Munro writes:
> > Yeah retrying doesn't seem that nice. +1 for a bit of documentation,
> > which I guess belongs in the server tuning part where we talk about
> > sysctls, perhaps with a link somewhere near max_connections? More
> >
Thomas Munro writes:
> Yeah retrying doesn't seem that nice. +1 for a bit of documentation,
> which I guess belongs in the server tuning part where we talk about
> sysctls, perhaps with a link somewhere near max_connections? More
> recent Linux kernels bumped it to 4096 by default so I doubt
On Mon, Aug 22, 2022 at 10:55 AM Tom Lane wrote:
> Not sure what I think at this point about making libpq retry after
> EAGAIN. It would make sense for this particular undocumented use
> of EAGAIN, but I'm worried about others, especially the documented
> reason. On the whole I'm inclined to
Thomas Munro writes:
> If it's something like that, maybe increasing
> /proc/sys/net/core/somaxconn would help? I think older kernels only
> had 128 here.
Bingo! I see
$ cat /proc/sys/net/core/somaxconn
128
by default, which is right about where the problem starts. After
$ sudo sh -c 'echo 1000 >/proc/sys/net/core/somaxconn'
Hi,
On 2022-08-21 17:15:01 -0400, Tom Lane wrote:
> I tried to duplicate this behavior locally (on RHEL8) and got something
> interesting. After increasing the server's max_connections to 1000,
> I can do
>
> $ pgbench -S -c 200 -j 100 -t 100 bench
>
> and it goes through fine. But:
>
> $
On Mon, Aug 22, 2022 at 9:48 AM Tom Lane wrote:
> It's also pretty unclear why the kernel would want to return
> EAGAIN instead of letting the nonblock connection path do the
> waiting, which is why I'm suspecting a bug rather than designed
> behavior.
Could it be that it fails like that if the
Andrew Dunstan writes:
> On 2022-08-21 Su 17:15, Tom Lane wrote:
>> On the whole this is smelling more like a Linux kernel bug than
>> anything else.
> *nod*
Conceivably we could work around this in libpq: on EAGAIN, just
retry the failed connect(), or maybe better to close the socket
and take
On 2022-08-21 Su 17:15, Tom Lane wrote:
> Andrew Dunstan writes:
>> On 2022-08-20 Sa 23:20, Tom Lane wrote:
>>> Kevin McKibbin writes:
>>>> What's limiting my DB from allowing more connections?
>> The first question in my mind from the above is where this postgres
>> instance is actually
Andrew Dunstan writes:
> On 2022-08-20 Sa 23:20, Tom Lane wrote:
>> Kevin McKibbin writes:
>>> What's limiting my DB from allowing more connections?
> The first question in my mind from the above is where this postgres
> instance is actually listening. Is it really /var/run/postgresql? Its
>
On 2022-08-20 Sa 23:20, Tom Lane wrote:
> Kevin McKibbin writes:
>> What's limiting my DB from allowing more connections?
>> This is a sample of the output I'm getting, which repeats the error 52
>> times (one for each failed connection)
>> -bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy
>> ...
Sorry Tom for the duplicate email. Resending with the mailing list.
> Thanks for your response. I'm using a Centos Linux environment and have
> the open files set very high:
>
> -bash-4.2$ ulimit -a|grep open
> open files (-n) 65000
>
> What else could be limiting the
Kevin McKibbin writes:
> What's limiting my DB from allowing more connections?
> This is a sample of the output I'm getting, which repeats the error 52
> times (one for each failed connection)
> -bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy
> ...
> connection to database "benchy" failed:
>