On Wed, Dec 22, 2021 at 11:25:10AM +0100, natan <na...@epf.pl> wrote:

> On 21.12.2021 at 18:15, Wietse Venema wrote:
> 10.x.x.10 is a Galera cluster with 3 nodes (and max_con set to 1500
> on each node)
> 
> When I get this error I check the number of connections:
> 
> smtpd : 125
> 
> smtp      inet  n       -       -       -       1       postscreen
> smtpd     pass  -       -       -       -       -       smtpd -o
> receive_override_options=no_address_mappings
> 
> and the total for amavis + lmtp-dovecot + smtpd -o
> receive_override_options=no_address_mappings: 335
> (from: ps -e | grep smtpd | wc -l)
> 
> >> but:
> >> for local lmtp port 10025 - 5 connections
> >> for incoming from amavis port 10027 - 132 connections
> >> smtpd - 60 connections
> >> ps -e|grep smtpd - 196 connections
> > 1) You show two smtpd process counts. What we need are the
> > internet-related smtpd process counts.
> >
> > 2) Network traffic is not constant. What we need are process counts
> > at the time that postscreen logs the warnings.
> >
> >>> 2) Your kernel cannot support the default_process_limit of 1200.
> >>> In that case a higher default_process_limit would not help. Instead,
> >>> kernel configuration or more memory (or both) would help.
> >> 5486 ?        Ss     6:05 /usr/lib/postfix/sbin/master
> >> cat /proc/5486/limits
> > Those are PER-PROCESS resource limits. I just verified that postscreen
> > does not run into the "Max open files" limit of 4096 as it tries
> > to hand off a connection, because that would result in an EMFILE
> > (Too many open files) kernel error code.
> >
> > Additionally there are SYSTEM-WIDE limits for how much the KERNEL
> > can handle. These are worth looking at when you're trying to handle
> > big traffic on a small (virtual) machine. 
> >
> >     Wietse
> How do I check?

Googling "linux system wide resource limits" shows a
lot of things including
https://www.tecmint.com/increase-set-open-file-limits-in-linux/
which mentions sysctl, /etc/sysctl.conf, ulimit, and
/etc/security/limits.conf.

Then I realised that the problem is with process limits,
not open file limits, but the same methods apply.
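
For the system-wide side, the usual kernel knobs on Linux are
these (the sysctl names are standard, but treat the list as a
sketch; your distribution may tune them differently):

  # sysctl kernel.threads-max   (system-wide cap on threads/tasks)
  # sysctl kernel.pid_max       (largest PID, an indirect cap on processes)
  # sysctl fs.file-max          (system-wide open file limit)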

On my VM, the hard and soft process limits are 3681:

  # ulimit -Hu
  3681
  # ulimit -Su
  3681

Perhaps yours is less than that.
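
It's also worth checking what the running Postfix master
actually inherited, rather than what your login shell gets
(the pidof lookup is just a sketch; substitute the PID from
your own ps output if it doesn't match):

  # cat /proc/$(pidof /usr/lib/postfix/sbin/master)/limits | \
      grep -E 'processes|open files'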

To change it permanently, add something like the
following to /etc/security/limits.conf (or to a file in
/etc/security/limits.d/):

  * hard nproc 4096
  * soft nproc 4096
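
One caveat (an assumption about your setup, not something from
your logs): limits.conf is applied by pam_limits at login time,
so if Postfix is started by systemd, the daemons may not pick
it up. The systemd equivalent would be a drop-in along these
lines:

  # systemctl edit postfix

  [Service]
  LimitNPROC=4096

  # systemctl restart postfix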

Note that this is assuming Linux, and assuming that your
server will be OK with increasing the process limit. That
might not be the case if it's a tiny VM being asked to
do too much. Good luck.

cheers,
raf
