Re: Tons of imap-login processes despite client_limit very high

2023-07-18 Thread D D
Awesome, thank you for the information Aki. :) We'll set vsz_limit and keep 
service_count = 0, as service_count >= 1 is not an option for us due to the 
high memory consumption associated with having many imap-login processes 
running.
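
For reference, that combination would look roughly like this in the service block (a minimal sketch; the 2G value is the one from the config quoted later in this thread, adjust to your memory budget):

```
service imap-login {
  # Each process serves connections indefinitely (high-performance mode);
  # vsz_limit acts as a backstop against slow memory leaks.
  service_count = 0
  process_min_avail = 4
  vsz_limit = 2G
}
```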

The new process_shutdown_filter option looks interesting, though if it shuts 
down the process "after finishing the current connections", as the docs say, 
it may not resolve the issue in our case: our imap-login processes often have 
long-lived connections (which is what prevents a process from being recycled 
when service_count > 1 in the first place).
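
For anyone trying it anyway, the setting uses Dovecot's event-filter syntax; this is the kind of rule the 2.3.19+ documentation shows (verify the event name and fields against your version's docs):

```
# Shut down a mail process once a user session finishes and the process
# has grown beyond 10 MB of RSS (example rule; tune the threshold):
process_shutdown_filter = "event=mail_user_session_finished AND rss > 10MB"
```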

Thanks again, very helpful!
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Re: Tons of imap-login processes despite client_limit very high

2023-07-18 Thread D D
It seems to us that the ideal solution would be that once the service_count 
limit is reached, a new process is spawned and the remaining connections are 
moved to that new process so that the old one can die quickly. But I suspect 
that's not a simple change to make.


Re: Tons of imap-login processes despite client_limit very high

2023-07-18 Thread D D
Thank you Joseph and Aki!

You got it right, the issue was indeed with this service_count=100. With 
service_count=0 it works as intended (only 4 imap-login processes), though now 
we're concerned about possible memory leaks with this config.

What you described, Joseph 
(https://www.mail-archive.com/dovecot%40dovecot.org/msg85850.html), is what 
we've observed as well. In addition, service_count > 1 combined with a high 
process_limit consumes much more memory because of all those imap-login 
processes each handling just a few long-lived connections. We're consuming 
about 4x less memory with service_count=0; the difference is night and day.

There's something somewhat close documented on 
https://doc.dovecot.org/configuration_manual/service_configuration/#service-limits:

"Otherwise when the service_count is beginning to be reached, the total number 
of available connections will shrink. With very bad luck that could mean that 
all the processes are simply waiting for the existing connections to die away 
before the process can die and a new one can be created. "

Though not the focus of the discussion, it does say that processes don't die 
until their connections have died.

It could perhaps benefit from mentioning a few more things, such as:
- service_count = 0 has no protection against potential memory leaks.
- service_count > 1 combined with a high process_limit could produce many 
processes, since these don't actually die until all their connections have 
died, which consumes significantly more memory.

One workaround to the lack of memory-leak protection could be to set 
process_limit close to process_min_avail while keeping service_count > 1. But 
we'd end up in the risky case described in the docs:

"With very bad luck that could mean that all the processes are simply waiting 
for the existing connections to die away before the process can die and a new 
one can be created."
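
A rough sketch of that workaround (the process_limit value here is hypothetical, chosen only to illustrate "close to process_min_avail"):

```
service imap-login {
  service_count = 100      # recycle a process after ~100 connections
  process_min_avail = 4
  process_limit = 6        # hypothetical: barely above process_min_avail,
                           # so leaked memory is bounded to a few processes,
                           # at the cost of the connection-starvation risk
                           # described in the quoted docs
}
```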

So for now we don't see any way out of service_count = 0 and its associated 
memory-leak risk.


Re: Tons of imap-login processes despite client_limit very high

2023-07-18 Thread D D
After further testing we realized that it was due to service_count = 100. We 
suspect that when the service count is reached, a new process is spawned, 
explaining the large number of imap-login processes.

With service_count = 0 we stick with only 4 processes (process_min_avail). 
However, we're concerned with having these processes run indefinitely in case 
of memory leaks.

Any insights on that?


Re: Many IDLE imap processes but very few connections moved to imap-hibernate

2023-07-17 Thread D D
Well I'm not sure what happened but the issue seems to have resolved itself 
somehow:

$ ps aux | grep "dovecot/imap" | wc -l
2168
$ ps aux | grep "dovecot/imap" | grep IDLE | wc -l
56
$ ps aux | grep "imap-hibernate"
syslog    863411  0.4  0.0  16660 15296 ?  S  09:07  1:40 
dovecot/imap-hibernate [2298 connections]

Since my initial message we played with the following settings which seem 
unrelated to hibernation:

mail_debug
haproxy_trusted_networks
mmap_disable
mail_fsync
mail_nfs_index
mail_nfs_storage
mail_max_userip_connections

I'll try to provide more info if the issue arises again.


Tons of imap-login processes despite client_limit very high

2023-07-17 Thread D D
Hi Dovecot community.

We're seeing a ton of imap-login processes running even when using high 
performance mode 
(https://doc.dovecot.org/admin_manual/login_processes/#high-performance-mode). 
According to the docs:

"process_min_avail should be set to be at least the number of CPU cores in the 
system, so that all of them will be used. Otherwise new processes are created 
only once an existing one’s connection count reaches client_limit"

We have process_min_avail=4, client_limit=0 and default_client_limit=1048576 
(2^20; see the config below). So we'd expect to see only 4 imap-login 
processes serving a ton of connections each. Yet, we see thousands of 
imap-login processes (more than half of all the imap processes):

$ ps aux | grep imap-login | wc -l
1278
$ ps aux | grep imap | wc -l
2154
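
(One caveat about counting this way, noted here as a side remark rather than something from the thread: `grep imap` also matches the grep process itself in the ps output, inflating each total by one. The usual bracket trick, or `pgrep`, avoids that:)

```shell
# "grep imap" matches its own command line in ps output; writing the
# pattern as "[i]map" prevents self-matching, because the literal text
# "[i]map" in grep's own command line does not match the regex [i]map.
printf 'dovecot/imap-login [1.2.3.4 TLS proxy]\ngrep imap\n' | grep -c '[i]map-login'
# prints 1 (the grep line is not counted)
```

`pgrep -c -f 'dovecot/imap-login'` is an equivalent one-step alternative.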

We use verbose_proctitle=yes, and there seem to be two types of these 
processes: about half serving a single IP, and about half, we suspect, serving 
multiple IPs:

$ ps aux  | grep imap-log
_apt 1941081  0.0  0.0  10420  8700 ?S14:57   0:00 
dovecot/imap-login [84.115.232.178 TLS proxy]
_apt 1941589  0.0  0.0  10532  8648 ?S14:57   0:00 
dovecot/imap-login [119.202.86.160 TLS proxy]
_apt 1941789  0.0  0.0  10188  8620 ?S14:57   0:00 
dovecot/imap-login [0 pre-login + 2 TLS proxies]
_apt 1942144  0.0  0.0  10716  8748 ?S14:57   0:00 
dovecot/imap-login [0 pre-login + 3 TLS proxies]
_apt 1942428  0.0  0.0  10800  8712 ?S14:57   0:00 
dovecot/imap-login [5.41.100.37 TLS proxy]
...
$ ps aux  | grep imap-log | grep pre-login | wc -l
624
$ ps aux  | grep imap-log | grep -v pre-login | wc -l
654

Is having so many imap-login processes normal with our config? Did we 
misunderstand the docs or is there something wrong here?


default_client_limit = 1048576
default_process_limit = 20

service imap-login {
  # client_limit = 0 # default is 0
  # process_limit = 0 # default is 0
  service_count = 100
  process_min_avail = 4 
  vsz_limit = 2G

  inet_listener imap {
  }
  inet_listener imaps {
haproxy = yes
port = 994
  }
}


Re: Multiple backends with NFSv4.1 (supports file locking): should work without Director, right?

2023-05-20 Thread D D
Thanks Tom. Are you referring to a proxy software in particular (e.g. Dovecot 
proxy, Nginx, ...)? Do you mean having a single proxy in front of all the 
backends?

We'd prefer to avoid that if possible, as it makes the proxy a single point of 
failure. But it does seem to be the recommended way to deal with clusters 
(https://doc.dovecot.org/configuration_manual/nfs/#clustering-without-director).


Re: Multiple backends with NFSv4.1 (supports file locking): should work without Director, right?

2023-05-20 Thread D D
Thanks for the input!

Great to know that you got clusters working with at least some version of NFS 
without using Director. Were you guys using NLM (Network Lock Manager), 
dotlock, or something else, to have file locking capabilities with NFSv3?

The delegation feature of NFSv4 mentioned by Adrian can be disabled 
(https://docs.oracle.com/cd/E19253-01/816-4555/rfsrefer-140/index.html#:~:text=You%20can%20disable%20delegation%20by,callback%20service%20on%20the%20client.).
Perhaps without it, things would run just as well as with NFSv3.
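
The linked page covers Solaris; on a Linux NFS server the equivalent knob is, as far as we can tell (an assumption to verify against your kernel's documentation), a sysctl on the server side:

```
# Assumption: Linux knfsd exposes fs.nfs.nfs4_disable_delegation
# (1 = the server stops handing out NFSv4 delegations to clients).
sysctl -w fs.nfs.nfs4_disable_delegation=1
```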