qp and Invalid quoted-printable input trailer: '=' not followed by two hex digits
I have a mail (attached) for which dovecot reports:

imap(test): pid=<32688> session=, Error: Mailbox INBOX.test: UID=1: read() failed: read(/var/mail/test/.test/cur/1695887652.M710714P17846.mbox13:2,RS) failed: Invalid quoted-printable input trailer: '=' not followed by two hex digits

and I fail to see why. There is "=" used at the end of lines as a soft line break. Also, if you delete "przez system Spa=" it works. If you leave it and delete the "=" line, it also works. That doesn't seem consistent. Is dovecot too restrictive here?

dovecot 2.3.20 and 2.3.21

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

From: a...@example.com
To: b...@example.com
Subject: test
MIME-Version: 1.0
Date: Thu, 28 Sep 2023 09:54:02 +0200
Message-Id: <202309289092802.1115...@example.com>
Content-Type: multipart/alternative; boundary="202309289092801:11152:0"

This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible.

--202309289092801:11152:0
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable

text/html przez system Spa=
mTitan
=

--202309289092801:11152:0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

text/plain message
--202309289092801:11152:0--

___ dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-le...@dovecot.org
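Editor's note for context: RFC 2045 allows "=" at the end of a line only as a soft line break (i.e. "=" immediately followed by the line ending) or as the start of an "=XX" hex escape; the error above means dovecot's strict decoder hit an "=" that was neither. Python's quopri module (shown purely as an illustration; it is not the decoder dovecot uses, and it is deliberately lenient about malformed escapes) demonstrates how a soft break in the attached body is supposed to decode:

```python
import quopri

# The failing part's body, with CRLF line endings as stored on disk.
# "=" immediately followed by CRLF is a soft line break: it is removed
# on decode, rejoining "Spa" + "mTitan" into one word.
body = b"przez system Spa=\r\nmTitan\r\n"
print(quopri.decodestring(body))  # b'przez system SpamTitan\r\n'
```

A lenient decoder like this one passes malformed "=" sequences through literally, which is one reason different MUAs and servers can disagree about whether a mail like the attached one is decodable at all.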
Re: sieve processing - enable/disable per user?
On 06/10/2023 17:32, Oscar del Rio wrote:
> On 2023-10-06 5:12 a.m., Arkadiusz Miśkiewicz via dovecot wrote:
>> Unfortunately it doesn't get expanded:
>>
>> user_query = ... , 'sieve' AS 'sieve_enabled', ...
>> mail_plugins = $mail_plugins %{sieve_enabled}
>
> "sieve_enabled" seems to be a "yes" or "no" user variable (default="yes"), without having to modify mail_plugins:
> https://github.com/dovecot/pigeonhole/commit/d2a75724985c66aeb2decdd6272d4f21255284c9

Nice - it works. Thanks!

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
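Editor's note for later readers: with the pigeonhole change linked above, per-user enablement reduces to returning a "yes"/"no" userdb field named sieve_enabled. A rough sketch of the SQL side (the `u.sieve` column and overall query shape are made up for illustration, not taken from the thread's actual user_query; adapt to your schema):

```
# dovecot-sql.conf.ext (sketch, hypothetical schema)
user_query = SELECT home, uid, gid, \
  CASE WHEN u.sieve THEN 'yes' ELSE 'no' END AS sieve_enabled \
  FROM users u WHERE u.login = '%n'
```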
Re: sieve processing - enable/disable per user?
On 06/10/2023 11:12, Arkadiusz Miśkiewicz wrote:
> On 06/10/2023 09:27, Aki Tuomi wrote:
>> Maybe try
>>
>> protocol lmtp {
>>   mail_plugins = $mail_plugins %{sieve_enabled}
>> }
>>
>> and return sieve_enabled="sieve" for those users you want to enable sieve processing for.
>
> Unfortunately it doesn't get expanded:
>
> user_query = ... , 'sieve' AS 'sieve_enabled', ...
> mail_plugins = $mail_plugins %{sieve_enabled}
>
> and
>
> dovecot[17847]: lmtp(17932): Fatal: Plugin '%{sieve_enabled}' not found from directory /usr/lib64/dovecot/plugins

Tried %{userdb:sieve_enabled:} etc., too - these are not expanded in that place.

So, another approach: is there a way to disable managesieve only, per user (while allowing imap, pop3, etc.)?

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: sieve processing - enable/disable per user?
On 06/10/2023 09:27, Aki Tuomi wrote:
> Maybe try
>
> protocol lmtp {
>   mail_plugins = $mail_plugins %{sieve_enabled}
> }
>
> and return sieve_enabled="sieve" for those users you want to enable sieve processing for.

Unfortunately it doesn't get expanded:

user_query = ... , 'sieve' AS 'sieve_enabled', ...
mail_plugins = $mail_plugins %{sieve_enabled}

and

dovecot[17847]: lmtp(17932): Fatal: Plugin '%{sieve_enabled}' not found from directory /usr/lib64/dovecot/plugins

> Aki
>
>> On 06/10/2023 09:56 EEST Arkadiusz Miśkiewicz via dovecot wrote:
>>
>> Hi. I keep users in a database and would like to enable/disable sieve for them per user. What's the approach for that? I'm looking at the sieve options and the user_query setting but don't see anything that would make this possible.
>>
>> -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
sieve processing - enable/disable per user?
Hi. I keep users in a database and would like to enable/disable sieve for them per user. What's the approach for that? I'm looking at the sieve options and the user_query setting but don't see anything that would make this possible.

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Panic: file mail-index-transaction-finish.c: line 185
On 19.09.2022 14:52, Christian Mack wrote:
On 18.09.22 at 10:21, Arkadiusz Miśkiewicz wrote:
On 18.09.2022 08:21, Aki Tuomi wrote:
On September 17, 2022 10:55:42 PM GMT+03:00, "Arkadiusz Miśkiewicz" wrote:
On 16.09.2022 08:46, Aki Tuomi wrote:
On 15/09/2022 11:02 EEST Aki Tuomi wrote:
On September 15, 2022 10:00:21 AM GMT+03:00, "Arkadiusz Miśkiewicz" wrote:
On 15.09.2022 07:10, Aki Tuomi wrote:
On 15/09/2022 07:57 EEST Arkadiusz Miśkiewicz wrote:
On 29.12.2021 10:26, Aki Tuomi wrote:
On 29/12/2021 11:20 tobiswo...@gmail.com wrote:

Hi list, I have a weird issue with my Dovecot 2.3.17.1 (476cd46418). When deleting a certain amount of messages from my INBOX via my MUA (Evolution), all of a sudden dovecot starts to panic:

Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)

Thanks! Aki

Arkadiusz, is it possible for you to see if this issue happens with 2.3.19.2, please?

I can test any version or single patch. Where is that 2.3.19.2?

Apologies, I meant the "latest CE release" at https://repo.dovecot.org . I was testing that on my own 2.3.19.1 build. I can test anything that's in the form of a source/patch that I can build on the custom Linux distro here. Tried guessing URLs like https://repo.dovecot.org/ce-latest-2.3 but got 404, so I don't know which version is "ce-latest-2.3".

Aki

https://www.dovecot.org/releases/2.3/dovecot-2.3.19.1.tar.gz
https://www.dovecot.org/releases/2.3/dovecot-2.3.19.1.tar.gz.sig

As mentioned earlier, tests were done on exactly this version, own build.

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Panic: file mail-index-transaction-finish.c: line 185
On 18.09.2022 08:21, Aki Tuomi wrote:
On September 17, 2022 10:55:42 PM GMT+03:00, "Arkadiusz Miśkiewicz" wrote:
On 16.09.2022 08:46, Aki Tuomi wrote:
On 15/09/2022 11:02 EEST Aki Tuomi wrote:
On September 15, 2022 10:00:21 AM GMT+03:00, "Arkadiusz Miśkiewicz" wrote:
On 15.09.2022 07:10, Aki Tuomi wrote:
On 15/09/2022 07:57 EEST Arkadiusz Miśkiewicz wrote:
On 29.12.2021 10:26, Aki Tuomi wrote:
On 29/12/2021 11:20 tobiswo...@gmail.com wrote:

Hi list, I have a weird issue with my Dovecot 2.3.17.1 (476cd46418). When deleting a certain amount of messages from my INBOX via my MUA (Evolution), all of a sudden dovecot starts to panic:

Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)

Thanks! Aki

Arkadiusz, is it possible for you to see if this issue happens with 2.3.19.2, please?

I can test any version or single patch. Where is that 2.3.19.2?

Apologies, I meant the "latest CE release" at https://repo.dovecot.org . I was testing that on my own 2.3.19.1 build. I can test anything that's in the form of a source/patch that I can build on the custom Linux distro here. Tried guessing URLs like https://repo.dovecot.org/ce-latest-2.3 but got 404, so I don't know which version is "ce-latest-2.3".

Aki

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Panic: file mail-index-transaction-finish.c: line 185
On 16.09.2022 08:46, Aki Tuomi wrote:
On 15/09/2022 11:02 EEST Aki Tuomi wrote:
On September 15, 2022 10:00:21 AM GMT+03:00, "Arkadiusz Miśkiewicz" wrote:
On 15.09.2022 07:10, Aki Tuomi wrote:
On 15/09/2022 07:57 EEST Arkadiusz Miśkiewicz wrote:
On 29.12.2021 10:26, Aki Tuomi wrote:
On 29/12/2021 11:20 tobiswo...@gmail.com wrote:

Hi list, I have a weird issue with my Dovecot 2.3.17.1 (476cd46418). When deleting a certain amount of messages from my INBOX via my MUA (Evolution), all of a sudden dovecot starts to panic:

Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)

Thanks! Aki

Arkadiusz, is it possible for you to see if this issue happens with 2.3.19.2, please?

I can test any version or single patch. Where is that 2.3.19.2?

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Panic: file mail-index-transaction-finish.c: line 185
On 15.09.2022 07:10, Aki Tuomi wrote:
On 15/09/2022 07:57 EEST Arkadiusz Miśkiewicz wrote:
On 29.12.2021 10:26, Aki Tuomi wrote:
On 29/12/2021 11:20 tobiswo...@gmail.com wrote:

Hi list, I have a weird issue with my Dovecot 2.3.17.1 (476cd46418). When deleting a certain amount of messages from my INBOX via my MUA (Evolution), all of a sudden dovecot starts to panic:

Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)
imap(REDACTED)<24075>: Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f09274d4142] ->
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f09274d424e] ->
/usr/lib64/dovecot/libdovecot.so.0(+0xf72fe) [0x7f09274e22fe] ->
/usr/lib64/dovecot/libdovecot.so.0(+0xf73a1) [0x7f09274e23a1] ->
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f0927430e38] ->

I also sometimes see this on 2.3.19.1:

Sep 15 05:05:43 mbox dovecot: imap(marek): pid=<14897> session=, Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)
Sep 15 05:05:43 mbox dovecot: imap(marek): pid=<14897> session=, Error: Raw backtrace:
#0 t_askpass[0x7f16bf8658e0] ->
#1 backtrace_append[0x7f16bf865b50] ->
#2 backtrace_get[0x7f16bf865cb0] ->
#3 i_syslog_error_handler[0x7f16bf8727d0] ->
#4 i_syslog_fatal_handler[0x7f16bf872900] ->
#5 i_panic[0x7f16bf7c62d6] ->
#6 mail_index_sync_set_corrupted[0x7f16bf996d27] ->
#7 mail_transaction_expunge_guid_cmp[0x7f16bfa43fe0] ->
#8 mail_index_transaction_finish[0x7f16bfa44550] ->
#9 mail_index_transaction_unref[0x7f16bfa48c30] ->
#10 mail_index_transaction_commit_full[0x7f16bfa49110] ->
#11 mail_index_transaction_commit[0x7f16bfa491f0] ->
#12 mail_cache_set_seq_corrupted_reason[0x7f16bf993a4f] ->
#13 mail_set_mail_cache_corrupted[0x7f16bf9ae690] ->
#14 maildir_keywords_idx_char[0x7f16bf9d2a50] ->
#15 maildir_keywords_idx_char[0x7f16bf9d2de0] ->
#16 mail_get_physical_size[0x7f16bf99b770] ->
#17 [unw_get_proc_name() failed: -10] ->
#18 notify_contexts_mail_copy[0x7f16bead94b0] ->
#19 notify_plugin_deinit[0x7f16beada440] ->
#20 quota_plugin_deinit[0x7f16bf4b9350] ->
#21 acl_mailbox_right_lookup[0x7f16bf4d7720] ->
#22 mailbox_save_begin[0x7f16bf9ac880] ->
#23 mailbox_copy[0x7f16bf9aca00] ->
#24 cmd_close[0x55978a0b0980] ->
#25 command_exec[0x55978a0bf220] ->
#26 client_handle_unfinished_cmd[0x55978a0bd2b0] ->
#27 client_handle_unfinished_cmd[0x55978a0bd2b0] ->
#28 client_handle_unfinished_cmd[0x55978a0bd2b0] ->
#29 client_handle_input[0x55978a0bd630] ->
#30 client_input[0x55978a0bdca0] ->
#31 io_loop_call_io[0x7f16bf50] ->
#32 io_loop_handler_run_internal[0x7f16bf889e90] ->
#33 io_loop_handler_run[0x7f16bf888910] ->
#34 io_loop_run[0x7f16bf888ae0] ->
#35 master_service_run[0x7f16bf7fbe70] ->
#36 main[0x55978a0ae9f0] ->
#37 __libc_init_first[0x7f16bf5a34d0] ->
#38 __libc_start_main[0x7f16bf5a3580] ->
#39 _start[0x55978a0aefa0]
Sep 15 05:05:43 mbox dovecot: imap(marek): pid=<14897> session=, Fatal: master: service(imap): child 14897 killed with signal 6 (core dumps disabled - https://dovecot.org/bugreport.html#coredumps)

No NFS involved here (linux + xfs).

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

The actual core dump would be useful. The backtrace is nice, but it does not really help figuring out what went wrong in this case.

Can also build without optimizations if that will help.

Here (quoted, but that's thunderbird stupidity):

(gdb) bt
#0 __pthread_kill_implementation (threadid=, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
#1 0x7f5679d915c3 in __pthread_kill_internal (signo=6, threadid=) at pthread_kill.c:78
#2 0x7f5679d40816 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#3 0x7f5679d2a7fa in __GI_abort () at abort.c:79
#4 0x7f5679f4e6e6 in default_fatal_finish (status=0, type=LOG_TYPE_PANIC) at failures.c:465
#5 fatal_handler_real (ctx=, format=, args=) at failures.c:477
#6 0x7f5679ffa921 in i_internal_fatal_handler (ctx=, format=, args=) at failures.c:879
#7 0x7f5679f4e3a9 in i_panic (format=format@entry=0x7f567a1df120 "file %s: line %d (%s): assertion failed: (%s)") at failures.c:530
#8 0x7f567a11ed9f in mail_index_transaction_get_uid (t=, seq=) at mail-index-transaction-finish.c:185
#9 0x7f567a1cc02a in mail_index_convert_to_uids (t=t@entry=0x55a8889c0270, array=array@entry=0x55a8889c06c0) at mail-index-transaction-finish.c:205
#10 0x7f567a1cc6b7 in mail_index_transaction_convert_to_uids (t=0x55a8889c0270) at mail-index-transaction-finish.c:313
#11 mail_index_transaction_finish (t=t@entry=0x55a8889c0270) at mail-index-transaction
Re: Panic: file mail-index-transaction-finish.c: line 185
On 29.12.2021 10:26, Aki Tuomi wrote:
On 29/12/2021 11:20 tobiswo...@gmail.com wrote:

Hi list, I have a weird issue with my Dovecot 2.3.17.1 (476cd46418). When deleting a certain amount of messages from my INBOX via my MUA (Evolution), all of a sudden dovecot starts to panic:

Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)
imap(REDACTED)<24075>: Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f09274d4142] ->
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f09274d424e] ->
/usr/lib64/dovecot/libdovecot.so.0(+0xf72fe) [0x7f09274e22fe] ->
/usr/lib64/dovecot/libdovecot.so.0(+0xf73a1) [0x7f09274e23a1] ->
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f0927430e38] ->

I also sometimes see this on 2.3.19.1:

Sep 15 05:05:43 mbox dovecot: imap(marek): pid=<14897> session=, Panic: file mail-index-transaction-finish.c: line 185 (mail_index_transaction_get_uid): assertion failed: (seq <= t->view->map->hdr.messages_count)
Sep 15 05:05:43 mbox dovecot: imap(marek): pid=<14897> session=, Error: Raw backtrace:
#0 t_askpass[0x7f16bf8658e0] ->
#1 backtrace_append[0x7f16bf865b50] ->
#2 backtrace_get[0x7f16bf865cb0] ->
#3 i_syslog_error_handler[0x7f16bf8727d0] ->
#4 i_syslog_fatal_handler[0x7f16bf872900] ->
#5 i_panic[0x7f16bf7c62d6] ->
#6 mail_index_sync_set_corrupted[0x7f16bf996d27] ->
#7 mail_transaction_expunge_guid_cmp[0x7f16bfa43fe0] ->
#8 mail_index_transaction_finish[0x7f16bfa44550] ->
#9 mail_index_transaction_unref[0x7f16bfa48c30] ->
#10 mail_index_transaction_commit_full[0x7f16bfa49110] ->
#11 mail_index_transaction_commit[0x7f16bfa491f0] ->
#12 mail_cache_set_seq_corrupted_reason[0x7f16bf993a4f] ->
#13 mail_set_mail_cache_corrupted[0x7f16bf9ae690] ->
#14 maildir_keywords_idx_char[0x7f16bf9d2a50] ->
#15 maildir_keywords_idx_char[0x7f16bf9d2de0] ->
#16 mail_get_physical_size[0x7f16bf99b770] ->
#17 [unw_get_proc_name() failed: -10] ->
#18 notify_contexts_mail_copy[0x7f16bead94b0] ->
#19 notify_plugin_deinit[0x7f16beada440] ->
#20 quota_plugin_deinit[0x7f16bf4b9350] ->
#21 acl_mailbox_right_lookup[0x7f16bf4d7720] ->
#22 mailbox_save_begin[0x7f16bf9ac880] ->
#23 mailbox_copy[0x7f16bf9aca00] ->
#24 cmd_close[0x55978a0b0980] ->
#25 command_exec[0x55978a0bf220] ->
#26 client_handle_unfinished_cmd[0x55978a0bd2b0] ->
#27 client_handle_unfinished_cmd[0x55978a0bd2b0] ->
#28 client_handle_unfinished_cmd[0x55978a0bd2b0] ->
#29 client_handle_input[0x55978a0bd630] ->
#30 client_input[0x55978a0bdca0] ->
#31 io_loop_call_io[0x7f16bf50] ->
#32 io_loop_handler_run_internal[0x7f16bf889e90] ->
#33 io_loop_handler_run[0x7f16bf888910] ->
#34 io_loop_run[0x7f16bf888ae0] ->
#35 master_service_run[0x7f16bf7fbe70] ->
#36 main[0x55978a0ae9f0] ->
#37 __libc_init_first[0x7f16bf5a34d0] ->
#38 __libc_start_main[0x7f16bf5a3580] ->
#39 _start[0x55978a0aefa0]
Sep 15 05:05:43 mbox dovecot: imap(marek): pid=<14897> session=, Fatal: master: service(imap): child 14897 killed with signal 6 (core dumps disabled - https://dovecot.org/bugreport.html#coredumps)

No NFS involved here (linux + xfs).

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Thousands of SSL certificates stalls new logins during reload - problem with Dovecot config process
On 2.09.2022 14:44, Bartosz Kwitniewski wrote:
> Hello, I'm running a dovecot 2.3.19.1 server that has around 6000 SSL certificates in separate config files, each containing:
>
> local_name "domain" {
>   ssl_cert = ...
>   ssl_key = ...
> }
>
> When a new certificate is added, dovecot is reloaded (around 20 times a day). When dovecot is being reloaded, users are unable to log in for around 30 seconds.

Unfortunately, it has been known for ages that dovecot is not capable of handling thousands of certificates in a sane way. There were some ideas, which were never implemented:
https://dovecot.org/list/dovecot/2016-October/105858.html
( https://dovecot.org/list/dovecot/2016-October/105855.html )

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
ssl=required per user?
Can ssl=required be forced as a per-user setting?

Right now I could modify password_query to check whether %c (%{secured}) is empty and return an appropriate error as "reason", as in the example at https://doc.dovecot.org/configuration_manual/authentication/nologin/#authentication-nologin.

Unfortunately, password_query runs a bit too late. I would like to deny access as soon as possible, i.e. right after the user sends his login. (That won't help much in cases where the client sends the password early, as in "AUTH PLAIN b64loginpass", but it will help in all other cases.)

user_query doesn't seem to have the capability that password_query has.

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
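Editor's note: the password_query workaround described above could look roughly like the sketch below; the table and column names are hypothetical, and this is exactly the "too late" variant the message complains about, since the lookup only happens once the user name has been sent.

```
# dovecot-sql.conf.ext (sketch, hypothetical schema): refuse non-TLS
# connections by returning nologin + reason as passdb extra fields.
# CASE without ELSE yields NULL, i.e. the field is absent for TLS users.
password_query = SELECT login AS user, password, \
  CASE WHEN '%{secured}' = '' THEN 'y' END AS nologin, \
  CASE WHEN '%{secured}' = '' THEN 'TLS required' END AS reason \
  FROM users WHERE login = '%n'
```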
not getting internal errors logged
Hello.

dovecot 2.3.13, 2.3.14; a web IMAP client doing a search gets an error induced by me by deleting the lucene-indexes folder:

UID SEARCH: Internal error occurred. Refer to server log for more information. [2021-03-08 15:08:01] (0.002 + 0.000 + 0.001 secs).

but I'm not getting that internal error logged in syslog. Other information, like lmtp and imap login/logout events etc., is logged just fine. In my configs log_path, info_log_path and debug_log_path are not set, so dovecot's default of syslog is used. Tested log_path = /tmp/file.txt, too, but the effect was the same (regular logs logged there, internal errors not logged).

stracing the dovecot process, I see the error message write()n only once (to the client) but nowhere else (in a strace-visible way). Is anything else controlling internal error logging? Can't find anything in the docs.

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: doveadm expunge logging
On 04.02.2021 at 17:37, Arkadiusz Miśkiewicz wrote:
> On 04.02.2021 at 14:01, Aki Tuomi wrote:
>> Did you also load the `mail_log` plugin?
>
> I didn't. Added
>
> protocol doveadm {
>   mail_plugins = $mail_plugins mail_log notify acl
> }
>
> and now I'm getting logs on stderr. Nice!
>
> Can I get this logged into syslog without my own redirection? Trying
>
> doveadm -o 'log_path=syslog' -o 'syslog_facility=mail' ... expunge ...
>
> but I'm getting these on stderr only.

Improvement for doveadm: https://github.com/dovecot/core/pull/156 - it allows logging to syslog.

>> Aki
>>
>>> On 04/02/2021 14:52 Arkadiusz Miśkiewicz wrote:
>>>
>>> Hello.
>>>
>>> dovecot 2.3.13 here, using
>>>
>>> doveadm -c /etc/my.conf expunge -A mailbox SomeFolder savedbefore 31d
>>>
>>> and my.conf includes
>>>
>>> plugin {
>>>   mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename save
>>> }
>>>
>>> unfortunately expunged messages are not logged anywhere.
>>>
>>> Tried -v and -D to see if these would get logged to output (so I could pipe them to syslog), but verbose/debug also don't show which messages were expunged.
>>>
>>> Is there a way to get this logged? Log entries like regular dovecot logs would be nice:
>>>
>>> doveadm(arekm): pid=<14783>, expunge: box=INBOX, uid=0, msgid=, size=15077
>>>
>>> -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: doveadm expunge logging
On 04.02.2021 at 14:01, Aki Tuomi wrote:
> Did you also load the `mail_log` plugin?

I didn't. Added

protocol doveadm {
  mail_plugins = $mail_plugins mail_log notify acl
}

and now I'm getting logs on stderr. Nice!

Can I get this logged into syslog without my own redirection? Trying

doveadm -o 'log_path=syslog' -o 'syslog_facility=mail' ... expunge ...

but I'm getting these on stderr only.

> Aki
>
>> On 04/02/2021 14:52 Arkadiusz Miśkiewicz wrote:
>>
>> Hello.
>>
>> dovecot 2.3.13 here, using
>>
>> doveadm -c /etc/my.conf expunge -A mailbox SomeFolder savedbefore 31d
>>
>> and my.conf includes
>>
>> plugin {
>>   mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename save
>> }
>>
>> unfortunately expunged messages are not logged anywhere.
>>
>> Tried -v and -D to see if these would get logged to output (so I could pipe them to syslog), but verbose/debug also don't show which messages were expunged.
>>
>> Is there a way to get this logged? Log entries like regular dovecot logs would be nice:
>>
>> doveadm(arekm): pid=<14783>, expunge: box=INBOX, uid=0, msgid=, size=15077
>>
>> -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
doveadm expunge logging
Hello.

dovecot 2.3.13 here, using

doveadm -c /etc/my.conf expunge -A mailbox SomeFolder savedbefore 31d

and my.conf includes

plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename save
}

unfortunately expunged messages are not logged anywhere.

Tried -v and -D to see if these would get logged to output (so I could pipe them to syslog), but verbose/debug also don't show which messages were expunged.

Is there a way to get this logged? Log entries like regular dovecot logs would be nice:

doveadm(arekm): pid=<14783>, expunge: box=INBOX, uid=0, msgid=, size=15077

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: INDEX= and dovecot.index.log / dovecot.list.index.log
On 16/01/2020 22:12, Arkadiusz Miśkiewicz wrote:
> Using 2.3.8 and
>
> mail_location = maildir:/var/mail/%Ln:CONTROL=/var/lib/dovecot/control/%Ln:VOLATILEDIR=/var/lib/dovecot/control/%Ln:INDEX=/var/lib/dovecot/control/%Ln
>
> + also
>
> user_query = SELECT ... CONCAT('maildir:/var/mail/', LOWER(u.login), '/:CONTROL=/var/lib/dovecot/control/%Ln:VOLATILEDIR=/var/lib/dovecot/control/%Ln:INDEX=/var/lib/dovecot/control/%Ln') AS mail, ...
>
> but I noticed that dovecot creates
>
> dovecot.index.log
> dovecot.list.index.log
>
> still in /var/mail/XYZ instead of /var/lib/dovecot/control/XYZ.
>
> Is that normal? I would expect /var/lib/dovecot/control/XYZ to be used also for these two files.
>
> ps. lmtp is used here

Actually both places are used for dovecot.index.log:

# ls -l /var/lib/dovecot/control/stoya/.Sent/dovecot* /var/mail/stoya/.Sent/dovecot*
-rw------- 1 308041 163376     34 Jan 16 08:29 /var/lib/dovecot/control/stoya/.Sent/dovecot-keywords
-rw------- 1 308041 163376   5814 Jan 16 13:12 /var/lib/dovecot/control/stoya/.Sent/dovecot-uidlist
-rw------- 1 308041 163376   1588 Jan 11 04:48 /var/lib/dovecot/control/stoya/.Sent/dovecot.index
-rw------- 1 308041 163376 148428 Jan 16 20:40 /var/lib/dovecot/control/stoya/.Sent/dovecot.index.cache
-rw------- 1 308041 163376   8020 Jan 16 13:12 /var/lib/dovecot/control/stoya/.Sent/dovecot.index.log
-rw------- 1 308041 163376     40 Jan 16 21:55 /var/mail/stoya/.Sent/dovecot.index.log

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
INDEX= and dovecot.index.log / dovecot.list.index.log
Using 2.3.8 and

mail_location = maildir:/var/mail/%Ln:CONTROL=/var/lib/dovecot/control/%Ln:VOLATILEDIR=/var/lib/dovecot/control/%Ln:INDEX=/var/lib/dovecot/control/%Ln

+ also

user_query = SELECT ... CONCAT('maildir:/var/mail/', LOWER(u.login), '/:CONTROL=/var/lib/dovecot/control/%Ln:VOLATILEDIR=/var/lib/dovecot/control/%Ln:INDEX=/var/lib/dovecot/control/%Ln') AS mail, ...

but I noticed that dovecot creates

dovecot.index.log
dovecot.list.index.log

still in /var/mail/XYZ instead of /var/lib/dovecot/control/XYZ.

Is that normal? I would expect /var/lib/dovecot/control/XYZ to be used also for these two files.

ps. lmtp is used here

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Panic: file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)
On 28/03/2019 09:36, Timo Sirainen wrote:
> On 28 Mar 2019, at 10.15, Arkadiusz Miśkiewicz <ar...@maven.pl> wrote:
>>
>> error = 0x55e3e2b40ac0 "Fixed index file /var/mail/piast_efaktury/dovecot.index: log_file_seq 13 -> 15",
>> nodiskspace = true,
>
> This was one of the things I was first wondering, but I'm not sure why it's not logging an error. Anyway, you're using filesystem quota? And this index is large enough that trying to rewrite it brings the user over quota?

That's possible; fs quota is in use and the user is close to being full:

used: 19061.63 MiB (19987.57 MB)
hard: 19073.49 MiB (2.00 MB)

So some initial write/rewrite of the indexes corrupts them, and then fixing doesn't accomplish anything because nothing can be written while over quota (but why doesn't it report over-quota errors in the logs?).

That's from the current try - lmtp trying to write, index already corrupted earlier:

10068 openat(AT_FDCWD, "/var/mail/piast_efaktury/dovecot.index.log", O_RDWR|O_APPEND) = 15
10068 fstat(15, {st_mode=S_IFREG|0600, st_size=40, ...}) = 0
10068 pread64(15, "\1\3(\0\330\2\232\\\17\0\0\0\16\0\0\0(\200\0\0o&\232\\_\10\0\0\0\0\0\0\1\0\0\0\0\0\0\0", 40, 0) = 40
10068 openat(AT_FDCWD, "/var/mail/piast_efaktury/dovecot.index", O_RDWR) = 16
10068 fstat(16, {st_mode=S_IFREG|0600, st_size=13555228, ...}) = 0
10068 mmap(NULL, 13555228, PROT_READ|PROT_WRITE, MAP_PRIVATE, 16, 0) = 0x7f1abc8e3000
10068 openat(AT_FDCWD, "/var/mail/piast_efaktury/dovecot.index.log.2", O_RDWR|O_APPEND) = 17
10068 fstat(17, {st_mode=S_IFREG|0600, st_size=32808, ...}) = 0
10068 pread64(17, "\1\3(\0\330\2\232\\\16\0\0\0\r\0\0\0x\200\0\0\277 \232\\|\7\0\0\0\0\0\0\1\0\0\0\0\0\0\0", 40, 0) = 40
10068 close(17) = 0
10068 write(2, "\1\01010068 prefix=lmtp(piast_efaktury): pid=<10068> session=, \n", 84) = 84
10068 write(2, "\1\00410068 Index /var/mail/piast_efaktury/dovecot.index: Lost log for seq=13 offset=25648: Missing middle file seq=13 (between 13..4294967295, we have seqs 14,15): .log.2 contains file_seq=14 
(initial_ma"..., 229) = 229 10068 write(2, "\1\00310068 fscking index file /var/mail/piast_efaktury/dovecot.index\n", 66) = 66 10068 alarm(180)= 0 10068 fcntl(15, F_SETLKW, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=0, l_len=0}) = 0 10068 alarm(0) = 180 10068 stat("/var/mail/piast_efaktury/dovecot.index.log", {st_mode=S_IFREG|0600, st_size=40, ...}) = 0 10068 fstat(15, {st_mode=S_IFREG|0600, st_size=40, ...}) = 0 10068 write(2, "\1\00410068 Fixed index file /var/mail/piast_efaktury/dovecot.index: log_file_seq 13 -> 15\n", 87) = 87 10068 umask(000)= 077 10068 openat(AT_FDCWD, "/var/mail/piast_efaktury/dovecot.index.tmp", O_RDWR|O_CREAT|O_EXCL, 0600) = 17 10068 umask(077)= 000 10068 fstat(17, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0 10068 write(17, "\7\3x\0\320\0\0\0\f\0\0\0\1\0\0\0\330\2\232\\\4\0\0\0\307A\260R\316\20D\0q<\21\0\0\0\0\0f\300\17\0\0\0\0\0\304\20D\0\303\224B\0\316\20D\0\17\0\0\0(\0\0\0(\0\0\0\0\0\0\0\177\34\232\\\0\0\0\0p]\231\\\240\363C\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\330\2\232\\\10\0\4\0\4\0\5\0cache\0\0\0$\0\0\0\0\0\0\0\0\0\0\0\0\0\7\0maildir\0\214\37\232\\\214\37\232\\\333!\0233\330\37\232\\\214\37\232\\\333!\0233\330\37\232\\T\250\31."..., 208) = 208 10068 write(17, "]\3242\0\10\0\0\0\0\0\0\0^\3242\0\10\0\0\0\0\0\0\0_\3242\0\10\0\0\0\0\0\0\0`\3242\0\10\0\0\0\0\0\0\0a\3242\0\10\0\0\0\0\0\0\0b\3242\0\10\0\0\0\0\0\0\0c\3242\0\10\0\0\0\0\0\0\0d\3242\0\10\0\0\0\0\0\0\0e\3242\0\10\0\0\0\0\0\0\0f\3242\0\10\0\0\0\0\0\0\0g\3242\0\10\0\0\0\0\0\0\0h\3242\0\10\0\0\0\0\0\0\0i\3242\0\10\0\0\0\0\0\0\0j\3242\0\10\0\0\0\0\0\0\0k\3242\0\10\0\0\0\0\0\0\0l\3242\0\10\0\0\0\0\0\0\0m\3242\0\10\0\0\0"..., 13555020) = 8912688 10068 write(17, 
"\241)>\0\10\0\0\0\0\0\0\0\242)>\0\10\0\0\0\0\0\0\0\243)>\0\10\0\0\0\0\0\0\0\244)>\0\10\0\0\0\0\0\0\0\245)>\0\10\0\0\0\0\0\0\0\246)>\0\10\0\0\0\0\0\0\0\247)>\0\10\0\0\0\0\0\0\0\250)>\0\10\0\0\0\0\0\0\0\251)>\0\10\0\0\0\0\0\0\0\252)>\0\10\0\0\0\0\0\0\0\253)>\0\10\0\0\0\0\0\0\0\254)>\0\10\0\0\0\0\0\0\0\255)>\0\10\0\0\0\0\0\0\0\256)>\0\10\0\0\0\0\0\0\0\257)>\0\10\0\0\0\0\0\0\0\260)>\0\10\0\0\0\0\0\0\0\261)>\0\10\0\0\0"..., 4642332) = -1 EDQUOT (Disk quota exceeded) 10068 close(17) = 0 10068 unlink("/var/mail/piast_efaktury/dovecot.index.tmp") = 0 10068 write(2, "\1\00610068 file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)\n", 119) = 119 (I guess I have to finally separate indexes out to separate partition (outside of quota) because more of these "not dealing right
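Editor's note: the truncated parenthetical above is the usual conclusion for this failure mode - if indexes live on a filesystem where the user's quota is enforced, rewriting the index can fail with EDQUOT exactly as in the strace. A sketch of that mitigation (paths purely illustrative), keeping mail under quota but moving indexes outside it:

```
# dovecot.conf (sketch): mail stays on the quota-enforced filesystem,
# indexes move to a separate location not counted against user quota.
mail_location = maildir:/var/mail/%n:INDEX=/var/lib/dovecot/index/%n
```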
Re: Panic: file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)
On 27/03/2019 16:12, Timo Sirainen wrote:
> On 27 Mar 2019, at 14.58, Timo Sirainen via dovecot <dovecot@dovecot.org> wrote:
>>
>>> dovecot isn't able to auto fix the indexes and manual deletion is required in all such cases
>>
>> So if it keeps repeating, it's very strange. Could you send me such broken dovecot.index and dovecot.index.log files (without dovecot.index.cache)? They shouldn't contain anything sensitive (only message flags).
>
> Tested with the index files you sent. It gets fixed automatically in my tests.
>
> The backtrace shows that after fsck it fails to write the fixed index to the disk, because mail_index_write() fails for some reason. Except there's no error logged about it, which is rather weird. Do you still have the lmtp core? Could you do:
>
> fr 9
> p *log.index

(gdb) f 9
#9 0x7f25fc9e709d in mail_transaction_log_move_to_memory (log=0x55e3e2b81eb0) at mail-transaction-log.c:171
171 mail_transaction_log_close(log);
(gdb) *log.index
Undefined command: "". Try "help".
(gdb) print *log.index
There is no member named index.
(gdb) print *log $8 = { 0x55e3e2b81c20, views = 0x0, filepath = 0x55e3e2c2adc0 "/var/mail/piast_efaktury/dovecot.index.log", filepath2 = 0x55e3e2b6f380 "/var/mail/piast_efaktury/dovecot.index.log.2", files = 0x55e3e2b7ab20, head = 0x55e3e2b7ab20, open_file = 0x0, dotlock_count = 0, dotlock = 0x0, nfs_flush = false, log_2_unlink_checked = true } (gdb) print *(struct mail_index *)0x55e3e2b81c20 $15 = { dir = 0x55e3e2ba8970 "/var/mail/piast_efaktury", prefix = 0x55e3e2baa1a0 "dovecot.index", cache_dir = 0x55e3e2b888c0 "/var/mail/piast_efaktury", event = 0x55e3e2b7a930, cache = 0x0, log = 0x55e3e2b81eb0, open_count = 0, flags = (MAIL_INDEX_OPEN_FLAG_CREATE | MAIL_INDEX_OPEN_FLAG_DOTLOCK_USE_EXCL | MAIL_INDEX_OPEN_FLAG_SAVEONLY), fsync_mode = FSYNC_MODE_OPTIMIZED, fsync_mask = (unknown: 0), mode = 384, gid = 4294967295, gid_origin = 0x55e3e2c2ad60 "/var/mail/piast_efaktury", optimization_set = { { rewrite_min_log_bytes = 8192, rewrite_max_log_bytes = 131072 }, log = { min_size = 32768, max_size = 1048576, min_age_secs = 300, log2_max_age_secs = 172800 }, cache = { unaccessed_field_drop_secs = 2592000, record_max_size = 65536, compress_min_size = 32768, compress_delete_percentage = 20, compress_continued_percentage = 200, compress_header_continue_count = 4 } }, pending_log2_rotate_time = 0, extension_pool = 0x55e3e2c2bf90, extensions = { arr = { buffer = 0x55e3e2c2bfb8, element_size = 48 }, v = 0x55e3e2c2bfb8, v_modifiable = 0x55e3e2c2bfb8 }, ext_hdr_init_id = 0, ext_hdr_init_data = 0x0, sync_lost_handlers = { arr = { buffer = 0x55e3e2b41900, element_size = 8 }, v = 0x55e3e2b41900, v_modifiable = 0x55e3e2b41900 --Type for more, q to quit, c to continue without paging--c }, filepath = 0x55e3e2c2ad90 "/var/mail/piast_efaktury/dovecot.index", fd = 16, map = 0x55e3e2c2c380, last_mmap_error_time = 0, indexid = 1553597144, inconsistency_id = 0, last_read_log_file_seq = 13, last_read_log_file_tail_offset = 25648, fsck_log_head_file_seq = 15, fsck_log_head_file_offset = 40, 
sync_commit_result = 0x0, lock_method = FILE_LOCK_METHOD_FCNTL, max_lock_timeout_secs = 4294967295, keywords_pool = 0x55e3e2c2a400, keywords = { arr = { buffer = 0x55e3e2c2a5f0, element_size = 8 }, v = 0x55e3e2c2a5f0, v_modifiable = 0x55e3e2c2a5f0 }, keywords_hash = { _table = 0x55e3e2c2a6c0, _key = 0x55e3e2c2a6c0 "", _keyp = 0x55e3e2c2a6c0, _const_key = 0x55e3e2c2a6c0 "", _value = 0x55e3e2c2a6c0, _valuep = 0x55e3e2c2a6c0 }, keywords_ext_id = 0, modseq_ext_id = 1, views = 0x0, module_contexts = { arr = { buffer = 0x55e3e2b41940, element_size = 8 }, v = 0x55e3e2b41940, v_modifiable = 0x55e3e2b41940 }, error = 0x55e3e2b40ac0 "Fixed index file /var/mail/piast_efaktury/dovecot.index: log_file_seq 13 -> 15", nodiskspace = true, index_lock_timeout = false, index_delete_requested = false, index_deleted = false, log_sync_locked = true, readonly = false, mapping = true, syncing = false, need_recreate = false, index_min_write = false, modseqs_enabled = false, initial_create = false, initial_mapped = false, fscked = true } (gdb) -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Panic: file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)
=13 offset=25648: > Missing middle file seq=13 (between 13..4294967295, we have seqs 14,15): > .log.2 contains file_seq=14 (initial_mapped=0, reason=Index mapped) > Mar 26 14:18:12 mbox dovecot[10001]: pop3(piast_efaktury): pid=<5694> > session=, Warning: fscking index file > /var/mail/piast_efaktury/dovecot.index > Mar 26 14:18:12 mbox dovecot[10001]: pop3(piast_efaktury): pid=<5694> > session=, Error: Fixed index file > /var/mail/piast_efaktury/dovecot.index: log_file_seq 13 -> 15 > Mar 26 14:18:12 mbox dovecot[10001]: pop3(piast_efaktury): pid=<5694> > session=, Panic: file mail-transaction-log-file.c: line 105 > (mail_transaction_log_file_free): assertion failed: (!file->locked) > Mar 26 14:18:12 mbox dovecot[10001]: pop3(piast_efaktury): pid=<5694> > session=, Error: Raw backtrace: > /usr/lib64/dovecot/libdovecot.so.0(+0xe011b) [0x7f20b6e2911b] -> > /usr/lib64/dovecot/libdovecot.so.0(+0xe01b1) [0x7f20b6e291b1] -> > /usr/lib64/dovecot/libdovecot.so.0(+0x4bf56) [0x7f20b6d94f56] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(+0x4b17e) [0x7f20b6f2d17e] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_logs_clean+0x5c) > [0x7f20b6fe4e3c] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_close+0x34) > [0x7f20b6fe4f04] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_move_to_memory+0xed) > [0x7f20b6fe509d] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_move_to_memory+0x60) > [0x7f20b6fdf0b0] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_write+0x1e1) > [0x7f20b6fdd3a1] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_fsck+0x68a) > [0x7f20b6fc813a] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x5b0) > [0x7f20b6fd2090] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_map+0x13b) > [0x7f20b6fca1eb] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xfcc96) > [0x7f20b6fdec96] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xfd2a7) > [0x7f20b6fdf2a7] -> > 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_open+0x7a) > [0x7f20b6fdf3aa] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(index_storage_mailbox_open+0xac) > [0x7f20b6fb8c1c] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x84df9) > [0x7f20b6f66df9] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x84ecf) > [0x7f20b6f66ecf] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xbe11c) > [0x7f20b6fa011c] -> /usr/lib64/dovecot/plugins/lib20_zlib_plugin.so(+0x46ee) > [0x7f20b36896ee] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x5bd56) > [0x7f20b6f3dd56] -> > /usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_open+0x4a) > [0x7f20b6f3df3a] -> dovecot/pop3 [piast_efaktury > 1.1.1.1](client_init_mailbox+0x7a) [0x5638a04d1bda] -> dovecot/pop3 > [piast_efaktury 1.1.1.1](+0x5645) [0x5638a04d0645] -> dovecot/pop3 > [piast_efaktury 1.1.1.1](+0x5882) [0x5638a04d0882] -> > /usr/lib64/dovecot/libdovecot.so.0(+0x74021) [0x7f20b6dbd021] -> > /usr/lib64/dovecot/libdovecot.so.0(+0x7439b) [0x7f20b6dbd39b] -> > /usr/lib64/dovecot/libdovecot.so.0(+0x74d1d) [0x7f20b6dbdd1d] > Mar 26 14:18:12 mbox dovecot[10001]: pop3(piast_efaktury): pid=<5694> > session=, Fatal: master: service(pop3): child 5694 killed > with signal 6 (core dumps disabled - > https://dovecot.org/bugreport.html#coredumps) dovecot isn't able to auto fix the indexes and manual deletion is required in all such cases -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: dovecot/config processes one more time - which are safe to kill?
On 14/01/2019 01:43, Timo Sirainen wrote:
> On 13 Dec 2018, at 11.18, Arkadiusz Miśkiewicz wrote:
>>
>> Hello.
>>
>> The problem with dovecot/config processes never ending and spawning new
>> one on each reload
>> (https://www.dovecot.org/list/dovecot/2016-November/106058.html) is
>> becoming a problem here:
>>
>> # ps aux|grep dovecot/config|wc -l
>> 206
>
> I think you also have 206 other Dovecot processes that are keeping the config
> process open? Maybe 206 imap-login processes or something? Anyway I'd expect
> that this would happen only if some other process is keeping a UNIX socket
> connection open to the config process. Unless of course there's some bug that
> just isn't shutting them down even though they don't have any connections..
> But at least I couldn't easily reproduce that.
>
> I suppose there isn't much of a reason for existing processes to keep the
> config socket open after reload, so a patch like below would likely help.
> Although it probably should be delayed so that existing imap/pop3-login
> connections doing STARTTLS wouldn't fail if that causes a new config lookup.
>
> diff --git a/src/lib-master/master-service.c b/src/lib-master/master-service.c
> index 3de11fa1b..41005cb5e 100644
> --- a/src/lib-master/master-service.c
> +++ b/src/lib-master/master-service.c
> @@ -815,6 +815,7 @@ void master_service_stop_new_connections(struct master_service *service)
> 	}
> 	if (service->login != NULL)
> 		master_login_stop(service->login);
> +	master_service_close_config_fd(service);
> }
>
> bool master_service_is_killed(struct master_service *service)

Wouldn't it be better if these other processes simply reconnected to the current config/stats/log processes?
(I'm killing n-1 processes, leaving the youngest one alive.)

Hm, killing older "config" processes doesn't report that the killed process was used by anything, but likely this is only a logging behaviour difference (between killing config/stats vs killing log, which reports things like: Shutting down logging for 'imap-login: ' with 16 clients).

Mar 14 06:10:10 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 06:22:11 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 06:52:13 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 07:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=4825 uid=0 code=kill)
Mar 14 07:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=4825 uid=0 code=kill)
Mar 14 07:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=4825 uid=0 code=kill)
Mar 14 07:10:15 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 07:16:16 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 08:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=20047 uid=0 code=kill)
Mar 14 08:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=20047 uid=0 code=kill)
Mar 14 08:28:21 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 08:32:17 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 08:44:16 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 08:56:17 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 09:01:04 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=31423 uid=0 code=kill)
Mar 14 09:01:04 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=31423 uid=0 code=kill)
Mar 14 09:01:04 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=31423 uid=0 code=kill)
Mar 14 09:01:04 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=31423 uid=0 code=kill)
Mar 14 11:00:22 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 11:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=31710 uid=0 code=kill)
Mar 14 11:02:22 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 11:08:21 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 11:40:20 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 12:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=3995 uid=0 code=kill)
Mar 14 12:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=3995 uid=0 code=kill)
Mar 14 12:01:02 mailbox dovecot: config: Warning: Killed with signal 15 (by pid=3995 uid=0 code=kill)
Mar 14 12:06:19 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 12:16:21 mailbox dovecot: master: Warning: SIGHUP received - reloading configuration
Mar 14 12:36:20 mailbox doveco
Re: dovecot/config processes one more time - which are safe to kill?
On 13/12/2018 10:18, Arkadiusz Miśkiewicz wrote:
>
> Hello.
>
> The problem with dovecot/config processes never ending and spawning new
> one on each reload
> (https://www.dovecot.org/list/dovecot/2016-November/106058.html) is
> becoming a problem here:
>
> # ps aux|grep dovecot/config|wc -l
> 206
>
> That's a lot of wasted memory - dovecot/config processes ate over 30GB
> of ram on 64GB box.

If anyone else uses shutdown_clients = no and has the same problem, here is a script that needs to be run from cron and kills old, unneeded config processes. Works fine here.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )

#!/usr/bin/python3
import psutil
import sys
import syslog

syslog.openlog(logoption=syslog.LOG_PID, facility=syslog.LOG_DAEMON)

program = sys.argv[0]
dovecot_pid = int(open('/var/run/dovecot/master.pid', 'r').read())

procs = []
for proc in psutil.process_iter():
    if proc.name() != "config":
        continue
    if proc.ppid() != dovecot_pid:
        msg = "%s: Found process: %s but parent is not dovecot master pid (%d). Skipping." % (program, repr(proc), dovecot_pid)
        syslog.syslog(msg)
        print(msg, file=sys.stderr)
        continue
    procs.append(proc)

procs = sorted(procs, key=lambda p: p.create_time())

# terminate all processes but leave latest
for i in range(0, len(procs) - 1):
    proc = procs[i]
    syslog.syslog("%s: Terminating process: %s with TERM signal." % (program, repr(proc)))
    proc.terminate()

syslog.closelog()
Re: dovecot/config processes one more time - which are safe to kill?
On 13/12/2018 17:02, j.emerlik wrote:
> In my Dovecot 2.2.32 I do not have such a problem.

Most likely you don't use the shutdown_clients = no option.

> Thu, 13 Dec 2018 at 10:18 Arkadiusz Miśkiewicz <mailto:ar...@maven.pl>> wrote:
>
>
> Hello.
>
> The problem with dovecot/config processes never ending and spawning new
> one on each reload
> (https://www.dovecot.org/list/dovecot/2016-November/106058.html) is
> becoming a problem here:
>
> # ps aux|grep dovecot/config|wc -l
> 206
>
> That's a lot of wasted memory - dovecot/config processes ate over 30GB
> of ram on 64GB box.
>
> Before killing dovecot/config processes:
> # free -m
>               total        used        free      shared  buff/cache   available
> Mem:          64437       61656         483           0        2297
>
> after:
>
> # free -m
>               total        used        free      shared  buff/cache   available
> Mem:          64437       23676       37822           0        2939
>
> Currently on dovecot 2.3.3. I guess it's very low priority to handle
> that, so: how can I figure out which dovecot/config processes are safe
> to be killed by external script?
>
> Does "all beside 2 newest ones" rule look sane?
>
> Thanks,
> --
> Arkadiusz Miśkiewicz, arekm / ( maven.pl <http://maven.pl> |
> pld-linux.org <http://pld-linux.org> )

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
dovecot/config processes one more time - which are safe to kill?
Hello.

The problem with dovecot/config processes never ending and spawning a new one on each reload (https://www.dovecot.org/list/dovecot/2016-November/106058.html) is becoming a problem here:

# ps aux|grep dovecot/config|wc -l
206

That's a lot of wasted memory - dovecot/config processes ate over 30GB of RAM on a 64GB box.

Before killing dovecot/config processes:

# free -m
              total        used        free      shared  buff/cache   available
Mem:          64437       61656         483           0        2297

after:

# free -m
              total        used        free      shared  buff/cache   available
Mem:          64437       23676       37822           0        2939

Currently on dovecot 2.3.3. I guess it's very low priority to handle that, so: how can I figure out which dovecot/config processes are safe to be killed by an external script?

Does an "all besides the 2 newest ones" rule look sane?

Thanks,
--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: dovecot lmtp thinks that "disk quota exceeded" is "internal error"
On 13/11/2018 21:07, Aki Tuomi wrote:
>
>> On 13 November 2018 at 22:06 Arkadiusz Miśkiewicz wrote:
>>
>> On 13/11/2018 15:54, Arkadiusz Miśkiewicz wrote:
>>>
>>> 2.2.36 (not migrated to 2.3 yet) reports such problem:
>>>
>>>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: open(/var/mail/xxx/mailboxes.lock1bf6ad16b7b8b703) failed: Disk quota exceeded
>>>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: Couldn't create mailbox list lock /var/mail/xxx/mailboxes.lock: file_create_locked(/var/mail/xxx/mailboxes.lock) failed: safe_mkstemp(/var/mail/xxx/mailboxes.lock) failed: Disk quota exceeded
>>>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, msgid=: save failed to open mailbox INBOX.Spam: Internal error occurred. Refer to server log for more information. [2018-11-13 15:50:58]
>>>
>>> Looks a bug to me since disk exceeded is not a internal error. Shouldn't
>>> lmtp return over quota info instead of error?
>>
>> Just to confirm - dovecot 2.3.3 - the same behaviour, internal error
>
> Are you using quota:fs?

Yes. I remember there was a similar problem and the solution/workaround was to keep CONTROL= files on a non-quota partition.

> Aki

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: dovecot lmtp thinks that "disk quota exceeded" is "internal error"
On 13/11/2018 21:07, Sami Ketola wrote:
>
>> On 13 Nov 2018, at 21.06, Arkadiusz Miśkiewicz wrote:
>>
>> On 13/11/2018 15:54, Arkadiusz Miśkiewicz wrote:
>>>
>>> 2.2.36 (not migrated to 2.3 yet) reports such problem:
>>>
>>>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: open(/var/mail/xxx/mailboxes.lock1bf6ad16b7b8b703) failed: Disk quota exceeded
>>>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: Couldn't create mailbox list lock /var/mail/xxx/mailboxes.lock: file_create_locked(/var/mail/xxx/mailboxes.lock) failed: safe_mkstemp(/var/mail/xxx/mailboxes.lock) failed: Disk quota exceeded
>>>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, msgid=: save failed to open mailbox INBOX.Spam: Internal error occurred. Refer to server log for more information. [2018-11-13 15:50:58]
>>>
>>> Looks a bug to me since disk exceeded is not a internal error. Shouldn't
>>> lmtp return over quota info instead of error?
>>
>> Just to confirm - dovecot 2.3.3 - the same behaviour, internal error
>
> Dovecot can't create the lock file and it's treated as internal error. Why do
> you think that it should not be treated as such?

Dovecot knows it's an over-quota error and can report it that way, just like it reports other over-quota conditions.

> Sami

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: dovecot lmtp thinks that "disk quota exceeded" is "internal error"
On 13/11/2018 15:54, Arkadiusz Miśkiewicz wrote:
>
> 2.2.36 (not migrated to 2.3 yet) reports such problem:
>
>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: open(/var/mail/xxx/mailboxes.lock1bf6ad16b7b8b703) failed: Disk quota exceeded
>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: Couldn't create mailbox list lock /var/mail/xxx/mailboxes.lock: file_create_locked(/var/mail/xxx/mailboxes.lock) failed: safe_mkstemp(/var/mail/xxx/mailboxes.lock) failed: Disk quota exceeded
>> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, msgid=: save failed to open mailbox INBOX.Spam: Internal error occurred. Refer to server log for more information. [2018-11-13 15:50:58]
>
> Looks a bug to me since disk exceeded is not a internal error. Shouldn't
> lmtp return over quota info instead of error?

Just to confirm - dovecot 2.3.3 - the same behaviour, internal error

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
dovecot 2.2/openssl 1.0 vs dovecot 2.3/openssl 1.1.1 ssl regression
Hi. I'm considering dovecot migration from 2.2.36 run with openssl 1.0.2o to dovecot 2.3.3 run with openssl 1.1.1. Currently I have both variants running with identical configs and certs (the only differences are due to config syntax changes in dovecot 2.3), so for example on both I have: ssl_ca = > wildcard_crt.pem solves the problem and dovecot starts providing both certs to clients but if that's the proper way of solving this issue then what's the point of having ssl_ca config setting? Ideas? -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
dovecot lmtp thinks that "disk quota exceeded" is "internal error"
2.2.36 (not migrated to 2.3 yet) reports such a problem:

> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: open(/var/mail/xxx/mailboxes.lock1bf6ad16b7b8b703) failed: Disk quota exceeded
> Nov 13 15:50:58 mbox dovecot: lmtp(xxx): session=, Error: Couldn't create mailbox list lock /var/mail/xxx/mailboxes.lock: file_create_locked(/var/mail/xxx/mailboxes.lock) failed: safe_mkstemp(/var/mail/xxx/mailboxes.lock) failed: Disk quota exceeded
> Nov 13 15:50:58 mbox dovecot: lmtp(awypior): session=, msgid=: save failed to open mailbox INBOX.Spam: Internal error occurred. Refer to server log for more information. [2018-11-13 15:50:58]

Looks like a bug to me, since disk quota exceeded is not an internal error. Shouldn't lmtp return over-quota info instead of an internal error?

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
compressed folders
Hello. Is this still true? https://www.dovecot.org/list/dovecot/2013-March/089084.html Ability to have specific folders compressed only. -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
mail_max_userip_connections from userdb query
Hello.

Is it still true that mail_max_userip_connections cannot be overridden in a userdb query? I want to lower the global limit and raise it for some logins.

https://www.dovecot.org/pipermail/dovecot/2017-July/108520.html

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lmtp service timeouting even after receiving full message
On Friday 23 of March 2018, Arkadiusz Miśkiewicz wrote: > On Friday 23 of March 2018, Aki Tuomi wrote: > > Shouldn't the last dot be own it's own line? > > Oh, right. So exim fault (spool wireformat feature breaks this). And just got fixed: https://github.com/Exim/exim/commit/f64e8b5f3b01b5285bb1f9172da7950e7f000c22 > > > Aki > > Thanks, -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lmtp service timeouting even after receiving full message
On Friday 23 of March 2018, Aki Tuomi wrote:
>
> Shouldn't the last dot be own it's own line?

Oh, right. So it's exim's fault (the spool wireformat feature breaks this).

> Aki

Thanks,
--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lmtp service timeouting even after receiving full message
On Thursday 22 of March 2018, Arkadiusz Miśkiewicz wrote: > I have a problem with some messages passed from exim to dovecot lmtp > service: > > From exim debug: > > using socket /var/run/dovecot/lmtp > LMTP<< 220 mbox8 ready > LMTP>> LHLO mbox8... > LMTP<< 250-mbox8 > LMTP<< 250-STARTTLS > LMTP<< 250-8BITMIME > LMTP<< 250-ENHANCEDSTATUSCODES > LMTP<< 250 PIPELINING > LMTP>> MAIL FROM:... > LMTP<< 250 2.1.0 OK > LMTP>> RCPT TO:... > LMTP<< 250 2.1.5 OK > LMTP>> DATA > LMTP<< 354 OK > LMTP>> writing message and terminating "." > cannot use sendfile for body: terminating dot wanted > writing data block fd=6 size=6585 timeout=300 > LMTP>> QUIT > LMTP<< 421 4.4.2 mbox8 Disconnected client for inactivity > > > now above "writting message and terminating" according to exim strace looks > like this: > > 13413 22:07:24 write(2, " LMTP>> writing message and terminating \".\"\n", > 45) = 45 <0.18> 13413 22:07:24 stat("/etc/localtime", > {st_mode=S_IFREG|0644, st_size=2705, ...}) = 0 <0.25> 13413 22:07:24 > write(2, "cannot use sendfile for body: terminating dot wanted\n", 53) = > 53 <0.22> 13413 22:07:24 lseek(3, 19, SEEK_SET) = 19 <0.15> > [...] > 13413 22:07:24 write(2, "writing data block fd=6 size=6585 timeout=300\n", > 46) = 46 <0.19> 13413 22:07:24 alarm(300) = 0 <0.13> > 13413 22:07:24 write(6, "Return-path: > [...] > \nwrap\">.\n", 6585) = 6585 <0.23> > > and then exim waits for dovecot lmtp to say it accepted message but lmtp > never does that. > > Isn't .\n always enough for dovecot to signal end of message? Because exim > wrote full message to dovecot lmtp unix socket. > > Now the interesting this - exim does only one write() with 6585 bytes to > write that message to lmtp. If exim needs more than one write() to do that > (message is larger by some buffer) then it succeeds, dovecot accepts it. > > Could dovecot lmtp handling for single write() case be broken? > > dovecot 2.2.35 And reproducer. Both messages have ".\n" as ending but one has \r\n before it. 
#!/usr/bin/python3
import socket

c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
c.connect("/var/run/dovecot/lmtp")
print(c.recv(1024))

print("LHLO")
c.send(b"LHLO mbox8\r\n")
print(c.recv(1024))

print("MAIL FROM")
c.send(b"MAIL FROM:<some...@example.com>\r\n")
print(c.recv(1024))

print("RCPT TO")
c.send(b"RCPT TO:<ar...@example.com>\r\n")
print(c.recv(1024))

print("DATA")
c.send(b"DATA\r\n")
print(c.recv(1024))

print("body...")
msg_fail = b"testtest.\n"
msg_ok = b"01, 41.82, 30.14\r\n.\n"
#c.send(msg_ok)
c.send(msg_fail)
print(c.recv(1024))

c.close()

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
lmtp service timeouting even after receiving full message
I have a problem with some messages passed from exim to the dovecot lmtp service:

From exim debug:

using socket /var/run/dovecot/lmtp
LMTP<< 220 mbox8 ready
LMTP>> LHLO mbox8...
LMTP<< 250-mbox8
LMTP<< 250-STARTTLS
LMTP<< 250-8BITMIME
LMTP<< 250-ENHANCEDSTATUSCODES
LMTP<< 250 PIPELINING
LMTP>> MAIL FROM:...
LMTP<< 250 2.1.0 OK
LMTP>> RCPT TO:...
LMTP<< 250 2.1.5 OK
LMTP>> DATA
LMTP<< 354 OK
LMTP>> writing message and terminating "."
cannot use sendfile for body: terminating dot wanted
writing data block fd=6 size=6585 timeout=300
LMTP>> QUIT
LMTP<< 421 4.4.2 mbox8 Disconnected client for inactivity

Now, the above "writing message and terminating" looks like this according to exim strace:

13413 22:07:24 write(2, " LMTP>> writing message and terminating \".\"\n", 45) = 45 <0.18>
13413 22:07:24 stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2705, ...}) = 0 <0.25>
13413 22:07:24 write(2, "cannot use sendfile for body: terminating dot wanted\n", 53) = 53 <0.22>
13413 22:07:24 lseek(3, 19, SEEK_SET) = 19 <0.15>
[...]
13413 22:07:24 write(2, "writing data block fd=6 size=6585 timeout=300\n", 46) = 46 <0.19>
13413 22:07:24 alarm(300) = 0 <0.13>
13413 22:07:24 write(6, "Return-path: [...] \nwrap\">.\n", 6585) = 6585 <0.23>

and then exim waits for dovecot lmtp to say it accepted the message, but lmtp never does that.

Isn't .\n always enough for dovecot to signal the end of the message? Exim wrote the full message to the dovecot lmtp unix socket.

Now the interesting thing - exim does only one write() of 6585 bytes to write that message to lmtp. If exim needs more than one write() to do that (message is larger than some buffer) then it succeeds and dovecot accepts it.

Could dovecot lmtp handling of the single write() case be broken?

dovecot 2.2.35

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
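The terminator rule at issue can be sketched as follows. This is a simplified check, not dovecot's actual LMTP parser, and it is deliberately lenient about bare LF line endings so that it matches the reproducer in this thread; RFC 5321 strictly requires CRLF "." CRLF.

```python
def ends_with_lone_dot(data):
    """True if `data` ends with the terminating dot on a line of its own."""
    if data.endswith(b".\r\n"):
        body = data[:-3]
    elif data.endswith(b".\n"):
        body = data[:-2]
    else:
        return False
    # The dot must start a fresh line: either the payload is just the dot,
    # or the byte right before it is a line break.
    return body == b"" or body.endswith(b"\n")

# The two payloads from the reproducer in this thread:
assert ends_with_lone_dot(b"01, 41.82, 30.14\r\n.\n")  # accepted
assert not ends_with_lone_dot(b"testtest.\n")          # LMTP keeps waiting
```

The point is that ".\n" at the very end is not enough on its own: the dot also has to begin a new line, which is exactly what exim's wireformat spool failed to guarantee.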
Re: v2.2.35 released
On Wednesday 21 of March 2018, Arkadiusz Miśkiewicz wrote:
> On Monday 19 of March 2018, Aki Tuomi wrote:
>> https://dovecot.org/releases/2.2/dovecot-2.2.35.tar.gz
>> https://dovecot.org/releases/2.2/dovecot-2.2.35.tar.gz.sig
> [...]
>> - Fix local name handling in v2.2.34 SNI code, bug found by cPanel.
>
> That change broke handling of such entries
>
> local_name *.example.com {
>   ssl_cert =
>   ssl_key =
> }
>
> and for connection with pop3.example.com in TLS SNI default certificate is
> presented instead of domain specific one.
>
> Reverting
>
> commit 446c0b02a7802b676e893ccc4934fc7318d950ea
> Author: Aki Tuomi <aki.tu...@dovecot.fi>
> Date:   Tue Mar 6 15:15:01 2018 +0200
>
>     lib-master: Correctly match when local_name has multiple names
>
>     Reported by J. Nick Koston <n...@cpanel.net>
>
> fixes the problem.

And proper fix:

--- dovecot-2.2.35/src/lib-master/master-service-settings-cache.c	2018-03-21 10:15:09.097480691 +0100
+++ dovecot-2.2.35/src/lib-master/master-service-settings-cache.c~	2018-03-19 10:30:01.0 +0100
@@ -131,7 +131,7 @@ match_local_name(const char *local_name,
 			return TRUE;
 		local_name = ptr+1;
 	}
-	return dns_match_wildcard(filter_local_name, local_name) == 0;
+	return dns_match_wildcard(local_name, filter_local_name) == 0;
 }

 /* Remove any elements which there is no filter for */

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.35 released
On Monday 19 of March 2018, Aki Tuomi wrote:
> https://dovecot.org/releases/2.2/dovecot-2.2.35.tar.gz
> https://dovecot.org/releases/2.2/dovecot-2.2.35.tar.gz.sig
[...]
> - Fix local name handling in v2.2.34 SNI code, bug found by cPanel.

That change broke handling of such entries

local_name *.example.com {
  ssl_cert =
  ssl_key =
}

and for a connection with pop3.example.com in TLS SNI the default certificate is presented instead of the domain specific one.

Reverting

commit 446c0b02a7802b676e893ccc4934fc7318d950ea
Author: Aki Tuomi <aki.tu...@dovecot.fi>
Date:   Tue Mar 6 15:15:01 2018 +0200

    lib-master: Correctly match when local_name has multiple names

    Reported by J. Nick Koston <n...@cpanel.net>

fixes the problem.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Dovecot 2.3.0 TLS
On Thursday 11 of January 2018, Aki Tuomi wrote:
> Seems we might've made a unexpected change here when we revamped the ssl
> code.

Revamped, interesting - can it support millions of certs now on a single machine? (i.e. are certs loaded on demand instead of wasting memory)

> Aki

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
logging download speed for services
Jan 23 07:14:13 srv dovecot[7702]: imap(something): session=, Logged out bytes=189/2669
Jan 23 07:14:57 srv dovecot[7702]: pop3(abc): session=, Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0, bytes=12/43

Could dovecot compute the download speed for these sessions and log it? (ignoring idle time for imap and only counting fetches etc.)

The goal is to be able to compare users' speeds and notice degradation and other such problems.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
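In the meantime, the counters dovecot already logs can be post-processed for a rough comparison between users. A sketch assuming the bytes=<input>/<output> logout format shown above, with the session duration supplied externally (e.g. from matching the corresponding login line); the 10-second duration below is made up:

```python
import re

# Matches the bytes=<in>/<out> counters in dovecot's "Logged out" lines.
LOGOUT_RE = re.compile(r"bytes=(\d+)/(\d+)")

def session_throughput(logout_line, duration_secs):
    """Bytes sent to the client per second for one session (assuming the
    second counter is output to the client)."""
    m = LOGOUT_RE.search(logout_line)
    if m is None:
        raise ValueError("no bytes=in/out field in line")
    bytes_out = int(m.group(2))
    return bytes_out / duration_secs

line = ("Jan 23 07:14:13 srv dovecot[7702]: imap(something): "
        "session=<x>, Logged out bytes=189/2669")
print(session_throughput(line, 10.0))  # 266.9 bytes/s over an assumed 10 s session
```

This ignores idle time entirely, which is exactly the limitation the feature request above is asking dovecot to solve (by counting only the time spent in fetches).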
macros in config support
Hi.

Is there any support for macros in dovecot configuration - user defined macros?

For example:

server-config.conf:

%define my_db_name "aaa"
%define my_db_host "db.example.com"
%define my_db_pass "sdfdsfsdfsdf2313"
%define server_id 10
%define something blabla
...

dovecot-sql.conf:

!include server-config.conf
connect = host=%{my_db_host} dbname=%{my_db_name}
iterate_query = SELECT ... WHERE os.id_pop3_server=%{server_id}
...

Basically to be able to use own macros anywhere in config files.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
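Dovecot has no user-defined macro facility, but the same effect can be had by generating the config from a template at deploy time. A sketch using Python's string.Template; the %define file above is hypothetical, so here the shared values live in a dict instead:

```python
from string import Template

# Shared definitions, playing the role of the proposed server-config.conf.
server_vars = {
    "my_db_name": "aaa",
    "my_db_host": "db.example.com",
    "server_id": "10",
}

# Template playing the role of dovecot-sql.conf ("..." left as in the post).
sql_template = Template(
    "connect = host=${my_db_host} dbname=${my_db_name}\n"
    "iterate_query = SELECT ... WHERE os.id_pop3_server=${server_id}\n"
)

rendered = sql_template.substitute(server_vars)
print(rendered)
# A deploy script would write `rendered` out as dovecot-sql.conf and reload.
```

The same idea works with any templating tool (m4, jinja2, envsubst); the point is only that the expansion happens before dovecot ever reads the file.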
Re: secure setup for imap hibernation
On Friday 27 of October 2017, Aki Tuomi wrote:
> On 27.10.2017 11:20, Arkadiusz Miśkiewicz wrote:
>> Hi.
>>
>> What's the approach for securely enabling imap hibernation in case when
>> each user uses different uid and gid?
>>
>> Looks like none and 0666 on hibernation and imap master sockets is the
>> only way?
>>
>> Thanks,
>
> That's the only way, yes. Hibernation keeps all connections in same
> process.

Couldn't dovecot do setgroups(2) to add an additional common group to the imap/hibernation processes and rely on that for access to the sockets (sockets would be root:thatgroup 0660), thus making it a bit more secure? Non-mail-related uids/gids wouldn't have access to the sockets that way.

> Aki

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
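The suggestion relies on the standard Unix permission check: a root-owned socket with group thatgroup and mode 0660 is only usable by processes whose supplementary groups (the set setgroups(2) extends) include thatgroup. A minimal sketch of that check; the MAILACCESS GID and the uids are made-up values, and this is not dovecot code:

```python
def may_open_socket(proc_uid, proc_gids, sock_uid, sock_gid, sock_mode):
    """Simplified Unix access check: owner class, then group class
    (including supplementary groups), then other."""
    if proc_uid == sock_uid:
        return (sock_mode & 0o600) == 0o600
    if sock_gid in proc_gids:
        return (sock_mode & 0o060) == 0o060
    return (sock_mode & 0o006) == 0o006

MAILACCESS = 2000  # hypothetical shared GID given to all mail processes

# A per-user imap process (unique uid/gid) that also carries the shared
# group can use a root:mailaccess 0660 socket...
assert may_open_socket(5001, {5001, MAILACCESS}, 0, MAILACCESS, 0o660)
# ...while an unrelated uid/gid on the same box cannot, unlike with 0666.
assert not may_open_socket(6002, {6002}, 0, MAILACCESS, 0o660)
```

That last assertion is the whole argument: with 0666 the unrelated process would pass the "other" class, with 0660 plus a shared group it does not.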
secure setup for imap hibernation
Hi. What's the approach for securely enabling imap hibernation in case when each user uses different uid and gid? Looks like none and 0666 on hibernation and imap master sockets is the only way? Thanks, -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Dovecot >=2.2.29 + Filesystem quota = incorrect storage information
On Tuesday 17 of October 2017, Timo Sirainen wrote: > On 17 Oct 2017, at 18.02, Macka <dove...@macka.pl> wrote: > > I have to resume the thread. > > > > Apparently the problem is caused by the new /usr/include/sys/quota.h file > > (glibc-2.25 and newer) > > > > When I use the quota.h with glibc-headers-2.25, the filesystem quota > > limits are badly displayed. When using the same glibc-2.25 library but > > replacing ONLY one quota.h file from the older version of glibc-2.24, > > after compilation, the limits are correct. > > Looks like they removed the _LINUX_QUOTA_VERSION define from quota.h. This > causes Dovecot to assume it's quota v1. I wonder if there's a way to > detect that it's a new quota.h or should we just drop support for > _LINUX_QUOTA_VERSION==1.. Just reverse conditions? Assume quota version 2 and if _LINUX_QUOTA_VERSION is defined and ==1 then go for version 1. -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Dovecot and Letsencrypt certs
On Friday 08 of September 2017, Ralph Seichter wrote:
> On 08.09.2017 16:20, LuKreme wrote:
>> However, it seems like checking the certs is something that dovecot
>> should be doing on its own.
>
> What is Dovecot supposed to do? Keep track of the certificate expiry
> date?

That was already discussed, but for another reason: dovecot shouldn't load SSL certificates into memory up front; it should instead open & load a cert on demand (when a client connects and requests a particular domain via SNI, or the default if no SNI).

Why? Because dovecot *cannot* handle thousands of virtual domains and SSL certificates for them. It wastes so much RAM and times out on reloads in such cases. Tested here. [1]

That's why the only sensible solution is to work like exim - load certs from disk on demand. That fixes both problems - RAM waste/timeouts and refreshing certificates.

> -Ralph

1. https://dovecot.org/list/dovecot/2016-October/105855.html

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
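Exim-style on-demand loading boils down to resolving a certificate path from the SNI name at handshake time instead of preloading everything. Python's ssl module exposes the hook needed for a sketch; the CERT_DIR layout, the _wildcard naming and the fallback order are assumptions here, not dovecot or exim behaviour:

```python
import os
import ssl

CERT_DIR = "/etc/ssl/domains"  # hypothetical layout: one <name>.pem per domain

def cert_candidates(server_name):
    """Lookup order: exact SNI name, wildcard for the parent domain, default."""
    names = [server_name]
    if "." in server_name:
        names.append("_wildcard." + server_name.split(".", 1)[1])
    names.append("default")
    return names

def cert_path_for(server_name):
    for name in cert_candidates(server_name):
        path = os.path.join(CERT_DIR, name + ".pem")
        if os.path.exists(path):
            return path
    raise FileNotFoundError("no certificate for %s" % server_name)

def sni_callback(sslsocket, server_name, initial_context):
    # Build a fresh context per handshake, so nothing stays resident in RAM
    # between connections; a size-bounded cache would be the obvious refinement.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_path_for(server_name or "default"))
    sslsocket.context = ctx

listen_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
listen_ctx.sni_callback = sni_callback  # certs now load lazily, per connection
```

A renewed letsencrypt cert is picked up on the next handshake with no reload at all, which is the other half of the argument above.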
Re: v2.2.32 release candidate 2 released
On Tuesday 22 of August 2017, Timo Sirainen wrote:
> https://dovecot.org/releases/2.2/rc/dovecot-2.2.32.rc2.tar.gz
> https://dovecot.org/releases/2.2/rc/dovecot-2.2.32.rc2.tar.gz.sig
>
> A couple of changes since rc1:
>
> + Added apparmor plugin. See https://wiki2.dovecot.org/Plugins/Apparmor

Oh, so there's no way to set a separate hat for each user? (based on an sql query, for example)

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Correct settings for ssl protocols" and "ssl ciphers"
On Tuesday 17 of January 2017, Jerry wrote:
> I have the following two settings in my "10-ssl.conf" file
>
> # SSL protocols to use
> ssl_protocols = !SSLv2
>
> # SSL ciphers to use
> ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL
>
> I have seen different configurations while Googling. I am wondering
> what the consensus is for the best settings for these two items. What
> do the developers recommend?

Likely the same or similar to what browsers recommend. See

https://wiki.mozilla.org/Security/Server_Side_TLS
https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28default.29

Currently using:

ssl_cipher_list = ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
ssl_protocols = !SSLv2 !SSLv3

> Thanks!

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: nologin + reason -> logging reason
On Monday 21 of November 2016, Timo Sirainen wrote: > On 21 Nov 2016, at 16.39, Arkadiusz Miśkiewicz <ar...@maven.pl> wrote: > > Hi. > > > > I'm using nologin with own reason [1]. That works fine. For example pop3 > > client gets nice message like "-ERR [AUTH] Account is locked. Please > > contact support." > > > > > > Unfortunately maillog lacks information details about why user was not > > allowed to log in. > > > > pop3-login: Disconnected (auth failed, 1 attempts in 2 secs): > > user=, method=LOGIN, rip=1.1.1.1, lip=2.2.2.2, > > session= > > > > Is it possible to log "reason" there, too? (whether it is > > default/internal dovecot reason or my custom one). > > Does it work if you add: > > login_log_format_elements = $login_log_format_elements > reason=%{passdb:reason} Unfortunately, with this an empty reason is always logged (for both allowed and nologin users) Nov 22 07:09:08 mbox dovecot[31261]: pop3-login: Disconnected (auth failed, 1 attempts in 2 secs): user=, method=LOGIN, rip=1.1.1.1, lip=2.2.2.2, session=, reason= while the user got -ERR [AUTH] Account is locked. Please contact support. and I had: login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e local_name=%{local_name} %c session=<%{session}> reason=%{passdb:reason} -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: nologin + reason -> logging reason
On Monday 21 of November 2016, @lbutlr wrote: > On Nov 21, 2016, at 7:39 AM, Arkadiusz Miśkiewicz <ar...@maven.pl> wrote: > > reason is the only thing in maillog that allows to distinguish why user > > was not allowed to log in. > > Um… the only thing? How about where you set the reason in the first place? That "first" place is constantly changing (database) and I'm looking at logs from X days/weeks ago, so the database doesn't even have the old info. The log is the only place where it would make sense to store a reason. > I think the assumption with nologin is that the admin knows the reason, > especially considering that nologin is drastic and is almost certain to > confuse the user’s MUA, so should only be used in dire cases. There are 4 different and dynamically changing reasons possible, so it's not that simple. -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
nologin + reason -> logging reason
Hi. I'm using nologin with my own reason [1]. That works fine. For example, a pop3 client gets a nice message like "-ERR [AUTH] Account is locked. Please contact support." Unfortunately maillog lacks details about why the user was not allowed to log in. pop3-login: Disconnected (auth failed, 1 attempts in 2 secs): user=, method=LOGIN, rip=1.1.1.1, lip=2.2.2.2, session= Is it possible to log "reason" there, too? (whether it is a default/internal dovecot reason or my custom one). reason is the only thing in maillog that allows one to distinguish why a user was not allowed to log in. 1. http://wiki2.dovecot.org/PasswordDatabase/ExtraFields -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
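For reference, the extra-fields mechanism linked above lets an SQL passdb return nologin together with a reason. A minimal sketch of such a query (table and column names here are assumptions for illustration, not from the original post):

```
# dovecot-sql.conf.ext (sketch)
password_query = SELECT NULL AS password, 'Y' AS nologin, \
  'Account is locked. Please contact support.' AS reason \
  FROM users WHERE userid = '%u' AND locked = 'Y'
```

The reason field is what the client sees in the "-ERR [AUTH] ..." reply; the thread above is about also getting it into the server-side log.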
Re: BUG: nopassword doesn't work with CRAM-MD5
On Thursday 17 of November 2016, Aki Tuomi wrote: > On 17.11.2016 10:30, Arkadiusz Miśkiewicz wrote: > > On Thursday 17 of November 2016, Aki Tuomi wrote: > >> On 17.11.2016 10:14, Arkadiusz Miśkiewicz wrote: > >>> Hello. > >>> > >>> dovecot 2.2.26.0 > >>> > >>> When testing nopassword extra field > >>> (http://wiki2.dovecot.org/PasswordDatabase/ExtraFields) with CRAM-MD5 > >>> dovecot doesn't allow any password (while it should) and returns > >>> > >>> " Authentication failed" > >>> > >>> while in logs: > >>> > >>> Nov 17 08:22:34 auth-worker(1551): Info: > >>> sql(pepe,127.0.0.1,): Requested CRAM-MD5 scheme, but > >>> we have a NULL password > >>> > >>> NULL is there because our sql query returns empty password just like > >>> wiki says "nopassword: you want to allow all passwords, use an empty > >>> password and this field. " > >>> > >>> > >>> If password is returned in sql query then it fails, too: > >>> > >>> Nov 17 09:00:49 auth-worker(2206): Error: > >>> sql(pepe,127.0.0.1,): nopassword set but password is > >>> non- empty > >>> > >>> So looks to be a bug. > >> > >> It's not a bug. CRAM-MD5 does in fact require *some* password to work, > > > > Provide fake/random one for nopassword internally. > > > >> you can either store it with doveadm pw -S CRAM-MD5 or as plain text > >> password. > > > > Then I get > > > >>> sql(pepe,127.0.0.1,): nopassword set but password is > >>> non- empty > > > > So that doesn't help > > > > btw. doveadm pw -S is not documented, so no idea what it does > > > >> Aki > > sorry, typo. > > Ment doveadm pw -s CRAM-MD5 > > How do you perceive user login works with CRAM-MD5 if you do not provide > *any* password for the user? I can provide it and I want to do that but nopassword doesn't let me. > Some passdb backend must provide a password > for the user, if you want to load extra attributes from alternative > backend, use noauthenticate instead of nopassword, but make sure the > last passdb can authenticate the user. 
Ok, I'll try noauthenticate. > > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
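The reason Aki's answer holds is that CRAM-MD5 is a challenge-response scheme (RFC 2195): the client proves knowledge of the password by keying HMAC-MD5 with it over a server challenge, so the server must hold the same secret to repeat the computation. With a NULL password or an "accept anything" policy there is nothing to key the HMAC with. A minimal sketch of the exchange (usernames and the challenge are illustrative, not Dovecot code):

```python
import base64
import hashlib
import hmac

def cram_md5_client(username: str, password: str, challenge: bytes) -> bytes:
    # The client keys HMAC-MD5 with the shared password over the
    # server's challenge string (RFC 2195).
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{username} {digest}".encode())

def cram_md5_verify(stored_password: str, challenge: bytes,
                    reply: bytes) -> bool:
    # The server must redo the exact same computation, which is only
    # possible if it also knows the password; hence "nopassword"
    # (allow any password) cannot work with this mechanism.
    username, digest = base64.b64decode(reply).decode().rsplit(" ", 1)
    expected = hmac.new(stored_password.encode(), challenge,
                        hashlib.md5).hexdigest()
    return hmac.compare_digest(digest, expected)

challenge = b"<1896.697170952@postoffice.example.net>"
reply = cram_md5_client("pepe", "secret", challenge)
assert cram_md5_verify("secret", challenge, reply)      # same secret: ok
assert not cram_md5_verify("wrong", challenge, reply)   # mismatch: fails
```

This is also why storing the password with the CRAM-MD5 scheme (doveadm pw -s CRAM-MD5) works: that scheme stores the HMAC key material rather than a one-way hash.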
Re: BUG: nopassword doesn't work with CRAM-MD5
On Thursday 17 of November 2016, Aki Tuomi wrote: > On 17.11.2016 10:14, Arkadiusz Miśkiewicz wrote: > > Hello. > > > > dovecot 2.2.26.0 > > > > When testing nopassword extra field > > (http://wiki2.dovecot.org/PasswordDatabase/ExtraFields) with CRAM-MD5 > > dovecot doesn't allow any password (while it should) and returns > > > > " Authentication failed" > > > > while in logs: > > > > Nov 17 08:22:34 auth-worker(1551): Info: > > sql(pepe,127.0.0.1,): Requested CRAM-MD5 scheme, but we > > have a NULL password > > > > NULL is there because our sql query returns empty password just like wiki > > says "nopassword: you want to allow all passwords, use an empty > > password and this field. " > > > > > > If password is returned in sql query then it fails, too: > > > > Nov 17 09:00:49 auth-worker(2206): Error: > > sql(pepe,127.0.0.1,): nopassword set but password is > > non- empty > > > > So looks to be a bug. > > It's not a bug. CRAM-MD5 does in fact require *some* password to work, Provide fake/random one for nopassword internally. > you can either store it with doveadm pw -S CRAM-MD5 or as plain text > password. Then I get > > sql(pepe,127.0.0.1,): nopassword set but password is > > non- empty So that doesn't help btw. doveadm pw -S is not documented, so no idea what it does > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
BUG: nopassword doesn't work with CRAM-MD5
Hello. dovecot 2.2.26.0 When testing nopassword extra field (http://wiki2.dovecot.org/PasswordDatabase/ExtraFields) with CRAM-MD5 dovecot doesn't allow any password (while it should) and returns " Authentication failed" while in logs: Nov 17 08:22:34 auth-worker(1551): Info: sql(pepe,127.0.0.1,): Requested CRAM-MD5 scheme, but we have a NULL password NULL is there because our sql query returns empty password just like wiki says "nopassword: you want to allow all passwords, use an empty password and this field. " If password is returned in sql query then it fails, too: Nov 17 09:00:49 auth-worker(2206): Error: sql(pepe,127.0.0.1,): nopassword set but password is non- empty So looks to be a bug. -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lazy-load SNI?
On Friday 11 of November 2016, KSB wrote: > >>> Great! Seems to be working fine for my usage and makes my configs 50% > >>> smaller (which is gigantic improvement). Will do more testing though. > >>> > >>> Thanks! > > A little bit offtopic, but what is the point of using imap/pop SNI? > All > clients want to connect to their own domain or what? Yes. -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lazy-load SNI?
On Friday 11 of November 2016, Aki Tuomi wrote: > On 11.11.2016 19:17, Arkadiusz Miśkiewicz wrote: > > On Friday 11 of November 2016, Aki Tuomi wrote: > >> If you are interested in testing, please find patch attached that allows > >> you to specify > >> > >> local_name *.foo.bar { > >> } > >> > >> or > >> > >> local_name *.*.foo.bar { > >> } > >> > >> so basically you can now use certificate name matching rules for > >> local_name. It made most sense. > > > > Great! Seems to be working fine for my usage and makes my configs 50% > > smaller (which is gigantic improvement). Will do more testing though. > > > > Thanks! > > > > > > > > What about dovecot stopping processing new clients when reload is in > > progress problem - is it possible to make it behave better? To minimize > > (or avoid) "downtime". > > > > How to reproduce - just create config file with 20 000 - 50 000 entries > > > > local_name hostXexample.com { > > > >ssl_cert = >ssl_key = > > > } > > > > where cert.pem contains some full chain (CA cert + intermediate + cert + > > key). > > > > Start dovecot and then doveadm reload should take long time. Enough for > > noticing that dovecot stops processing clients. > > > >> Aki Tuomi > >> Dovecot oy > > That is something that will happen later. Can't give any date, but it's > in our internal tasklist. Ok, thanks. Just making sure that this (stopping processing clients) and lazy-loading of thousands of SSL certs itself are treated by dovecot team as two separate issues (and tons of SSL certs simply helps to notice first issue). And was hoping that stopping processing clients issue is easy/easier to solve (but looks like that's not the case). > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lazy-load SNI?
On Friday 11 of November 2016, Aki Tuomi wrote: > If you are interested in testing, please find patch attached that allows > you to specify > > local_name *.foo.bar { > } > > or > > local_name *.*.foo.bar { > } > > so basically you can now use certificate name matching rules for > local_name. It made most sense. Great! Seems to be working fine for my usage and makes my configs 50% smaller (which is gigantic improvement). Will do more testing though. Thanks! What about dovecot stopping processing new clients when reload is in progress problem - is it possible to make it behave better? To minimize (or avoid) "downtime". How to reproduce - just create config file with 20 000 - 50 000 entries local_name hostXexample.com { ssl_cert = Aki Tuomi > Dovecot oy -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: lazy-load SNI?
On Friday 11 of November 2016, Felipe Gasper wrote: > Hello, > > We’re rolling out large SNI deployments for our mail servers. Each > domain > gets an entry like this in the config: > > local_name mail.foo.com { > ssl_cert = ssl_key = } Lack of glob/regexp support here is also a problem (for me). I could have a 50% smaller config if local_name supported regexp matching, so it would be possible to do: local_name ^(pop3|imap)\.foo\.com { ... } or even with glob like *.foo.com matching. > > There are a couple problems we’re finding with this approach: > > 1) Dovecot wants to load everything at once, which has some machines taking > up many GiB of memory just for Dovecot. Is there any way to defer loading > of an SSL cert until a client actually requests it? No - thread here http://www.dovecot.org/list/dovecot/2016-October/105855.html Memory is one thing. The other is that dovecot stops accepting clients when a huge config reload happens (I guess it's a design problem since it makes no sense to do that in any case. Clients should be processed without a gap using the old config until the new config is loaded and ready to go). And the third problem is that there is a hardcoded 10s limit for reloading, which in the case of thousands of certificates is way too short. Anyway, if you hit that limit it's already a lost cause due to the earlier problem. > > 2) Any time we add or remove a domain, Dovecot’s SNI config matrix needs to > be rebuilt. Is there a way to handle SNI requests dynamically via some > sort of configuration plugin, so we wouldn’t need to rebuild the config on > domain add/remove? I looked through the docs but couldn’t see a way to do > this. That's unavoidable for now :-( Here we started analyzing maillog and put into the dovecot config only those ssl certs for domains that are actually used with TLS. It's a very ugly and short-sighted approach but hopefully a proper solution will be implemented by the dovecot team before all people start to use TLS. > Thank you in advance! 
> > -Felipe Gasper > Mississauga, ON -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
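The lazy loading asked about in (1) would amount to resolving a certificate inside the TLS servername callback instead of at config parse time, caching each context after first use. A toy sketch of that caching logic (a hypothetical design for illustration; Dovecot itself loads every local_name block up front):

```python
class LazyCertStore:
    """Cache certificate contexts keyed by SNI name, loading each one
    only when a client first presents that name."""

    def __init__(self, loader):
        self._loader = loader   # callable: server_name -> TLS context
        self._cache = {}
        self.loads = 0          # how many real loads happened

    def get(self, server_name):
        ctx = self._cache.get(server_name)
        if ctx is None:
            # In a real server this is where the PEM file for the
            # domain would be read from disk and parsed.
            ctx = self._loader(server_name)
            self._cache[server_name] = ctx
            self.loads += 1
        return ctx

# With 20 000 domains, memory would be paid only for names actually seen:
store = LazyCertStore(lambda name: f"ctx:{name}")
store.get("mail.foo.com")
store.get("mail.foo.com")   # second hit comes from the cache
assert store.loads == 1
```

The trade-off is a disk read on the first handshake per domain, which is why a real implementation would likely also want an eviction policy and pre-warming for popular names.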
tons of dovecot/config processes
Hi. I've noticed that dovecot (using 2.2.26.0 here) starts dovecot/config processes that stay around for a long time. Example: [root@ixion-pld ~]# service dovecot restart Stopping Dovecot service...[ DONE ] Starting Dovecot service...[ DONE ] [root@ixion-pld ~]# ps aux|grep dovecot root 25333 0.0 0.0 13736 2480 ?Ss 09:40 0:00 /usr/sbin/dovecot dovecot 25336 0.0 0.0 9480 924 ?S09:40 0:00 dovecot/anvil [0 connections] root 25337 0.0 0.0 9612 2416 ?S09:40 0:00 dovecot/log root 25339 0.0 0.0 12496 3256 ?S09:40 0:00 dovecot/config root 25341 0.0 0.0 132168 888 pts/1S+ 09:40 0:00 grep dovecot [root@ixion-pld ~]# doveadm reload [root@ixion-pld ~]# ps aux|grep dovecot root 25333 0.0 0.0 13872 2720 ?Ss 09:40 0:00 /usr/sbin/dovecot dovecot 25336 0.0 0.0 9480 924 ?S09:40 0:00 dovecot/anvil [0 connections] root 25344 0.0 0.0 9612 2428 ?S09:40 0:00 dovecot/log root 25346 0.0 0.0 12496 3192 ?S09:40 0:00 dovecot/config root 25348 0.0 0.0 132168 876 pts/1S+ 09:40 0:00 grep dovecot So far so good - only one dovecot/config. Let's connect to pop3 and keep the connection open: [root@ixion-pld ~]# telnet localhost pop3 Trying 127.0.0.1.110... Connected to localhost. Escape character is '^]'. +OK Mail server ready. On the other console: [root@ixion-pld ~]# ps aux|grep dovecot root 25333 0.0 0.0 13872 2720 ?Ss 09:40 0:00 /usr/sbin/dovecot dovecot 25336 0.0 0.0 9480 924 ?S09:40 0:00 dovecot/anvil [2 connections] root 25344 0.0 0.0 9612 2428 ?S09:40 0:00 dovecot/log root 25346 0.0 0.0 12496 3192 ?S09:40 0:00 dovecot/config dovenull 25364 0.0 0.0 20908 4080 ?S09:41 0:00 dovecot/pop3-login [127.0.0.1] dovecot 25365 0.0 0.0 100236 7776 ?S09:41 0:00 dovecot/auth [0 wait, 0 passdb, 0 userdb] root 25368 0.0 0.0 132168 856 pts/1S+ 09:41 0:00 grep dovecot So there is a client connected and one dovecot/config. 
Let's reload: [root@ixion-pld ~]# doveadm reload [root@ixion-pld ~]# ps aux|grep dovecot root 25333 0.0 0.0 13872 2752 ?Ss 09:40 0:00 /usr/sbin/dovecot dovecot 25336 0.0 0.0 9480 924 ?S09:40 0:00 dovecot/anvil [2 connections] root 25344 0.0 0.0 9612 2428 ?S09:40 0:00 dovecot/log root 25346 0.0 0.0 12920 3700 ?S09:40 0:00 dovecot/config dovenull 25364 0.0 0.0 20908 4080 ?S09:41 0:00 dovecot/pop3-login [127.0.0.1] dovecot 25365 0.0 0.0 100236 7776 ?S09:41 0:00 dovecot/auth [0 wait, 0 passdb, 0 userdb] root 25371 0.0 0.0 9612 2196 ?S09:41 0:00 dovecot/log root 25373 0.0 0.0 12496 3196 ?S09:41 0:00 dovecot/config root 25375 0.0 0.0 132168 856 pts/1S+ 09:41 0:00 grep dovecot Now we have two dovecot/config processes. The second dovecot/config stays there until the client disconnects (what for?). When the client disconnects we are back to a single dovecot/config: [root@ixion-pld ~]# ps aux|grep dovecot root 25333 0.0 0.0 13872 2752 ?Ss 09:40 0:00 /usr/sbin/dovecot dovecot 25336 0.0 0.0 9480 924 ?S09:40 0:00 dovecot/anvil [0 connections] root 25371 0.0 0.0 9612 2196 ?S09:41 0:00 dovecot/log root 25373 0.0 0.0 12496 3196 ?S09:41 0:00 dovecot/config root 25418 0.0 0.0 132168 852 pts/1S+ 09:43 0:00 grep dovecot Now on a production server with tons of clients this looks more insane: # ps aux|grep dovecot/config | wc -l 56 Note that I'm running with shutdown_clients = no here (+ high performance auth/login variant). So it looks like something is not right here. Obviously with shutdown_clients=yes this doesn't occur since clients are disconnected. doveadm reload can happen every 2 minutes (because dovecot requires a reload when SSL certificates change: a new domain gets added, a new cert gets automatically created -> reload; a certificate is renewed (every 2 months) -> reload, etc.) -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: logging TLS SNI hostname
On Tuesday 08 of November 2016, Aki Tuomi wrote: > > On November 8, 2016 at 4:08 PM Arkadiusz Miśkiewicz <ar...@maven.pl> > > wrote: > > > > On Thursday 20 of October 2016, Arkadiusz Miśkiewicz wrote: > > > On Thursday 20 of October 2016, Aki Tuomi wrote: > > > > On 20.10.2016 15:52, Arkadiusz Miśkiewicz wrote: > > > > > > ... -servername something > > > > > > > > If you want to try out, try applying this patch... > > > > > > Works, thanks! > > > > But... it's easy to log fake things > > > > Nov 8 15:04:01 mbox dovecot: pop3-login: Aborted login (no auth attempts > > in 1 secs): user=<>, rip=127.0.0.1, lip=127.0.0.1, > > local_name=whitehouse.gov, i_can=put_anything, here=etc, TLS, > > session=<26rEnMpAPMtb6rD0> > > > > by using > > > > openssl s_client -connect 127.0.0.1:110 -starttls pop3 -servername > > "whitehouse.gov, i_can=put_anything, here=etc" > > > > so some escaping here would also be needed. > > > > conf: > > login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e > > local_name=%{local_name} %c session=<%{session}> > > > > > > From 066edb5e5c14a05c90e9ae63f0b76fcfd9c1149e Mon Sep 17 00:00:00 > > > > 2001 From: Aki Tuomi <aki.tu...@dovecot.fi> > > > > Date: Thu, 20 Oct 2016 16:06:27 +0300 > > > > Subject: [PATCH] login-common: Include local_name in > > > > login_var_expand_table > > > > > > > > This way it can be used in login_log_format > > There is escaping in the final code in 2.2.26.0. This is on 2.2.26.0. Escaping was only added to auth code, not logging one, right? -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: logging TLS SNI hostname
On Thursday 20 of October 2016, Arkadiusz Miśkiewicz wrote: > On Thursday 20 of October 2016, Aki Tuomi wrote: > > On 20.10.2016 15:52, Arkadiusz Miśkiewicz wrote: > > > > ... -servername something > > > > If you want to try out, try applying this patch... > > Works, thanks! But... it's easy to log fake things Nov 8 15:04:01 mbox dovecot: pop3-login: Aborted login (no auth attempts in 1 secs): user=<>, rip=127.0.0.1, lip=127.0.0.1, local_name=whitehouse.gov, i_can=put_anything, here=etc, TLS, session=<26rEnMpAPMtb6rD0> by using openssl s_client -connect 127.0.0.1:110 -starttls pop3 -servername "whitehouse.gov, i_can=put_anything, here=etc" so some escaping here would also be needed. conf: login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e local_name=%{local_name} %c session=<%{session}> > > > From 066edb5e5c14a05c90e9ae63f0b76fcfd9c1149e Mon Sep 17 00:00:00 2001 > > From: Aki Tuomi <aki.tu...@dovecot.fi> > > Date: Thu, 20 Oct 2016 16:06:27 +0300 > > Subject: [PATCH] login-common: Include local_name in > > login_var_expand_table > > > > This way it can be used in login_log_format -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
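Since the SNI value is attacker-controlled, the fix amounts to restricting it to the DNS hostname alphabet before it reaches the log line. A sketch of such a sanitizer (a hypothetical helper for illustration, not the escaping Dovecot actually ships):

```python
# Characters legal in a DNS label plus the dot separator; anything
# outside this set is replaced so a client cannot smuggle extra
# "key=value" fields into a space/comma-delimited log line.
_ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789-._")

def sanitize_sni(name: str) -> str:
    return "".join(c if c in _ALLOWED else "?" for c in name)

assert sanitize_sni("pop3.somehost.org") == "pop3.somehost.org"
# The forged servername from the report above gets neutralized:
assert sanitize_sni("whitehouse.gov, i_can=put_anything") == \
       "whitehouse.gov??i_can?put_anything"
```

An alternative design is to reject (rather than rewrite) any SNI that is not a syntactically valid FQDN, which also covers the passdb-lookup use case discussed elsewhere in the thread.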
Re: Panic: file mail-transaction-log-file.c: line 104 (mail_transaction_log_file_free): assertion failed: (!file->locked)
On Tuesday 01 of November 2016, Aki Tuomi wrote: > On 25.08.2016 10:29, Aki Tuomi wrote: > > On 14.07.2016 10:56, Arkadiusz Miśkiewicz wrote: > >> 2.2.25 (also happens on 2.2.24). Happens every time I try to make > >> deliver and only for this user: > >> > >> Jul 14 09:52:02 mbox dovecot: lmtp(25601): Connect from local > >> Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): > >> session=, Error: Index > >> /var/mail/powiadomienia/dovecot.index: Lost log for seq=1009 offset=40: > >> Missing middle file seq=1009 (between 1009..4294967295) > >> Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): > >> session=, Warning: fscking index file > >> /var/mail/powiadomienia/dovecot.index Jul 14 09:52:02 mbox dovecot: > >> lmtp(powiadomienia): session=, Error: Fixed > >> index file /var/mail/powiadomienia/dovecot.index: log_file_seq 1009 -> > >> 1011 Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): > >> session=, Panic: file > >> mail-transaction-log-file.c: line 104 (mail_transaction_log_file_free): > >> assertion failed: (!file->locked) Jul 14 09:52:02 mbox dovecot: > >> lmtp(powiadomienia): session=, Error: Raw > >> backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0x8d7d2) > >> [0x7feb89fc97d2] -> /usr/lib64/dovecot/libdovecot.so.0(+0x8d8bd) > >> [0x7feb89fc98bd] -> /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) > >> [0x7feb89f67e31] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_file_fr > >> ee+0x160) [0x7feb8a331fa0] -> /usr/lib64/dovecot/libdovecot- > >> storage.so.0(mail_transaction_logs_clean+0x4d) [0x7feb8a3360ed] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_close+0 > >> x30) [0x7feb8a336230] -> /usr/lib64/dovecot/libdovecot- > >> storage.so.0(mail_transaction_log_move_to_memory+0xd5) [0x7feb8a3363e5] > >> -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_move_to_memory+0x > >> a0) [0x7feb8a330440] -> /usr/lib64/dovecot/libdovecot- > >> storage.so.0(mail_index_write+0x183) [0x7feb8a32e9d3] -> > >> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_fsck+0xc1f) > >> [0x7feb8a3186ff] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x49b) > >> [0x7feb8a322eab] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_map+0x71) > >> [0x7feb8a31a231] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xe0fed) [0x7feb8a32ffed] > >> -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xe15f3) > >> [0x7feb8a3305f3] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_open+0x78) > >> [0x7feb8a3306d8] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(index_storage_mailbox_open+0 > >> x92) [0x7feb8a309202] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x6c0e2) [0x7feb8a2bb0e2] > >> -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x6c1c8) > >> [0x7feb8a2bb1c8] -> > >> /usr/lib64/dovecot/plugins/lib20_zlib_plugin.so(+0x2fdc) > >> [0x7feb85697fdc] -> > >> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x450c6) [0x7feb8a2940c6] > >> -> /usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_open+0x20) > >> [0x7feb8a294240] -> > >> /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver_save_open+0xad) > >> [0x7feb8a58d1ad] -> > >> /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver_save+0xbb) > >> [0x7feb8a58d48b] -> > >> /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver+0x123) > >> [0x7feb8a58d9e3] -> dovecot/lmtp [DATA powiadomienia]() [0x406bc8] -> > >> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) > >> [0x7feb89fdd67c] -> > >> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x101) > >> [0x7feb89fdeb01] -> > >> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) > >> [0x7feb89fdd705] > >> Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): > >> session=, Fatal: master: service(lmtp): child > >> 25601 killed with signal 6 (core dumps disabled) > > > > Hi! > > > > Are you still able to reproduce this? Any hope for backtrace with gdb? 
> > > > gdb /path/to/binary /path/to/core > > bt full > > > > Aki > > Ping Sorry, I deleted index for that login and things started to work again, so have no way to reproduce anymore. Also no core dump for that issue. When it happens again I'll check backtrace. > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.26 released
On Friday 28 of October 2016, Aki Tuomi wrote: > 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: > > On Thursday 27 of October 2016, Timo Sirainen wrote: > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig > > > > Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: > > > > From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 > > From: Aki Tuomi <aki.tu...@dovecot.fi> > > Date: Fri, 15 Jul 2016 11:31:25 +0300 > > Subject: [PATCH] auth: Remove i_assert for credentials scheme > > > > --- > > > > src/auth/auth-request.c | 2 -- > > 1 file changed, 2 deletions(-) > > Hi! > > Do you have some details how to reproduce this issue on your end? It seems to be related to "unknown user" always. Oct 21 08:43:51 mbox dovecot: auth-worker(31838): sql(abc, 1.1.1.1,): unknown user Oct 21 08:43:51 mbox dovecot: auth: Panic: file auth-request.c: line 1053 (auth_request_lookup_credentials): assertion failed: (request->credentials_scheme == scheme) Oct 21 08:43:51 mbox dovecot: auth: Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0x8d822) [0x7fb7d85f5822] -> /usr/lib64/dovecot/libdovecot.so.0(+0x8d90d) [0x7fb7d85f590d] -> /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7fb7d8593e51] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](auth_request_lookup_credentials+0xd8) [0x4176b8] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x4245b2] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x4172cb] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](auth_request_lookup_credentials_callback+0x68) [0x417388] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](passdb_handle_credentials+0x92) [0x427ea2] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x428686] -> dovecot/auth [4 wait, 0 passdb, 0 userdb]() [0x41de8a] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7fb7d86096cc] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x101) [0x7fb7d860ab51] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7fb7d8609755] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fb7d86098f8] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fb7d859a263] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](main+0x3af) [0x40d41f] -> /lib64/libc.so.6(__libc_start_main+0xf0) [0x7fb7d7728800] -> dovecot/auth [4 wait, 0 passdb, 0 userdb](_start+0x2a) [0x40d60a] Oct 21 08:43:51 mbox dovecot: pop3-login: Warning: Auth connection closed with 2 pending requests (max 1 secs, pid=31822, EOF) Oct 21 08:43:51 mbox dovecot: auth: Fatal: master: service(auth): child 31833 killed with signal 6 (core dumps disabled) But didn't try to reproduce it as "[PATCH] auth: Remove i_assert for credentials scheme" fixes it. > > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.26 released
On Thursday 27 of October 2016, Aki Tuomi wrote: > On 27.10.2016 16:39, Arkadiusz Miśkiewicz wrote: > > On Thursday 27 of October 2016, Timo Sirainen wrote: > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz > >> http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig > > > > Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: > > From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 > > > > From: Aki Tuomi <aki.tu...@dovecot.fi> > > Date: Fri, 15 Jul 2016 11:31:25 +0300 > > Subject: [PATCH] auth: Remove i_assert for credentials scheme > > > > --- > > > > src/auth/auth-request.c | 2 -- > > 1 file changed, 2 deletions(-) > > That fix is included in 2.2.26. Are you sure? I don't see it there. > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.26 released
On Thursday 27 of October 2016, Timo Sirainen wrote: > http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz > http://dovecot.org/releases/2.2/dovecot-2.2.26.tar.gz.sig Please merge to 2.2 branch this fix. I'm hitting that problem on 2.2.25: From 6c969ac21a43cc10ee1f1a91a4f39e4864c886cb Mon Sep 17 00:00:00 2001 From: Aki Tuomi <aki.tu...@dovecot.fi> Date: Fri, 15 Jul 2016 11:31:25 +0300 Subject: [PATCH] auth: Remove i_assert for credentials scheme --- src/auth/auth-request.c | 2 -- 1 file changed, 2 deletions(-) -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
multiple SSL certificates story
Hi. Little story :-) I'm playing with dovecot 2.2.25 and multiple SSL certificates. ~7000 certificates which are loaded twice, so my dovecot has ~14 000 certificate pairs (14k key + 14k cert) in config. 14 000 local_name entries. Like these: local_name imap.example.com { ssl_cert =
Re: logging TLS SNI hostname
On Thursday 20 of October 2016, Aki Tuomi wrote: > On 20.10.2016 15:52, Arkadiusz Miśkiewicz wrote: > > > ... -servername something > > If you want to try out, try applying this patch... Works, thanks! > > From 066edb5e5c14a05c90e9ae63f0b76fcfd9c1149e Mon Sep 17 00:00:00 2001 > From: Aki Tuomi <aki.tu...@dovecot.fi> > Date: Thu, 20 Oct 2016 16:06:27 +0300 > Subject: [PATCH] login-common: Include local_name in login_var_expand_table > > This way it can be used in login_log_format -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: logging TLS SNI hostname
On Thursday 20 of October 2016, Aki Tuomi wrote: > On 20.10.2016 15:41, Arkadiusz Miśkiewicz wrote: > > On Thursday 20 of October 2016, Aki Tuomi wrote: > >> On 18.10.2016 14:16, Arkadiusz Miśkiewicz wrote: > >>> On Monday 17 of October 2016, KT Walrus wrote: > >>>>> On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz <ar...@maven.pl> > >>>>> wrote: > >>>>> > >>>>> On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: > >>>>>> Is there a way to log SNI hostname used in TLS session? Info is > >>>>>> there in SSL_CTX_set_tlsext_servername_callback, dovecot copies it > >>>>>> to ssl_io->host. > >>>>>> > >>>>>> Unfortunately I don't see it expanded to any variables ( > >>>>>> http://wiki.dovecot.org/Variables ). Please consider this to be a > >>>>>> feature request. > >>>>>> > >>>>>> The goal is to be able to see which hostname client used like: > >>>>>> > >>>>>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, > >>>>>> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, > >>>>>> SNI=pop3.somehost.org, session= > >>>>> > >>>>> Dear dovecot team, would be possible to add such variable ^ ? > >>>>> > >>>>> That would be neat feature because server operator would know what > >>>>> hostname client uses to connect to server (which is really usefull in > >>>>> case of many hostnames pointing to single IP). > >>>> > >>>> I’d love to be able to use this SNI domain name in the Dovecot IMAP > >>>> proxy for use in the SQL password_query. This would allow the proxy to > >>>> support multiple IMAP server domains each with their own set of users. > >>>> And, it would save me money by using only the IP of the proxy for all > >>>> the IMAP server domains instead of giving each domain a unique IP. > >>> > >>> It only needs to be carefuly implemented on dovecot side as TLS SNI > >>> hostname is information passed directly by client. > >>> > >>> So some fqdn name validation would need to happen in case if client has > >>> malicious intents. > >>> > >>>> Kevin > >> > >> Hi! 
> >> > >> I wonder if this would be of any help? It provides %{local_name} > >> passdb/userdb variable, you can use it for some logging too... > >> > >> https://github.com/dovecot/core/commit/fe791e96fdf796f7d8997ee0515b163dc > >> 5ed dd72 > > > > Should it work for such usage, too? > > > > login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e > > local_name=%{local_name} %c session=<%{session}> > > > > Because I'm not getting local_name logged at all (dovecot -a shows its > > there). > > > >> Aki > > > > Thanks, > > How did you try? With openssl you need to use openssl s_client -connect > ... -servername something Yes, using it. -servername is mandatory for TLS SNI to work. I'm getting correct certificate (as shown by openssl s_client). Certificate that's configured with local_name, so TLS SNI works fine on client and dovecot side. ps. I'm using 2.2.25 + above %{local_name} patch. Could some other patch be needed for this to work? > Aki -- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: logging TLS SNI hostname
On Thursday 20 of October 2016, Aki Tuomi wrote: > On 18.10.2016 14:16, Arkadiusz Miśkiewicz wrote: > > On Monday 17 of October 2016, KT Walrus wrote: > >>> On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz <ar...@maven.pl> > >>> wrote: > >>> > >>> On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote: > >>>> Is there a way to log SNI hostname used in TLS session? Info is there > >>>> in SSL_CTX_set_tlsext_servername_callback, dovecot copies it to > >>>> ssl_io->host. > >>>> > >>>> Unfortunately I don't see it expanded to any variables ( > >>>> http://wiki.dovecot.org/Variables ). Please consider this to be a > >>>> feature request. > >>>> > >>>> The goal is to be able to see which hostname client used like: > >>>> > >>>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, > >>>> method=PLAIN, rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, > >>>> SNI=pop3.somehost.org, session= > >>> > >>> Dear dovecot team, would be possible to add such variable ^ ? > >>> > >>> That would be neat feature because server operator would know what > >>> hostname client uses to connect to server (which is really usefull in > >>> case of many hostnames pointing to single IP). > >> > >> I’d love to be able to use this SNI domain name in the Dovecot IMAP > >> proxy for use in the SQL password_query. This would allow the proxy to > >> support multiple IMAP server domains each with their own set of users. > >> And, it would save me money by using only the IP of the proxy for all > >> the IMAP server domains instead of giving each domain a unique IP. > > > > It only needs to be carefuly implemented on dovecot side as TLS SNI > > hostname is information passed directly by client. > > > > So some fqdn name validation would need to happen in case if client has > > malicious intents. > > > >> Kevin > > Hi! > > I wonder if this would be of any help? It provides %{local_name} > passdb/userdb variable, you can use it for some logging too... 
>
> https://github.com/dovecot/core/commit/fe791e96fdf796f7d8997ee0515b163dc5eddd72

Should it work for such usage, too?

login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e
local_name=%{local_name} %c session=<%{session}>

Because I'm not getting local_name logged at all (dovecot -a shows it's
there).

> Aki

Thanks,

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
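For reference, this is how such a setup is typically checked from the command line (the hostnames below are placeholders, not from the thread): openssl only sends an SNI name when -servername is given, so the two invocations should return different certificates if local_name matching works.

```
# Without SNI: dovecot serves the default certificate.
openssl s_client -connect mail.example.com:993 </dev/null 2>/dev/null |
    openssl x509 -noout -subject

# With SNI: dovecot should serve the certificate from the matching
# local_name block instead.
openssl s_client -connect mail.example.com:993 -servername imap.example.com \
    </dev/null 2>/dev/null | openssl x509 -noout -subject
```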
Re: logging TLS SNI hostname
On Monday 17 of October 2016, KT Walrus wrote:
> > On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz <ar...@maven.pl> wrote:
> >
> > On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote:
> >> Is there a way to log SNI hostname used in TLS session? Info is there
> >> in SSL_CTX_set_tlsext_servername_callback, dovecot copies it to
> >> ssl_io->host.
> >>
> >> Unfortunately I don't see it expanded to any variables (
> >> http://wiki.dovecot.org/Variables ). Please consider this to be a
> >> feature request.
> >>
> >> The goal is to be able to see which hostname client used like:
> >>
> >> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, method=PLAIN,
> >> rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, SNI=pop3.somehost.org,
> >> session=
> >
> > Dear dovecot team, would it be possible to add such a variable ^ ?
> >
> > That would be a neat feature, because the server operator would know
> > what hostname the client uses to connect to the server (which is really
> > useful in the case of many hostnames pointing to a single IP).
>
> I’d love to be able to use this SNI domain name in the Dovecot IMAP proxy
> for use in the SQL password_query. This would allow the proxy to support
> multiple IMAP server domains, each with their own set of users. And it
> would save me money by using only the IP of the proxy for all the IMAP
> server domains instead of giving each domain a unique IP.

It only needs to be carefully implemented on the dovecot side, as the TLS
SNI hostname is information passed directly by the client.

So some fqdn name validation would need to happen in case a client has
malicious intent.

> Kevin

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: logging TLS SNI hostname
On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote:
> Is there a way to log SNI hostname used in TLS session? Info is there in
> SSL_CTX_set_tlsext_servername_callback, dovecot copies it to
> ssl_io->host.
>
> Unfortunately I don't see it expanded to any variables (
> http://wiki.dovecot.org/Variables ). Please consider this to be a feature
> request.
>
> The goal is to be able to see which hostname client used like:
>
> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, method=PLAIN,
> rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, SNI=pop3.somehost.org,
> session=

Dear dovecot team, would it be possible to add such a variable ^ ?

That would be a neat feature, because the server operator would know what
hostname the client uses to connect to the server (which is really useful
in the case of many hostnames pointing to a single IP).

Thanks,

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
dovecot 2.2.25 BUG: local_name is not matching correctly
Bug report: When using the dovecot 2.2.25 SNI capability, it doesn't always
match the proper vhost config. For example, if we have such a config:

local_name imap.example.com { ssl_cert =
Panic: file mail-transaction-log-file.c: line 104 (mail_transaction_log_file_free): assertion failed: (!file->locked)
2.2.25 (also happens on 2.2.24). Happens every time I try to make deliver
and only for this user:

Jul 14 09:52:02 mbox dovecot: lmtp(25601): Connect from local
Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): session=, Error: Index /var/mail/powiadomienia/dovecot.index: Lost log for seq=1009 offset=40: Missing middle file seq=1009 (between 1009..4294967295)
Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): session=, Warning: fscking index file /var/mail/powiadomienia/dovecot.index
Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): session=, Error: Fixed index file /var/mail/powiadomienia/dovecot.index: log_file_seq 1009 -> 1011
Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): session=, Panic: file mail-transaction-log-file.c: line 104 (mail_transaction_log_file_free): assertion failed: (!file->locked)
Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): session=, Error: Raw backtrace:
 /usr/lib64/dovecot/libdovecot.so.0(+0x8d7d2) [0x7feb89fc97d2] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x8d8bd) [0x7feb89fc98bd] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7feb89f67e31] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_file_free+0x160) [0x7feb8a331fa0] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_logs_clean+0x4d) [0x7feb8a3360ed] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_close+0x30) [0x7feb8a336230] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_transaction_log_move_to_memory+0xd5) [0x7feb8a3363e5] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_move_to_memory+0xa0) [0x7feb8a330440] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_write+0x183) [0x7feb8a32e9d3] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_fsck+0xc1f) [0x7feb8a3186ff] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x49b) [0x7feb8a322eab] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_map+0x71) [0x7feb8a31a231] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(+0xe0fed) [0x7feb8a32ffed] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(+0xe15f3) [0x7feb8a3305f3] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_open+0x78) [0x7feb8a3306d8] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(index_storage_mailbox_open+0x92) [0x7feb8a309202] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(+0x6c0e2) [0x7feb8a2bb0e2] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(+0x6c1c8) [0x7feb8a2bb1c8] ->
 /usr/lib64/dovecot/plugins/lib20_zlib_plugin.so(+0x2fdc) [0x7feb85697fdc] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(+0x450c6) [0x7feb8a2940c6] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_open+0x20) [0x7feb8a294240] ->
 /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver_save_open+0xad) [0x7feb8a58d1ad] ->
 /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver_save+0xbb) [0x7feb8a58d48b] ->
 /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver+0x123) [0x7feb8a58d9e3] ->
 dovecot/lmtp [DATA powiadomienia]() [0x406bc8] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x4c) [0x7feb89fdd67c] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x101) [0x7feb89fdeb01] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x25) [0x7feb89fdd705]
Jul 14 09:52:02 mbox dovecot: lmtp(powiadomienia): session=, Fatal: master: service(lmtp): child 25601 killed with signal 6 (core dumps disabled)

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
logging TLS SNI hostname
Is there a way to log SNI hostname used in TLS session? Info is there in
SSL_CTX_set_tlsext_servername_callback, dovecot copies it to ssl_io->host.

Unfortunately I don't see it expanded to any variables (
http://wiki.dovecot.org/Variables ). Please consider this to be a feature
request.

The goal is to be able to see which hostname client used like:

May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, method=PLAIN,
rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, SNI=pop3.somehost.org, session=

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Dovecot code repository moved to Github
On Wednesday 09 of December 2015, Timo Sirainen wrote:
> http://hg.dovecot.org/ is no longer being updated. The public repository
> exists now in https://github.com/dovecot/core instead.

Nice. Should we (dovecot users) use the github issue tracker for reporting
bugs, too?

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.20 released
On Tuesday 08 of December 2015, Gerhard Wiesinger wrote:
> On 07.12.2015 20:13, Timo Sirainen wrote:
> > http://dovecot.org/releases/2.2/dovecot-2.2.20.tar.gz
> > http://dovecot.org/releases/2.2/dovecot-2.2.20.tar.gz.sig
> >
> > This could be (one of) the last v2.2.x release. We're starting v2.3
> > development soon.
>
> Great!
>
> What's on the featurelist of v2.3?

Support for thousands of ssl certificates, without having to load/specify
them in the config, would be nice. Something like:

load_cert_pattern = /etc/dovecot/ssl/$domain   (aka if the file exists - use it)
cert_fallback = /etc/dovecot/ssl/primary.cert

etc. That would make it possible to use https://letsencrypt.org
functionality for all hosted domains at once.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
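For context, per-domain certificates later became possible in 2.2.x through SNI and local_name blocks (the mechanism discussed in the 2.2.25 threads above), though certificates still have to be listed explicitly rather than discovered from a filename pattern. A minimal sketch, with placeholder hostnames and paths:

```
# Default certificate, served when the client sends no SNI name.
ssl_cert = </etc/dovecot/ssl/primary.cert
ssl_key = </etc/dovecot/ssl/primary.key

# Per-domain certificates, selected by the TLS SNI hostname.
local_name imap.example.org {
  ssl_cert = </etc/dovecot/ssl/imap.example.org.cert
  ssl_key = </etc/dovecot/ssl/imap.example.org.key
}
local_name imap.example.net {
  ssl_cert = </etc/dovecot/ssl/imap.example.net.cert
  ssl_key = </etc/dovecot/ssl/imap.example.net.key
}
```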
Re: v2.2.20 released
On Wednesday 09 of December 2015, Steffen Kaiser wrote:
> On Tue, 8 Dec 2015, Arkadiusz Miśkiewicz wrote:
> >> What's on the featurelist of v2.3?
> >
> > Support for thousands of ssl certificates without having to load/specify
> > these in config would be nice.
> >
> > Something like
> > load_cert_pattern = /etc/dovecot/ssl/$domain (aka if file exists - use it)
> > cert_fallback = /etc/dovecot/ssl/primary.cert
>
> where does $domain come from?

From the connecting client, from SNI
(https://en.wikipedia.org/wiki/Server_Name_Indication).

> > That would make it possible to use https://letsencrypt.org functionality
> > for all hosted domains at once.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: Dovecot LMTP tries to access a directory of a different user, than the one it actually changed to.
On Friday 03 of July 2015, Ernest Deak wrote:
> Hello, I encountered a problem when trying to send an email to multiple
> recipients.

That bug has existed for some time
http://www.dovecot.org/list/dovecot/2014-September/097688.html
but no solution exists and I think no one has actually tried to fix it.

(no solution besides the already mentioned ugly workaround of limiting to 1
recipient per lmtp session)

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: [2.3 feature request]: multiple passwords for single user
On Monday 15 of December 2014, Rick Romero wrote:
> Quoting Arkadiusz Miśkiewicz <ar...@maven.pl>:
> > Hi. I wonder if there are any plans of finishing the multiple passwords
> > for a single user feature?
> > <snip>
> > Until that happens a (not that great) workaround exists:
> > http://wiki2.dovecot.org/Authentication/MultipleDatabases
>
> Whoops misfired
>
> Unless you want a single service to have multiple passwords,

I do want exactly that.

> which doesn't seem like a good idea to me,

Good/bad depends on usage scenario and needs, so don't worry about this.

> use SQL if statements to separate by service/host.
> http://www.dovecot.org/list/dovecot/2014-July/097140.html

That won't work in my scenario. I need two (or more) passwords for the same
service.

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
[2.3 feature request]: multiple passwords for single user
Hi.

I wonder if there are any plans of finishing the multiple passwords for a
single user feature? A few months old thread mentioned this:

http://www.dovecot.org/list/dovecot/2014-July/097217.html

and even a patch that unfortunately was never finished:

http://dovecot.org/patches/2.0/auth-multi-password-2.0.diff

Multiple passwords for a single user are great, since they allow easy
password management for users. For example you could have one password on a
laptop, one on a phone, one on a tablet. Revoking access to a device is as
easy as dropping a single password, and it doesn't affect other devices.
There are more use cases.

Until that happens a (not that great) workaround exists:
http://wiki2.dovecot.org/Authentication/MultipleDatabases

Thanks,

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
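The MultipleDatabases workaround linked above amounts to listing several passdb blocks, which dovecot tries in order until one accepts the password. A rough sketch of that idea (the passwd-file driver and the file paths are illustrative, not taken from the wiki page): one file per device password, all resolving to the same user.

```
# Tried first: the "laptop" password file.
passdb {
  driver = passwd-file
  args = /etc/dovecot/passwd.laptop
}
# Tried next if the first rejects: the "phone" password file.
passdb {
  driver = passwd-file
  args = /etc/dovecot/passwd.phone
}
```

Revoking a device then means deleting its entry from one file without touching the others.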
[PATCH]: libexttextcat from libreoffice
Hello.

There is a libexttextcat version provided by the libreoffice team
http://www.freedesktop.org/wiki/Software/libexttextcat
http://dev-www.libreoffice.org/src/libexttextcat/
which uses pkgconfig. The library name is different (libexttextcat-2.0.so),
so dovecot configure doesn't find it. Something like this is needed:

--- dovecot-2.2.15/configure.ac~	2014-10-25 05:57:08.0 +0200
+++ dovecot-2.2.15/configure.ac	2014-11-14 08:49:02.888452270 +0100
@@ -2747,10 +2747,16 @@
   have_lucene_textcat=yes
   AC_DEFINE(HAVE_LUCENE_TEXTCAT,, Define if you want textcat support for CLucene)
 ], [
-  AC_CHECK_LIB(exttextcat, special_textcat_Init, [
-    have_lucene_exttextcat=yes
-    AC_DEFINE(HAVE_LUCENE_EXTTEXTCAT,, Define if you want textcat (Debian version) support for CLucene)
-  ])
+  if test "$PKG_CONFIG" != "" && $PKG_CONFIG --exists libexttextcat 2>/dev/null; then
+    PKG_CHECK_MODULES(LIBEXTTEXTCAT, libexttextcat)
+    LIBS="$LIBS $LIBEXTTEXTCAT_LIBS"
+    AC_DEFINE(HAVE_LUCENE_EXTTEXTCAT,, Define if you want textcat (LibreOffice version) support for CLucene)
+  else
+    AC_CHECK_LIB(exttextcat, special_textcat_Init, [
+      have_lucene_exttextcat=yes
+      AC_DEFINE(HAVE_LUCENE_EXTTEXTCAT,, Define if you want textcat (Debian version) support for CLucene)
+    ])
+  fi
 ])
], [
if test $want_stemmer = yes; then

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Renaming not supported across conflicting directory - why?
I wonder what is the point of checking file/dir create mode like this? Here
it causes problems, since some folders have different permissions than
others but both are accessible/writable by the user. So renaming is
possible (renaming according to unix permissions), yet dovecot artificially
prevents it.

	/* if we're renaming under another mailbox, require its permissions
	   to be same as ours. */
	if (strchr(newname, mailbox_list_get_hierarchy_sep(newlist)) != NULL) {
		struct mailbox_permissions old_perm, new_perm;

		mailbox_list_get_permissions(oldlist, oldname, &old_perm);
		mailbox_list_get_permissions(newlist, newname, &new_perm);

		if ((new_perm.file_create_mode != old_perm.file_create_mode ||
		     new_perm.dir_create_mode != old_perm.dir_create_mode ||
		     new_perm.file_create_gid != old_perm.file_create_gid)) {
			mailbox_list_set_error(oldlist, MAIL_ERROR_NOTPOSSIBLE,
				"Renaming not supported across conflicting directory permissions");
			return -1;
		}
	}

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.15 released - test suite segfault
: test_run_with_fatals (test-common.c:362)
==6098==    by 0x317B821C14: (below main) (in /lib64/libc-2.20.so)
==6098==
Makefile:1877: recipe for target 'check-test' failed
make[2]: *** [check-test] Error 1

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: v2.2.15 released - test suite segfault
On Saturday 25 of October 2014, Timo Sirainen wrote:
> On 25 Oct 2014, at 06:11, Arkadiusz Miśkiewicz <ar...@maven.pl> wrote:
> > fatal_printf_format_fix .. : ok
> > 0 / 190 tests failed
> > ==6098== Invalid read of size 16
> > ==6098==    at 0x317B880804: ??? (in /lib64/libc-2.20.so)
> > ==6098==    by 0x317B8A93B6: ??? (in /lib64/libc-2.20.so)
> > ==6098==    by 0x317B8AAA21: ??? (in /lib64/libc-2.20.so)
> > ==6098==    by 0x317B8A9C0F: ??? (in /lib64/libc-2.20.so)
> > ==6098==    by 0x317B8A9F94: ??? (in /lib64/libc-2.20.so)
> > ==6098==    by 0x42A0D7: utc_mktime (utc-mktime.c:39)
>
> That's inside gmtime() call. Looks to me like a libc bug. What OS / libc /
> CPU is this with? Anyway this code hasn't changed for years.

Ok, looks like that was valgrind's fault.

linux, glibc 2.20, x86_64

--
Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
Re: dovecot 2.2.13: LMTP delivery with multiple recipients incorrectly mixes users
On Monday 01 of September 2014, Arkadiusz Miśkiewicz wrote:

Hi.

I'm using exim that delivers email over LMTP to dovecot 2.2.13. I noticed
that the dovecot LMTP service is sometimes (rare, but it repeats) mixing
users. Example below.

There is one mail (msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41)
that is going to be delivered to multiple local recipients.

What is worse is that dovecot lmtp can sometimes (if permissions allow
that) create the mail file, in the maildir tree, of user B using user A's
uid/gid! All that because it mixes users. That leads to more problems (like
when using filesystem quota: since user A has his files (by uid/gid) stored
in the directory of user B, A cannot access them or delete them, but they
still eat user A's quota).

Looks to be some major brokenness in dovecot lmtp.

(batch_max = 1 should work around the problem I think, but that's not a
solution)

Some recipients are delivered properly:

Sep 1 05:40:33 host dovecot: lmtp(3176): Connect from local
Sep 1 05:40:34 host dovecot: lmtp(3176, gbuser1): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX
Sep 1 05:40:34 host dovecot: lmtp(3176, jpuser2): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX
Sep 1 05:40:34 host dovecot: lmtp(3176, rkuser3): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX
Sep 1 05:40:34 host dovecot: lmtp(3176, gbruser4): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX
Sep 1 05:40:34 host dovecot: lmtp(3176, pbauser5): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX
Sep 1 05:40:34 host dovecot: lmtp(3176, mwauser6): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX
Sep 1 05:40:34 host dovecot: lmtp(3176, mdyuser7): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: saved mail to INBOX

but some are not:

Sep 1 05:40:34 host dovecot: lmtp(3176, lkrzyuser8): Error: lstat(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist.lock) failed: Permission denied
Sep 1 05:40:34 host dovecot: lmtp(3176, lkrzyuser8): Error: file_dotlock_create(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist) failed: Permission denied (euid=28371(unknown) egid=17373(unknown) missing +x perm: /var/lib/dovecot/control/gbuser1, dir owned by 67593:17373 mode=0700)

Notice it was trying to deliver to user lkrzyuser8 but it tries to access
some other user's files (dovecot-uidlist). euid=28371 is indeed lkrzyuser8,
but why does it try to access gbuser1's files?

Sep 1 05:40:34 host dovecot: lmtp(3176, lkrzyuser8): Error: lstat(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist.lock) failed: Permission denied
Sep 1 05:40:34 host dovecot: lmtp(3176, lkrzyuser8): Error: file_dotlock_create(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist) failed: Permission denied (euid=28371(unknown) egid=17373(unknown) missing +x perm: /var/lib/dovecot/control/gbuser1, dir owned by 67593:17373 mode=0700)
Sep 1 05:40:34 host dovecot: lmtp(3176, lkrzyuser8): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: save failed to INBOX: BUG: Unknown internal error

Above is again the same case.

Sep 1 05:40:34 host dovecot: lmtp(3176, wm1user9): Error: lstat(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist.lock) failed: Permission denied
Sep 1 05:40:34 host dovecot: lmtp(3176, wm1user9): Error: file_dotlock_create(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist) failed: Permission denied (euid=128065(unknown) egid=17373(unknown) missing +x perm: /var/lib/dovecot/control/gbuser1, dir owned by 67593:17373 mode=0700)
Sep 1 05:40:34 host dovecot: lmtp(3176, wm1user9): Error: lstat(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist.lock) failed: Permission denied
Sep 1 05:40:34 host dovecot: lmtp(3176, wm1user9): Error: file_dotlock_create(/var/lib/dovecot/control/gbuser1/.INBOX/dovecot-uidlist) failed: Permission denied (euid=128065(unknown) egid=17373(unknown) missing +x perm: /var/lib/dovecot/control/gbuser1, dir owned by 67593:17373 mode=0700)
Sep 1 05:40:34 host dovecot: lmtp(3176, wm1user9): TDO+HNDpA1RoDAAA16XVAg: msgid=1ACE53B70631CA45B62348E4EE8757493731A59E@KRMXA41: save failed to INBOX: BUG: Unknown internal error

And here again the same problem, but with user wm1user9.

Sep 1 05:40:34 host dovecot: lmtp(3176): Disconnect from local: Successful quit

# doveadm user gbuser1
field	value
uid	67593
gid	17373
home	/var/mail/gbuser1/
mail	maildir:/var/mail/gbuser1/:CONTROL=/var/lib/dovecot/control/gbuser1

# doveadm user lkrzyuser8
field	value
uid	28371
gid	17373
home	/var/mail/lkrzyuser8/
mail	maildir:/var/mail/lkrzyuser8/:CONTROL
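The batch_max workaround mentioned above lives on the exim side, not in dovecot. A sketch of what it looks like, assuming an exim lmtp transport roughly like this is in use (the transport name and socket path are illustrative):

```
# exim transport delivering to dovecot's LMTP socket. batch_max = 1
# forces one recipient per LMTP session, sidestepping the user mixing
# at the cost of one delivery session per recipient.
dovecot_lmtp:
  driver = lmtp
  socket = /var/run/dovecot/lmtp
  batch_max = 1
```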
lmtp memory usage problem - Fatal: pool_system_realloc(268435456): Out of memory
Hi.

In my setup exim delivers mails to dovecot using LMTP. In one LMTP session
exim can deliver to up to 200 recipients (batch_max is set to that value).
Now the problem is that sometimes 256MB is not enough for dovecot lmtp to
handle incoming emails.

My questions:

- how big should the memory limit be for lmtp? I was thinking that lmtp
  (more or less) simply reads from one descriptor and writes to a file,
  then does rename() (maildir used here) and that's all. That shouldn't
  require a big amount of memory. So how do I determine the correct memory
  limit, and what affects this limit?

- is the number of recipients in one LMTP session important here? Not
  sure; maybe dovecot stores the email in memory first and then writes it
  to the user maildirs? Setting the (batch) limit to 1 could reduce memory
  usage then (since there would be no need to store anything in memory)?

Thanks,

Log:

Sep 4 16:10:30 mail dovecot: lmtp(21383, user): Fatal: pool_system_realloc(268435456): Out of memory
Sep 4 16:10:30 mail dovecot: lmtp(21383, user): Error: Raw backtrace:
 /usr/lib64/dovecot/libdovecot.so.0(+0x682a0) [0x7fe327e632a0] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x6837e) [0x7fe327e6337e] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_error+0) [0x7fe327e1dbf8] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x7d6a3) [0x7fe327e786a3] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_stream_grow_buffer+0x8f) [0x7fe327e6c4cf] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_stream_try_alloc+0x82) [0x7fe327e6c592] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x73b9b) [0x7fe327e6eb9b] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x73c36) [0x7fe327e6ec36] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_stream_read+0x53) [0x7fe327e6bad3] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x77391) [0x7fe327e72391] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_stream_read+0x53) [0x7fe327e6bad3] ->
 /usr/lib64/dovecot/libdovecot.so.0(i_stream_read_data+0x3d) [0x7fe327e6c2fd] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_stream_copy+0x7f) [0x7fe327e7cacf] ->
 /usr/lib64/dovecot/libdovecot.so.0(+0x83310) [0x7fe327e7e310] ->
 /usr/lib64/dovecot/libdovecot.so.0(o_stream_send_istream+0x4d) [0x7fe327e7c92d] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(maildir_save_continue+0x5a) [0x7fe32811b14a] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mail_storage_copy+0x88) [0x7fe328145328] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(maildir_copy+0x42) [0x7fe3281179c2] ->
 /usr/lib64/dovecot/plugins/lib10_quota_plugin.so(+0xb52b) [0x7fe32764352b] ->
 /usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_copy+0x6d) [0x7fe32814d29d] ->
 /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver_save+0x185) [0x7fe3283f1765] ->
 /usr/lib64/dovecot/libdovecot-lda.so.0(mail_deliver+0xeb) [0x7fe3283f1b6b] ->
 dovecot/lmtp() [0x405d80] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x3f) [0x7fe327e7398f] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xd7) [0x7fe327e74897] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x9) [0x7fe327e739f9] ->
 /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fe327e73a78] ->
 /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fe327e22ca3]
Sep 4 16:10:30 mail dovecot: lmtp(21383, user): Fatal: master: service(lmtp): child 21383 returned error 83 (Out of memory (service lmtp { vsz_limit=256 MB }, you may need to increase it) - set CORE_OUTOFMEM=1 environment to get core dump)

--
Arkadiusz Miśkiewicz, arekm / maven.pl
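As the final "vsz_limit=256 MB, you may need to increase it" log line hints, the per-service limit can be raised in the config. A minimal sketch (the value is an example, not a recommendation):

```
# Raise LMTP's address-space limit above the 256 MB that overflowed here.
service lmtp {
  vsz_limit = 512 M
}
```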
dovecot 2.2.13: LMTP delivery with multiple recipients incorrectly mixes users
:41:34 host dovecot: lmtp(4737): Disconnect from local: Successful quit

The problem is not easily repeatable. It happens several times a day, for
different users each time (while thousands of users are logging in), so I
guess some race condition takes place.

# dovecot -n
# 2.2.13: /etc/dovecot/dovecot.conf
doveconf: Warning: service auth { client_limit=1000 } is lower than required under max. load (8000)
doveconf: Warning: service anvil { client_limit=1000 } is lower than required under max. load (6003)
# OS: Linux 3.14.17-1 x86_64 xfs
auth_mechanisms = plain login
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@=
auth_username_translation = @=
auth_verbose = yes
default_process_limit = 2000
default_vsz_limit = 512 M
disable_plaintext_auth = no
first_valid_gid = 1500
first_valid_uid = 1500
lda_mailbox_autocreate = yes
lmtp_save_to_detail_mailbox = yes
login_greeting = Mail server ready.
mail_location = maildir:/var/mail/%Ln:CONTROL=/var/lib/dovecot/control/%Ln
mail_log_prefix = %s(%u): session=%{session},
mail_plugins = zlib quota
namespace {
  hidden = no
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  quota = fs:User quota:user
  quota2 = fs:Group quota:group
}
postmaster_address = postmas...@somwehere.pl
service auth {
  unix_listener auth-userdb {
    mode = 0666
  }
}
service imap {
  process_limit = 2048
}
service pop3 {
  process_limit = 1024
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocol lmtp {
  auth_username_format = %Ln
  auth_username_translation =
}
protocol imap {
  imap_logout_format = bytes=%i/%o
  mail_max_userip_connections = 20
  mail_plugins = zlib quota imap_quota mail_log notify
}
protocol pop3 {
  mail_max_userip_connections = 20
  mail_plugins = mail_log notify
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s, bytes=%i/%o
  pop3_uidl_format = %Mf
}

--
Arkadiusz Miśkiewicz, arekm / maven.pl
Re: rebuilding indexes for dovecot pop3
On Wednesday 20 of August 2014, Timo Sirainen wrote:
> On 18 Aug 2014, at 08:48, Arkadiusz Miśkiewicz <ar...@maven.pl> wrote:
> > Hi.
> >
> > doveadm -D fetch -u login \
> >   'imap.envelope imap.bodystructure size.virtual date.received' '*'
> >
> > according to the mail archives allows to regenerate indexes and precache
> > specified headers for imap usage.
> >
> > What headers are needed for the dovecot pop3 daemon?
>
> The only thing that POP3 needs is the UIDL. So depending on your
> pop3_uidl_format setting you may or may not need something indexed. But
> you could just fetch pop3.uidl and that'll figure it out automatically.

Ok, so I have imap and pop3 covered. What about deliver and LMTP? These
also update cache/index files afaik. How do these know what to put into the
cache?

> > I have one server where dovecot works for imap only. For pop3 there is
> > different software being used. I want to migrate to dovecot pop3 but I
> > need to precache and regenerate indexes for dovecot pop3.
>
> Make sure to preserve the UIDLs to avoid users redownloading mails.

Using %Mf already (which is compatible with the old pop3 daemon).

--
Arkadiusz Miśkiewicz, arekm / maven.pl
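Putting Timo's answer into the same form as the imap command quoted above, the pop3 precache run would look something like this (a sketch mirroring the thread's own command; "login" stands for the user being migrated):

```
doveadm -D fetch -u login pop3.uidl '*'
```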
rebuilding indexes for dovecot pop3
Hi.

doveadm -D fetch -u login \
  'imap.envelope imap.bodystructure size.virtual date.received' '*'

according to the mail archives allows to regenerate indexes and precache
specified headers for imap usage.

What headers are needed for the dovecot pop3 daemon?

I have one server where dovecot works for imap only. For pop3 there is
different software being used. I want to migrate to dovecot pop3, but I
need to precache and regenerate indexes for dovecot pop3. Note, some users
used pop3 only, so dovecot imap had no chance to generate any indexes.

Thanks,

--
Arkadiusz Miśkiewicz, arekm / maven.pl
auth_username_translation and LMTP problem
auth_username_translation seems to be applied to the RCPT TO address of the
LMTP transport. Why is dovecot doing that? And a better question: is there
a way to disable auth_username_translation for LMTP but leave it enabled
for the rest (imap, pop3 etc)?

Background: I'm doing

auth_username_translation = @=

to allow logins like a...@bbb.pl to be internally translated to
aaa=bbb.pl. That works fine.

Now my exim delivers mail to dovecot using LMTP and it does a translation
of its own, so it does:

RCPT TO: aaa=bbb...@mymbox.pl

Unfortunately it looks like dovecot is doing the translation one more time
and looking in the user database for aaa=bbb.pl=mymbox.pl, where such a
user doesn't exist. Only the aaa=bbb.pl user exists.

Thanks,

--
Arkadiusz Miśkiewicz, arekm / maven.pl
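To make the double application concrete, here is a toy model of the translation (my reading of the setting: auth_username_translation = @= maps every '@' to '='; tr reproduces exactly that character mapping):

```shell
#!/bin/sh
# Login time: the intended, single translation of the login name.
printf '%s\n' 'aaa@bbb.pl' | tr '@' '='
# LMTP time: the same mapping applied again to a RCPT TO address that
# exim has already rewritten, yielding a user that does not exist.
printf '%s\n' 'aaa=bbb.pl@mymbox.pl' | tr '@' '='
```

The first command prints aaa=bbb.pl (the real user); the second prints aaa=bbb.pl=mymbox.pl, which is exactly the nonexistent lookup key described above.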
Re: auth_username_translation and LMTP problem
On Friday 15 of August 2014, Arkadiusz Miśkiewicz wrote:
> auth_username_translation seems to be applied to the RCPT TO address of
> the LMTP transport. Why is dovecot doing that? And a better question: is
> there a way to disable auth_username_translation for LMTP but leave it
> enabled for the rest (imap, pop3 etc)?

Note, just tested:

auth_username_translation = @=
protocol lmtp {
  auth_username_translation =
}

This can be set, but it doesn't work. dovecot seems to still be doing the
translation even for lmtp.

--
Arkadiusz Miśkiewicz, arekm / maven.pl
macros
Does dovecot support any form of macros? I would like to share a
configuration file between several servers. The configs differ only in tiny
aspects. Something like:

dovecot-server.conf (different on each server):

%define ID 55
%define SOMETHING SELECT FROM * WHERE something
%define MECHANISMS digest-md5

dovecot-main.conf (common, shared config):

!include dovecot-server.conf
user_query = ${SOMETHING} AND id=${ID}
auth_mechanisms = plain login ${MECHANISMS}
etc

--
Arkadiusz Miśkiewicz, arekm / maven.pl
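Dovecot's config language has !include but, as far as I know, no user-defined macros, so a common workaround is to render the shared config from a template before reloading dovecot. A minimal sketch of that idea (file names, the @NAME@ placeholder syntax, and the SELECT query are made up for illustration):

```shell
#!/bin/sh
set -e

# Per-server values, one small file per host.
cat > /tmp/dovecot-server.env <<'EOF'
ID=55
MECHANISMS="digest-md5"
EOF

# Shared template with placeholders where per-server values go.
cat > /tmp/dovecot-main.conf.in <<'EOF'
user_query = SELECT * FROM users WHERE id=@ID@
auth_mechanisms = plain login @MECHANISMS@
EOF

# Render: substitute each placeholder with this host's value.
. /tmp/dovecot-server.env
sed -e "s/@ID@/$ID/g" -e "s/@MECHANISMS@/$MECHANISMS/g" \
    /tmp/dovecot-main.conf.in > /tmp/dovecot-main.conf

cat /tmp/dovecot-main.conf
```

One could then sanity-check the rendered file with doveconf before restarting dovecot.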
Re: [Dovecot] mail_log_events, but who exactly triggered events? [feature request]
On Thursday 30 of January 2014, Steffen Kaiser wrote:
> On Thu, 30 Jan 2014, Reindl Harald wrote:
> > Am 30.01.2014 12:04, schrieb Arkadiusz Miśkiewicz:
> > > On Thursday 30 of January 2014, Reindl Harald wrote:
> > > > Am 30.01.2014 10:50, schrieb Arkadiusz Miśkiewicz:
> > > > > mail_log_events is nice addition but how to log who exactly
> > > > > triggered particular event? For example 5 users from 5 IP
> > > > > addresses uses single imap user/mailbox. One of them deletes
> > > > > email and I'm logging delete related events. The only logged
> > > > > thing is:
> > > > >
> > > > > dovecot: imap(user): delete: box=INBOX, uid=673287,
> > > > > msgid=some@thing, size=1230
> > >
> > > Here is a feature request: Add optionally (or unconditionally)
> > > logging of session id in mail_log_events. Timo, is this possible?
> > >
> > > (the same session id that appears in login log entries:
> > > dovecot: imap-login: Login: user=someone2, method=PLAIN, rip=aaa,
> > > lip=yyy, mpid=11682, TLS, session=U1lD9y3xoQBPuvZx)
> > >
> > > So for example this would get logged:
> > >
> > > dovecot: imap(user): delete: box=INBOX, uid=673287, msgid=some@thing,
> > > size=1230, session=U1lD9y3xoQBPuvZx
>
> @Arkadiusz, please tell us, if 10 people use the same account name and
> password, how would you as a server behind the internet with a human
> brain differ those 10 individuals? The only idea I, personally, have is
> the IP address: Do they connect from different IP addresses _all_ the
> time? No NAT involved? Do you know who uses which IP address _all_ the
> time? If so, Dovecot logs the IP address during login and you can
> associate a PID with an IP address, IMHO you can add the remote IP
> address to the log string. Check out the variables page in the Wiki.
>
> But, frankly, _if_ you have someone, who is bad and deletes important
> mail, you should see sensible reason to disallow such work style. The
> next time you see yet another IP address and don't know the user again.

Ok, but why can't the session id that's assigned at login be logged in
mail_log_events, too? Is there any technical problem with this approach? It
solves the problem (yes, I assume different IP addresses; it obviously
won't work if the address is the same).

The discussion is now about changing the way the service is used by people,
while I'm more interested in what dovecot can do, or in (enhancing) dovecot
capabilities.

--
Arkadiusz Miśkiewicz, arekm / maven.pl
Re: [Dovecot] mail_log_events, but who exactly triggered events? [feature request]
On Tuesday 04 of February 2014, Steffen Kaiser wrote:
> On Tue, 4 Feb 2014, Arkadiusz Miśkiewicz wrote:
> > Date: Tue, 4 Feb 2014 13:09:15 +0100
> > From: Arkadiusz Miśkiewicz <ar...@maven.pl>
> > To: dovecot@dovecot.org
> > Subject: Re: [Dovecot] mail_log_events, but who exactly triggered
> > events? [feature request]
> >
> > On Thursday 30 of January 2014, Steffen Kaiser wrote:
> > > On Thu, 30 Jan 2014, Reindl Harald wrote:
> > > > Am 30.01.2014 12:04, schrieb Arkadiusz Miśkiewicz:
> > > > > On Thursday 30 of January 2014, Reindl Harald wrote:
> > > > > > Am 30.01.2014 10:50, schrieb Arkadiusz Miśkiewicz:
> > > > > > > mail_log_events is nice addition but how to log who exactly
> > > > > > > triggered particular event? For example 5 users from 5 IP
> > > > > > > addresses uses single imap user/mailbox. One of them deletes
> > > > > > > email and I'm logging delete related events. The only logged
> > > > > > > thing is:
> > > > > > >
> > > > > > > dovecot: imap(user): delete: box=INBOX, uid=673287,
> > > > > > > msgid=some@thing, size=1230
> > > > >
> > > > > Here is a feature request: Add optionally (or unconditionally)
> > > > > logging of session id in mail_log_events. Timo, is this possible?
> > > > >
> > > > > (the same session id that appears in login log entries:
> > > > > dovecot: imap-login: Login: user=someone2, method=PLAIN, rip=aaa,
> > > > > lip=yyy, mpid=11682, TLS, session=U1lD9y3xoQBPuvZx)
> > > > >
> > > > > So for example this would get logged:
> > > > >
> > > > > dovecot: imap(user): delete: box=INBOX, uid=673287,
> > > > > msgid=some@thing, size=1230, session=U1lD9y3xoQBPuvZx
>
> did you've tried this: http://wiki2.dovecot.org/Variables
> there is the session variable and the mail_log_prefix setting.
> Should work, IMHO.

Wow, easy. Works nicely. Thanks!

--
Arkadiusz Miśkiewicz, arekm / maven.pl
[Dovecot] mail_log_events, but who exactly triggered events?
Hi.

mail_log_events is a nice addition, but how to log who exactly triggered
a particular event? For example, 5 users from 5 IP addresses use a
single imap user/mailbox. One of them deletes an email and I'm logging
delete-related events. The only logged thing is:

dovecot: imap(user): delete: box=INBOX, uid=673287, msgid=some@thing, size=1230

which tells me nothing about who actually triggered it (note that all 5
users were logged in at deletion time). How to solve this problem?

Thanks,
-- 
Arkadiusz Miśkiewicz, arekm / maven.pl
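For context, the setup being discussed looks roughly like this; a sketch based on the mail_log plugin documentation (the plugin requires notify to be loaded, and the event/field names here are the documented ones):

```
# dovecot.conf -- mail_log plugin, minimal sketch
mail_plugins = $mail_plugins notify mail_log

plugin {
  mail_log_events = delete undelete expunge copy
  mail_log_fields = uid box msgid size
}
```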
Re: [Dovecot] mail_log_events, but who exactly triggered events?
On Thursday 30 of January 2014, Reindl Harald wrote:
> Am 30.01.2014 10:50, schrieb Arkadiusz Miśkiewicz:
>> mail_log_events is a nice addition, but how to log who exactly
>> triggered a particular event? For example, 5 users from 5 IP addresses
>> use a single imap user/mailbox. One of them deletes an email and I'm
>> logging delete-related events. The only logged thing is:
>> dovecot: imap(user): delete: box=INBOX, uid=673287, msgid=some@thing, size=1230
>> which tells me nothing about who actually triggered it (note that all
>> 5 users were logged in at deletion time). How to solve this problem?
>
> do not share user-logins

I'm not sharing. Customers are.

> don't do that for any service, not only mail

That's impossible to enforce. A customer creates login abc on my server
and gives it to 10 employees to watch that mailbox. The 10 employees log
in to that single account and do some actions. One of them is bad and
deletes important mail. I want to be able to figure out which one. I
have no control over customers. Also, I see no sensible reason to
disallow such a work style.

> that's why ACL / shared mailboxes exist, because in that case you have
> the unique username in the logs instead of always the same one

When customers log in:

dovecot: pop3-login: Login: user=someone1, method=PLAIN, rip=xxx, lip=yyy, mpid=11680, session=MR9D9y3xhwBb6rD1
dovecot: imap-login: Login: user=someone2, method=PLAIN, rip=aaa, lip=yyy, mpid=11682, TLS, session=U1lD9y3xoQBPuvZx

the session id is logged. Now how to get that id logged in the
mail_log_events lines?

-- 
Arkadiusz Miśkiewicz, arekm / maven.pl
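Once the session id appears in both the login line and the event line, tying a delete back to a client IP is a plain grep. A sketch over sample lines from this post; the event-line format with the session id in the prefix is an assumption (it requires mail_log_prefix to include the session variable):

```shell
# Build a sample log from the lines quoted above, then pull out every
# line carrying one session id: the delete plus the login that owns it
# (and thus its rip= client address).
log=$(mktemp)
cat > "$log" <<'EOF'
dovecot: pop3-login: Login: user=someone1, method=PLAIN, rip=xxx, lip=yyy, mpid=11680, session=MR9D9y3xhwBb6rD1
dovecot: imap-login: Login: user=someone2, method=PLAIN, rip=aaa, lip=yyy, mpid=11682, TLS, session=U1lD9y3xoQBPuvZx
dovecot: imap(user)<U1lD9y3xoQBPuvZx>: delete: box=INBOX, uid=673287, msgid=some@thing, size=1230
EOF
# Matches the imap-login line and the delete event, but not the
# unrelated pop3 session.
grep 'U1lD9y3xoQBPuvZx' "$log"
```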
Re: [Dovecot] Dovecot MTA
On Friday 08 of November 2013, Timo Sirainen wrote:
> My main design goals for the MTA are:
> [...]
> * Configuration: It would take years to implement all of the settings
>   that Postfix has, but I think it's not going to be necessary. In
>   fact, I think the number of new settings to dovecot.conf that Dovecot
>   MTA requires would be very minimal. Instead, nearly all of the
>   configuration could be done using Sieve scripts. We'd need to
>   implement some new MTA-specific Sieve extensions and a few core
>   features/configurations/databases that the scripts can use, but
>   after that there wouldn't really be any limits to what could be done
>   with them.

What I would love is configuration flexibility: some simplified
programming language for configuration, allowing you to do magic things
with this new MTA instead of being limited by fixed configuration
boundaries. exim allows much of this flexibility (even delivery
dependent on the moon phase can easily be implemented), but its
configuration language is horrible.

(For simple-MTA lovers: http://opensmtpd.org/)

-- 
Arkadiusz Miśkiewicz, arekm / maven.pl
[Dovecot] mdbox and filesystem quota
http://wiki2.dovecot.org/MailboxFormat/dbox says:

> Expunging a message only decreases the message's refcount. The space
> is later freed in the purge step. This is typically done in a nightly
> cronjob when there's less disk I/O activity.

What happens if a filesystem hard quota is exceeded? Will dovecot allow
deleting mails to free space, without having to wait for the cronjob to
do the purge?

-- 
Arkadiusz Miśkiewicz  PLD/Linux Team
arekm / maven.pl  http://ftp.pld-linux.org/
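For context, the purge step the wiki describes is the doveadm purge command run from cron. A sketch; the schedule, file path, and user are illustrative, while the command itself is the documented one:

```
# /etc/cron.d/dovecot-purge -- nightly mdbox purge for all users,
# run when there's less disk I/O activity
30 3 * * *  root  doveadm purge -A
```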
Re: [Dovecot] mdbox and filesystem quota
On Sunday 18 of March 2012, Timo Sirainen wrote:
> On 18.3.2012, at 23.00, Arkadiusz Miśkiewicz wrote:
>> http://wiki2.dovecot.org/MailboxFormat/dbox says:
>> "Expunging a message only decreases the message's refcount. The space
>> is later freed in the purge step. This is typically done in a nightly
>> cronjob when there's less disk I/O activity."
>> What happens if a filesystem hard quota is exceeded? Will dovecot
>> allow deleting mails to free space, without having to wait for the
>> cronjob to do the purge?
>
> No. Also, the purging itself won't work, because it needs to write new
> data first before it can delete old data. Don't run out of disk space!

Can dovecot treat a soft quota like a hard quota for the user, then? Or,
better, enforce quota based on filesystem quota information. With xfs I
can set a quota but turn enforcement off: all the fs quota counters
work, but no enforcement is done by xfs itself.

-- 
Arkadiusz Miśkiewicz  PLD/Linux Team
arekm / maven.pl  http://ftp.pld-linux.org/
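The accounting-only xfs setup described above, wired to Dovecot's filesystem-quota backend, would look roughly like this. A sketch: the device, mount point, and quota root name are chosen for illustration; the uqnoenforce mount option and the quota-fs backend are documented in the xfs mount options and on the Quota/FS wiki page respectively:

```
# /etc/fstab -- xfs user quota in accounting-only mode: the kernel
# keeps the usage counters up to date but never blocks writes itself
#   /dev/sdb1  /var/mail  xfs  defaults,uqnoenforce  0 0

# dovecot.conf -- Dovecot's quota-fs backend then reads those counters
# and does the enforcement at the IMAP/LDA level, so mails can still be
# deleted (and purge can still write) when the quota is exceeded
mail_plugins = $mail_plugins quota

plugin {
  quota = fs:User quota
}
```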