Re: The end of Dovecot Director?

2022-11-02 Thread Tom Sommer

On 2022-11-01 16:58, Mark Moseley wrote:

This *feels* to me like a parent company looking to remove features 
from the open source version in order to add feature differentiation to 
the paid version.


I've loved the Dovecot project for over a decade and a half. And 
incidentally I have a very warm spot in my heart for Timo and Aki, 
thanks to Dovecot and especially this mailing list.


I've also loved the PowerDNS project for a decade and a half, so this 
removal of _existing functionality_ is doubly worrisome. I'd like both 
projects to be monetisable and profitable enough to their parent so 
that they continue on for a very, very long time.


But removing long-standing features is a bad look. Please reconsider 
this decision.


Big +1

---
Tom


Re: The end of Dovecot Director?

2022-10-21 Thread Tom Sommer

To be clear, you are removing the Director...

---
Tom

On 2022-10-21 13:28, Aki Tuomi wrote:
To be clear, we are not removing proxying features from Dovecot either. 
Just the director ring feature.


Aki


On 21/10/2022 14:14 EEST Amol Kulkarni  wrote:


Nginx has a mail proxy for POP3, IMAP and SMTP.
Can it be used instead of Director?
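For context (not from the thread): nginx's mail proxy does not itself 
keep a user-to-backend map; it asks an external auth_http service which 
backend to use for each login, so Director-style user affinity would 
have to be implemented in that service. A minimal sketch, hostnames and 
ports hypothetical:

```nginx
mail {
    server_name mail.example.com;

    # nginx queries this HTTP endpoint for every login; the endpoint
    # answers with Auth-Server/Auth-Port headers naming the backend,
    # so per-user affinity must be implemented there.
    auth_http 127.0.0.1:9000/auth;

    server {
        listen   143;
        protocol imap;
    }
    server {
        listen   110;
        protocol pop3;
    }
}
```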


On Fri, 21 Oct 2022 at 16:21,  wrote:
> On 2022-10-21 10:51, Zhang Huangbin wrote:
>  >> On Oct 21, 2022, at 5:23 PM, hi@zakaria.website wrote:
>  >>
>  >> I was wondering if one can achieve the same implementation with
>  >> haproxy without dovecot director?
>  >
>  > The most important part of Director is that it makes sure the same
>  > mail user is always proxied to the same backend IMAP server.
>  >
>  > If the mailbox is in Maildir format (and stored on shared storage
>  > like NFS), accessing it from different servers may corrupt Dovecot
>  > index files and the mailbox becomes inaccessible. Director neatly
>  > avoids this issue.
>  >
>  > HAProxy can proxy connections from the same client IP to the same
>  > backend IMAP server, but not the same mail user from different IPs.
>  >
>  > Quote (https://doc.dovecot.org/admin_manual/director/dovecotdirector/):
>  >
>  > "Director can be used by Dovecot’s IMAP/POP3/LMTP proxy to keep a
>  > temporary user -> mail server mapping. As long as user has simultaneous
>  > connections, the user is always redirected to the same server. Each
>  > proxy server is running its own director process, and the directors are
>  > communicating the state to each others. Directors are mainly useful for
>  > setups where all of the mail storage is seen by all servers, such as
>  > with NFS or a cluster filesystem."
>  >
>  > 
>  > Zhang Huangbin, founder of:
>  > - iRedMail: Open source email server solution:
>  > https://www.iredmail.org/
>  > - Spider: Lightweight, on-premises Email Archiving Software:
>  > https://spiderd.io
>
>  Aha, that makes sense, although I don't quite see how the index
>  files can get corrupted when they are updated. Isn't that the same as
>  updates coming from different connections, e.g. opening an email
>  account from different client apps, over different connections, which
>  does not corrupt the index files?
>
>  Also, is the issue Director resolves that of keeping a logged-in
>  Dovecot connection on the same backend? Anyhow, thanks for your
>  valuable efforts in clearing this up :)
>
>  I wondered if there is any other solution to avoid corrupting the
>  index files? Perhaps if Dovecot offered database-backed indexes as
>  well as login sessions, that would eliminate the need for Director
>  and offer better high availability. For now only userdb/authdb are
>  available, as far as I know, and using a database cluster already
>  resolves user and auth queries during simultaneous connections to
>  different backends.
>
>  Otherwise, it seems a large enterprise deployment with high
>  availability will still need a Director implementation; hopefully we
>  will find an alternative solution by the time Dovecot 3 is released.
>
>  I might need to get my head around building Dovecot with customised
>  modules, reviewing the code that was removed, and putting it back. If
>  anyone is planning to do this and is well ahead of me, please let me
>  know; we might be able to help one another.
>
>  With thanks.
>
>  Zakaria.
>
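For readers skimming the archive, the guarantee described above (the 
same user always landing on the same backend, independent of client IP) 
can be modelled in a few lines. This is a toy sketch, not Dovecot's 
actual director ring algorithm, and the hostnames are hypothetical:

```python
import hashlib

# Hypothetical backend pool. Director's core guarantee, modelled simply:
# a deterministic user -> backend mapping that ignores the client IP.
BACKENDS = ["imap1.example.net", "imap2.example.net", "imap3.example.net"]

def backend_for(user: str) -> str:
    """Hash the username to pick a backend deterministically."""
    digest = hashlib.md5(user.encode("utf-8")).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]
```

IP-based balancing (e.g. HAProxy's "balance source") hashes the client 
address instead of the username, which is exactly why the same user 
connecting from two IPs can land on two different backends.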


Re: Monitoring of director back end nodes

2021-07-31 Thread Tom Sommer

On 2021-07-31 19:28, darkc0de wrote:


So dovemon for Dovecot pro, but nothing for community?


https://github.com/brandond/poolmon

---
Tom


Re: Where is dovemon

2021-01-14 Thread Tom Sommer



On 2021-01-13 12:30, li...@mlserv.org wrote:


I found this link in the documentation:

https://doc.dovecot.org/configuration_manual/dovemon/

But where can I find the program "dovemon"? I searched all over 
without luck: in the source code, on Google, nothing. It seems as if 
only the web page exists.


Can somebody help me, please?


Don't know why such a nifty utility is commercial-only. Take a look at 
https://github.com/brandond/poolmon as an alternative.


--
Tom


systemd unit should depend on remote-fs

2020-09-02 Thread Tom Sommer
Shouldn't the shipped systemd unit file have "remote-fs.target" added 
to "After="?
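A sketch of what that could look like as a drop-in override (the path 
is illustrative):

```ini
# /etc/systemd/system/dovecot.service.d/remote-fs.conf
[Unit]
# Order Dovecot after remote filesystems (e.g. NFS mail storage) are
# mounted; on shutdown it is then stopped before they are unmounted.
After=remote-fs.target
```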


--
Tom


shutdown_clients ignored on Director upgrade/restart

2020-09-01 Thread Tom Sommer
When yum updates a Dovecot Director, it restarts all processes - however 
a few imap-login and pop3-login processes hang because they are still 
proxying connections.


It's like the now-orphaned processes are stalled, and they need a kill 
-9 to die.


This is a problem because those stalled connections are still connected 
to backends, so when new connections are made from the same account, 
they do not end up on the same backend and thus index corruption happens 
(same account, two different backends).


shutdown_clients is enabled, but it appears Dovecot restarts regardless, 
without verifying that all processes/clients have shut down.


Just had this happen upon a yum update from dovecot-2.3.11.3-3 to 
dovecot-2.3.11.3-4.


--
Tom


Re: Flush userdb cache entry

2020-02-19 Thread Tom Sommer




On 2020-02-19 10:14, Tom Sommer wrote:


I have a problem when users change quota in SQL, the change is not
reflected in Dovecot immediately so quota_warning is still triggered
on some occasions.
I think it's because the quota is stored in userdb, and there is no
way to flush userdb cache?

Is there a way to flush userdb entries?
What controls userdb cache TTL/size? auth_cache entries?


Okay, I think I found the issue. userdb TTL keeps being pushed every 
time a user logs in (like auth), so in a sense it will never expire if a 
user logs in every minute.


I could run "doveadm auth cache flush -u f...@bar.com" on the director, 
but it doesn't offer the [-S <socket_path>] option to proxy the command 
to the current backend server.


Any chance of getting the [-S <socket_path>] option added to "doveadm 
auth cache flush"?


---
Tom


Flush userdb cache entry

2020-02-19 Thread Tom Sommer

Hi all

I have a problem: when users change their quota in SQL, the change is 
not reflected in Dovecot immediately, so quota_warning is still 
triggered on some occasions.
I think it's because the quota is stored in userdb, and there is no way 
to flush the userdb cache?


Is there a way to flush userdb entries?
What controls userdb cache TTL/size? auth_cache entries?

I can't seem to find any documentation on this, sadly :(
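For anyone landing here from a search: the userdb/passdb lookup cache 
is governed by the auth_cache_* settings, and cached entries can be 
dropped with "doveadm auth cache flush". A sketch with illustrative 
values:

```
# dovecot.conf
auth_cache_size = 10M            # 0 disables the auth cache entirely
auth_cache_ttl = 1 hour          # lifetime of positive entries
auth_cache_negative_ttl = 1 hour # lifetime of failed-lookup entries
```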

Thanks

--
Tom


Sieve prevents trigger of quota-warning

2020-01-31 Thread Tom Sommer

Hi All

I have a few reports from Sieve users that they are not receiving quota 
warnings.


It appears that when a mailbox is nearing its quota, and Sieve is 
responsible for moving the mail due to a rule, the quota-warning 
directive is not triggered?
It works fine if Sieve is not part of the flow and LMTP alone handles 
saving the mail.


Does anyone have a fix or insights into this?

2.3.9.2

Thanks

--
Tom


Re: v2.3.9 released

2019-12-06 Thread Tom Sommer via dovecot



On 2019-12-04 11:34, Aki Tuomi via dovecot wrote:


+ Add lmtp_add_received_header setting. It can be used to prevent LMTP
  from adding "Received:" headers.


Thank you for this :)
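For the archives, the new setting is used like this (sketch):

```
# dovecot.conf, v2.3.9 or later
lmtp_add_received_header = no
```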

--
Tom


Re: About "received" header when using Dovecot proxy

2019-12-02 Thread Tom Sommer via dovecot



On 2019-12-02 13:42, Riku via dovecot wrote:

Hello.
My name is Riku.

Currently, I use Dovecot as a proxy for another SMTP server.
However, this seems to cause the IP address of the "received" header
to be that of the proxy server.
Is it possible to change this so that the IP address of the sender is 
entered?

The version of Dovecot is "2.3.8 (9df20d2db)".
Sorry for the incomprehensible explanation.


This has been discussed a few times on the list already, and I believe 
there is a fix coming at some point: 
https://github.com/dovecot/core/pull/74


Currently there is none

--
Tom


Re: Dovecot v2.3.8 released

2019-10-20 Thread Tom Sommer via dovecot



On 2019-10-20 11:30, Timo Sirainen via dovecot wrote:
On 18 Oct 2019, at 13.43, Tom Sommer via dovecot  
wrote:



Quite a lot of mail downloads for a single session. I wonder if the
user really had that many new mails or if they were being redownloaded
for some reason?

Oct 18 13:40:56 imap()<7552>: Debug: Mailbox 
Junk: Mailbox opened because: autoexpunge
Oct 18 13:40:56 imap()<7552>: Debug: Mailbox 
Junk E-mail: Mailbox opened because: autoexpunge
Oct 18 13:40:56 imap()<7552>: Info: Connection 
closed: read(size=7902) failed: Connection reset by peer (UID FETCH 
running for 0.542 + waiting input/output for 78.357 secs, 60 B in + 
39221480+8192 B out, state=wait-output) in=290 out=39401283 deleted=0 
expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=94 
body_bytes=39210315


state=wait-output means Dovecot was waiting for client to read the
data it is sending.


This made me think it might be a connection (TCP/IP) thing.

My Director is running with 3 listen IPs; I removed two of them and this 
appears to have made the errors stop.


Unsure if anything was changed in this regard between 2.3.7.2 and 2.3.8.

---
Tom


Re: Dovecot v2.3.8 released

2019-10-20 Thread Tom Sommer via dovecot



On 2019-10-20 11:30, Timo Sirainen via dovecot wrote:
On 18 Oct 2019, at 13.43, Tom Sommer via dovecot  
wrote:


I am seeing a lot of errors since the upgrade, on multiple client 
accounts:

Info: Connection closed: read(size=7902) failed: Connection reset by
peer (UID FETCH running for 0.242 + waiting input/output for 108.816
secs, 60 B in + 24780576+8192 B out, state=wait-output)
Using NFS storage (not running with the mail_nfs_index or 
mail_nfs_storage)

Was something changed in terms of IO/Idle timeouts?


We are also seeing different I/O patterns since the upgrade; more I/O 
is being used than normal.


What was the previous version you were running?


2.3.7.2


This is mail_debug from one of the accounts in question:

Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: Mailbox opened because: SELECT
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17854: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17854: Opened mail because: full mail

..
Oct 18 13:39:48 imap()<7552>: Debug: Mailbox 
INBOX: UID 17947: Opened mail because: full mail


Quite a lot of mail downloads for a single session. I wonder if the
user really had that many new mails or if they were being redownloaded
for some reason?


They might redownload because of UID FETCH failing?

Oct 18 13:40:56 imap()<7552>: Debug: Mailbox 
Junk: Mailbox opened because: autoexpunge
Oct 18 13:40:56 imap()<7552>: Debug: Mailbox 
Junk E-mail: Mailbox opened because: autoexpunge
Oct 18 13:40:56 imap()<7552>: Info: Connection 
closed: read(size=7902) failed: Connection reset by peer (UID FETCH 
running for 0.542 + waiting input/output for 78.357 secs, 60 B in + 
39221480+8192 B out, state=wait-output) in=290 out=39401283 deleted=0 
expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=94 
body_bytes=39210315


state=wait-output means Dovecot was waiting for client to read the
data it is sending. In v2.3.7 there was some changes related to this,
but were you previously successfully running v2.3.7? In v2.3.8 I can't
really think of such changes.


Yes, we were successfully running 2.3.7.2 before; the issue started just 
after the upgrade.


Could it be related to changes in the indexes, increasing I/O?

There were no input/output errors in the logs prior to 2.3.8

---
Tom


Re: Dovecot v2.3.8 released

2019-10-18 Thread Tom Sommer via dovecot




On 2019-10-18 14:55, Aki Tuomi via dovecot wrote:
On 18/10/2019 14:25 Tom Sommer via dovecot  
wrote:



On 2019-10-08 13:18, Aki Tuomi via dovecot wrote:
> https://dovecot.org/releases/2.3/dovecot-2.3.8.tar.gz
> https://dovecot.org/releases/2.3/dovecot-2.3.8.tar.gz.sig
> Binary packages in https://repo.dovecot.org/

> - imap: SETMETADATA with literal value would delete the metadata value
> instead of updating it.
> - imap: When client issues FETCH PREVIEW (LAZY=FUZZY) command, the
> caching decisions should be updated so that newly saved mails will have
> the preview cached.
> - With mail_nfs_index=yes and/or mail_nfs_storage=yes setuid/setgid
> permission bits in some files may have become dropped with some NFS
> servers. Changed NFS flushing to now use chmod() instead of chown().

I am seeing a lot of errors since the upgrade, on multiple client
accounts:

Info: Connection closed: read(size=7902) failed: Connection reset by
peer (UID FETCH running for 0.242 + waiting input/output for 108.816
secs, 60 B in + 24780576+8192 B out, state=wait-output)



This is not actually an error, it's indicating that the remote end
disconnected mid-command.


Right, but this is a symptom of the increased I/O, and thus latency, in 
2.3.8, so the client disconnects while waiting for a response?


---
Tom


Re: Dovecot v2.3.8 released

2019-10-18 Thread Tom Sommer via dovecot




On 2019-10-18 13:25, Tom Sommer via dovecot wrote:

I am seeing a lot of errors since the upgrade, on multiple client 
accounts:


Info: Connection closed: read(size=7902) failed: Connection reset by
peer (UID FETCH running for 0.242 + waiting input/output for 108.816
secs, 60 B in + 24780576+8192 B out, state=wait-output)

Using NFS storage (not running with the mail_nfs_index or 
mail_nfs_storage)


Was something changed in terms of IO/Idle timeouts?


We are also seeing different I/O patterns since the upgrade; more I/O 
is being used than normal.


This is mail_debug from one of the accounts in question:

Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: Mailbox opened because: SELECT
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17854: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17854: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17855: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17855: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17856: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17856: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17857: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17857: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17858: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17858: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17859: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17859: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17860: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17860: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17861: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17861: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17862: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17862: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17863: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17863: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17864: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17864: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17865: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17865: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17866: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17866: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17867: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17867: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17868: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17868: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17869: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17869: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17870: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17870: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17871: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17871: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17872: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17872: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17873: Opened mail because: prefetch
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17873: Opened mail because: full mail
Oct 18 13:39:37 imap()<7552>: Debug: Mailbox 
INBOX: UID 17874: Opened mail because: prefetch

Re: Dovecot v2.3.8 released

2019-10-18 Thread Tom Sommer via dovecot

On 2019-10-08 13:18, Aki Tuomi via dovecot wrote:

https://dovecot.org/releases/2.3/dovecot-2.3.8.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.8.tar.gz.sig
Binary packages in https://repo.dovecot.org/



- imap: SETMETADATA with literal value would delete the metadata value
instead of updating it.
- imap: When client issues FETCH PREVIEW (LAZY=FUZZY) command, the
caching decisions should be updated so that newly saved mails will have
the preview cached.
- With mail_nfs_index=yes and/or mail_nfs_storage=yes setuid/setgid
permission bits in some files may have become dropped with some NFS
servers. Changed NFS flushing to now use chmod() instead of chown().


I am seeing a lot of errors since the upgrade, on multiple client 
accounts:


Info: Connection closed: read(size=7902) failed: Connection reset by 
peer (UID FETCH running for 0.242 + waiting input/output for 108.816 
secs, 60 B in + 24780576+8192 B out, state=wait-output)


Using NFS storage (not running with the mail_nfs_index or 
mail_nfs_storage settings)
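For context, these are the NFS-related settings being referred to (a 
sketch with illustrative values; consult the Dovecot NFS documentation 
before changing any of them):

```
# dovecot.conf
mmap_disable = yes      # don't mmap index files over NFS
mail_fsync = always     # fsync after writes to keep NFS coherent
mail_nfs_storage = no   # extra NFS cache flushes for mail files
mail_nfs_index = no     # extra NFS cache flushes for index files
```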


Was something changed in terms of IO/Idle timeouts?

Tried doing a reindex/resync, but it did not help.

--
Tom


Re: fts-elastic plugin

2019-09-19 Thread Tom Sommer via dovecot

On 2019-09-19 10:24, Filip Hanes via dovecot wrote:


Hi all,
I have recently worked on fts plugin for ElasticSearch.
https://github.com/filiphanes/fts-elastic
It is forked from fts-elasticsearch as you can see in PR 
https://github.com/atkinsj/fts-elasticsearch/pull/21


In my opinion fts-elastic is now functionally on a par with the 
currently recommended FTS plugin, fts-solr.
I would like to implement a proper rescan, inspired by fts-lucene, so it 
would become the most complete FTS plugin for Dovecot.


Awesome. Thank you for this.

---
Tom


Re: Force dovecot-uidlist reset

2019-09-09 Thread Tom Sommer via dovecot
Nevermind, the indexes were not deleted correctly - the method described 
below works :)


---
Tom

On 2019-09-09 09:45, Tom Sommer via dovecot wrote:

Is there a way to force Dovecot to rebuild dovecot-uidlist from zero?

It seems deleting all indexes and dovecot-* files followed by "doveadm
force-resync" is not enough? It just gets the same UIDs, perhaps from
the Maildir filenames? But I would like to reset the UID of all mails.

Thanks


Force dovecot-uidlist reset

2019-09-09 Thread Tom Sommer via dovecot

Is there a way to force Dovecot to rebuild dovecot-uidlist from zero?

It seems deleting all indexes and dovecot-* files followed by "doveadm 
force-resync" is not enough? It just gets the same UIDs, perhaps from 
the Maildir filenames? But I would like to reset the UID of all mails.


Thanks

--
Tom


Re: Feature wishlist: Allow to hide client IP/host in submission service

2019-08-28 Thread Tom Sommer via dovecot



On 2019-08-28 14:07, Timo Sirainen via dovecot wrote:

On 25 Aug 2019, at 21.51, Sebastian Krause via dovecot
 wrote:


Hi,

In many mail setups a required feature (for privacy reasons) is to
hide the host and IP of clients (in the "Received" header) that use
the authenticated submission over port 587. In Postfix that's
possible (https://serverfault.com/q/413533/86332), but not very nice
to configure, especially if you only want to strip the Received
header for port 587 submissions, but not on port 25.

As far as I can see this configuration is not possible at all in the
Dovecot submission server because the function which adds the
Received header with the client's IP address
(smtp_server_transaction_write_trace_record) is always called in
submission-commands.c.

It would be very useful if the submission server could anonymize the
client with a single configuration option, then all the Postfix
configuration mess (and using SASL) could be skipped by simply using
the Dovecot submission server instead.


Yeah, it would be useful to hide the client's IP and do it by default.
Actually I think there shouldn't even be an option to not hide it. Or
would it be better or worse to just not have the Received header added
at all?


Better to just remove the Received header entirely.

Maybe make lmtp_add_received_header work on submission as well?


Re: Feature wishlist: Allow to hide client IP/host in submission service

2019-08-27 Thread Tom Sommer via dovecot

On 2019-08-25 20:51, Sebastian Krause via dovecot wrote:

Hi,

In many mail setups a required feature (for privacy reasons) is to
hide the host and IP of clients (in the "Received" header) that use
the authenticated submission over port 587. In Postfix that's
possible (https://serverfault.com/q/413533/86332), but not very nice
to configure, especially if you only want to strip the Received
header for port 587 submissions, but not on port 25.

As far as I can see this configuration is not possible at all in the
Dovecot submission server because the function which adds the
Received header with the client's IP address
(smtp_server_transaction_write_trace_record) is always called in
submission-commands.c.

It would be very useful if the submission server could anonymize the
client with a single configuration option, then all the Postfix
configuration mess (and using SASL) could be skipped by simply using
the Dovecot submission server instead.

The anonymization would work by replacing the client's EHLO host
with "submission" and the IP address with 127.0.0.1. In full the
Received header would look something like this where the first line
is always the same:

Received: from submission (unknown [127.0.0.1])
   by mail.example.com with ESMTPSA
   id 8bV9D+51Yl1FOwAA1ctoJQ
   (envelope-from )
   for ; Sun, 25 Aug 2019 13:50:06 +0200
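(For reference, the Postfix workaround alluded to above is typically a 
dedicated cleanup service for the submission port; a sketch, with the 
service name and file path hypothetical:)

```
# master.cf: give port 587 its own cleanup service
submission inet n       -       n       -       -       smtpd
  -o cleanup_service_name=subcleanup
subcleanup unix n       -       n       -       0       cleanup
  -o header_checks=regexp:/etc/postfix/submission_header_checks

# /etc/postfix/submission_header_checks
/^Received:/    IGNORE
```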


Check https://github.com/dovecot/core/pull/74

Unsure if it covers Submission though


Re: Dovecot v2.3.7.1 & Pigeonhole v0.5.7.1 released

2019-07-23 Thread Tom Sommer via dovecot



On 2019-07-23 14:01, Timo Sirainen via dovecot wrote:

https://dovecot.org/releases/2.3/dovecot-2.3.7.1.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.7.1.tar.gz.sig

https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.7.1.tar.gz
https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.7.1.tar.gz.sig

Binary packages in https://repo.dovecot.org/

These releases fix the reported regressions in v2.3.7 & v0.5.7.

Dovecot core:
- Fix TCP_NODELAY errors being logged on non-Linux OSes
- lmtp proxy: Fix assert-crash when client uses BODY=8BITMIME
- Remove wrongly added checks in namespace prefix checking

Pigeonhole:
- dsync: Sieve script syncing failed if mailbox attributes weren't
  enabled.


Thank you


Re: Dovecot release v2.3.7

2019-07-17 Thread Tom Sommer via dovecot
On 2019-07-16 09:46, Timo Sirainen via dovecot wrote:

> On 13 Jul 2019, at 14.44, Tom Sommer via dovecot  wrote:
> 
>> LMTP is broken on director:
>> 
>> Jul 13 13:42:41 lmtp(34824): Panic: file smtp-params.c: line 685 
>> (smtp_params_mail_add_body_to_event): assertion failed: ((caps & 
>> SMTP_CAPABILITY_8BITMIME) != 0)
> 
> Thanks, fixed: 
> https://github.com/dovecot/core/commit/c4de81077c11d09eddf6a5c93676ee82350343a6

Thanks. Is there a new release?

Re: Dovecot release v2.3.7

2019-07-13 Thread Tom Sommer via dovecot




On 2019-07-13 13:44, Tom Sommer via dovecot wrote:

LMTP is broken on director:

Jul 13 13:42:41 lmtp(34824): Panic: file smtp-params.c: line 685
(smtp_params_mail_add_body_to_event): assertion failed: ((caps &
SMTP_CAPABILITY_8BITMIME) != 0)
Jul 13 13:42:41 lmtp(34824): Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0(+0xdf06a) [0x7f561203806a] ->
/usr/lib64/dovecot/libdovecot.so.0(+0xdf0b1) [0x7f56120380b1] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x40161) [0x7f5611f99161] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x44fc4) [0x7f5611f9dfc4] ->
/usr/lib64/dovecot/libdovecot.so.0(smtp_client_transaction_start+0x78)
[0x7f5611fa7a58] -> dovecot/lmtp [172.17.165.6 MAIL
FROM](lmtp_proxy_rcpt+0xa37) [0x558eaf181d67] -> dovecot/lmtp
[172.17.165.6 MAIL FROM](client_default_cmd_rcpt+0x9e)
[0x558eaf17eebe] ->
/usr/lib64/dovecot/libdovecot.so.0(smtp_server_cmd_rcpt+0x222)
[0x7f5611fafd12] ->
/usr/lib64/dovecot/libdovecot.so.0(smtp_server_command_new+0x150)
[0x7f5611fb58a0] -> /usr/lib64/dovecot/libdovecot.so.0(+0x606c8)
[0x7f5611fb96c8] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55)
[0x7f561204f275] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x95)
[0x7f561204f3a5] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f561204f588]
-> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13)
[0x7f5611fc68f3] -> dovecot/lmtp [172.17.165.6 MAIL FROM](main+0x289)
[0x558eaf17dc19] -> /lib64/libc.so.6(__libc_start_main+0x100)
[0x7f5611be3d20] -> dovecot/lmtp [172.17.165.6 MAIL FROM](+0x5849)
[0x558eaf17d849]



Downgraded to 2.3.6 (built from source, because it's gone from the repo) 
- works.

--
Tom


Re: Dovecot release v2.3.7

2019-07-13 Thread Tom Sommer via dovecot
Can you please put dovecot-2.3.6-2.x86_64 back in the repo so we can 
downgrade?


---
Tom

On 2019-07-12 14:29, Aki Tuomi via dovecot wrote:

Hi!

We are pleased to release Dovecot release v2.3.7.

Tarball is available at

https://dovecot.org/releases/2.3/dovecot-2.3.7.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.7.tar.gz.sig

Binary packages are available at https://repo.dovecot.org/

Changes
---

* fts-solr: Removed break-imap-search parameter
+ Added more events for the new statistics, see
  https://doc.dovecot.org/admin_manual/list_of_events/
+ mail-lua: Add IMAP metadata accessors, see
  https://doc.dovecot.org/admin_manual/lua/
+ Add event exporters that allow exporting raw events to log files and
  external systems, see
  https://doc.dovecot.org/configuration_manual/event_export/
+ SNIPPET is now PREVIEW and size has been increased to 200 characters.
+ Add body option to fts_enforced. This triggers building the FTS index
  only on body search, and an error using the FTS index fails the search
  rather than reading through all the mails.
- Submission/LMTP: Fixed crash when domain argument is invalid in a
  second EHLO/LHLO command.
- Copying/moving mails using Maildir format loses IMAP keywords in the
  destination if the mail also has no system flags.
- mail_attachment_detection_options=add-flags-on-save caused email body
  to be unnecessarily opened when FETCHing mail headers that were
  already cached.
- mail attachment detection keywords not saved with maildir.
- dovecot.index.cache may have grown excessively large in some
  situations. This happened especially when using autoexpunging with
  lazy_expunge folders. Also with mdbox format in general the cache
  file wasn't recreated as often as it should have.
- Autoexpunged mails weren't immediately deleted from the disk. Instead,
  the deletion from disk happened the next time the folder was opened.
  This could have caused unnecessary delays if the opening was done by
  an interactive IMAP session.
- Dovecot's TCP connections sometimes add extra 40ms latency due to not
  enabling TCP_NODELAY. HTTP and SMTP/LMTP connections weren't
  affected, but everything else was. This delay wasn't always visible -
  only in some situations with some message/packet sizes.
- imapc: Fix various crash conditions
- Dovecot builds were not always reproducible.
- login-proxy: With shutdown_clients=no after config reload the
  existing connections could no longer be listed or kicked with doveadm.
- "doveadm proxy kick" with -f parameter caused a crash in some
  situations.
- Auth policy can cause segmentation fault crash during auth process
  shutdown if all auth requests have not been finished.
- Fix various minor bugs leading into incorrect behaviour in mailbox
  list index handling. These rarely caused noticeable problems.
- LDAP auth: Iteration accesses freed memory, possibly crashing
  auth-worker
- local_name { .. } filter in dovecot.conf does not correctly support
  multiple names and wildcards were matched incorrectly.
- replicator: dsync assert-crashes if it can't connect to remote TCP
  server.
- config: Memory leak in config process when ssl_dh setting wasn't
  set and there was no ssl-parameters.dat file.
  This caused config process to die once in a while
  with "out of memory".

---
Aki Tuomi
Open-Xchange oy


Re: Dovecot release v2.3.7

2019-07-13 Thread Tom Sommer via dovecot

LMTP is broken on director:

Jul 13 13:42:41 lmtp(34824): Panic: file smtp-params.c: line 685 
(smtp_params_mail_add_body_to_event): assertion failed: ((caps & 
SMTP_CAPABILITY_8BITMIME) != 0)
Jul 13 13:42:41 lmtp(34824): Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(+0xdf06a) [0x7f561203806a] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xdf0b1) [0x7f56120380b1] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x40161) [0x7f5611f99161] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x44fc4) [0x7f5611f9dfc4] -> 
/usr/lib64/dovecot/libdovecot.so.0(smtp_client_transaction_start+0x78) 
[0x7f5611fa7a58] -> dovecot/lmtp [172.17.165.6 MAIL 
FROM](lmtp_proxy_rcpt+0xa37) [0x558eaf181d67] -> dovecot/lmtp 
[172.17.165.6 MAIL FROM](client_default_cmd_rcpt+0x9e) [0x558eaf17eebe] 
-> /usr/lib64/dovecot/libdovecot.so.0(smtp_server_cmd_rcpt+0x222) 
[0x7f5611fafd12] -> 
/usr/lib64/dovecot/libdovecot.so.0(smtp_server_command_new+0x150) 
[0x7f5611fb58a0] -> /usr/lib64/dovecot/libdovecot.so.0(+0x606c8) 
[0x7f5611fb96c8] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) 
[0x7f561204f275] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x95) 
[0x7f561204f3a5] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f561204f588] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7f5611fc68f3] -> dovecot/lmtp [172.17.165.6 MAIL FROM](main+0x289) 
[0x558eaf17dc19] -> /lib64/libc.so.6(__libc_start_main+0x100) 
[0x7f5611be3d20] -> dovecot/lmtp [172.17.165.6 MAIL FROM](+0x5849) 
[0x558eaf17d849]



---
Tom

On 2019-07-12 14:29, Aki Tuomi via dovecot wrote:

Hi!

We are pleased to release Dovecot release v2.3.7.

Tarball is available at

https://dovecot.org/releases/2.3/dovecot-2.3.7.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.7.tar.gz.sig

Binary packages are available at https://repo.dovecot.org/

Changes
---

* fts-solr: Removed break-imap-search parameter
+ Added more events for the new statistics, see
  https://doc.dovecot.org/admin_manual/list_of_events/
+ mail-lua: Add IMAP metadata accessors, see
  https://doc.dovecot.org/admin_manual/lua/
+ Add event exporters that allow exporting raw events to log files and
  external systems, see
  https://doc.dovecot.org/configuration_manual/event_export/
+ SNIPPET is now PREVIEW and size has been increased to 200 characters.
+ Add body option to fts_enforced. This triggers building FTS index only
  on body search, and an error using FTS index fails the search rather
  than reads through all the mails.
- Submission/LMTP: Fixed crash when domain argument is invalid in a
  second EHLO/LHLO command.
- Copying/moving mails using Maildir format loses IMAP keywords in the
  destination if the mail also has no system flags.
- mail_attachment_detection_options=add-flags-on-save caused email body
  to be unnecessarily opened when FETCHing mail headers that were
  already cached.
- mail attachment detection keywords not saved with maildir.
- dovecot.index.cache may have grown excessively large in some
  situations. This happened especially when using autoexpunging with
  lazy_expunge folders. Also with mdbox format in general the cache file
  wasn't recreated as often as it should have.
- Autoexpunged mails weren't immediately deleted from the disk. Instead,
  the deletion from disk happened the next time the folder was opened.
  This could have caused unnecessary delays if the opening was done by
  an interactive IMAP session.
- Dovecot's TCP connections sometimes add extra 40ms latency due to not
  enabling TCP_NODELAY. HTTP and SMTP/LMTP connections weren't
  affected, but everything else was. This delay wasn't always visible -
  only in some situations with some message/packet sizes.
- imapc: Fix various crash conditions
- Dovecot builds were not always reproducible.
- login-proxy: With shutdown_clients=no after config reload the
  existing connections could no longer be listed or kicked with doveadm.
- "doveadm proxy kick" with -f parameter caused a crash in some
  situations.
- Auth policy can cause segmentation fault crash during auth process
  shutdown if all auth requests have not been finished.
- Fix various minor bugs leading into incorrect behaviour in mailbox
  list index handling. These rarely caused noticeable problems.
- LDAP auth: Iteration accesses freed memory, possibly crashing
  auth-worker
- local_name { .. } filter in dovecot.conf does not correctly support
  multiple names and wildcards were matched incorrectly.
- replicator: dsync assert-crashes if it can't connect to remote TCP
  server.
- config: Memory leak in config process when ssl_dh setting wasn't
  set and there was no ssl-parameters.dat file.
  This caused config process to die once in a while
  with "out of memory".

---
Aki Tuomi
Open-Xchange oy


Re: Orphaned processes after doveadm log reopen

2019-07-09 Thread Tom Sommer via dovecot



On 2019-07-09 07:37, Aki Tuomi wrote:

On 8.7.2019 14.54, Tom Sommer via dovecot wrote:


On 2019-07-08 13:36, Tom Sommer via dovecot wrote:
I rotate logs every night on my Director, running "doveadm log reopen"


It happens on "/etc/init.d/dovecot restart", not "doveadm log reopen"
- sorry

So it happens when you run killproc on dovecot



Do you happen to have shutdown_clients=no ?


Nope.

shutdown_clients = yes


Re: Orphaned processes after doveadm log reopen

2019-07-08 Thread Tom Sommer via dovecot



On 2019-07-08 13:36, Tom Sommer via dovecot wrote:

I rotate logs every night on my Director, running "doveadm log reopen"


It happens on "/etc/init.d/dovecot restart", not "doveadm log reopen" - 
sorry


So it happens when you run killproc on dovecot

---
Tom


Re: Orphaned processes after doveadm log reopen

2019-07-08 Thread Tom Sommer via dovecot



On 2019-07-08 13:36, Tom Sommer via dovecot wrote:

I rotate logs every night on my Director, running "doveadm log reopen"

I've noticed a problem with processes not closing correctly, they can
be open for days and cause corrupt index files because they are
connected to different backends (so it writes to the wrong server when
the process disconnects)

23518 ?  S  140:53 dovecot/anvil [4 connections]
23519 ?  S  444:45 dovecot/log
23520 ?  S  497:19 dovecot/imap-login [8 pre-login + 27 proxies]
23531 ?  S    0:27 dovecot/config
26201 ?  S  551:53 dovecot/imap-login [7 pre-login + 33 proxies]
22126 ?  S  960:15 dovecot/pop3-login [28 pre-login + 396 proxies]
59515 ?  S  218:32 dovecot/pop3-login [44 pre-login + 468 proxies]


Is there any way to force these orphaned processes to terminate after
a certain time?


I ran "strace -f -p" on 59515 which caused the process to wake up and 
terminate? Very strange.


I can send the dump from strace if you want, but since it contains 
addresses I would prefer it to be done off-list


Orphaned processes after doveadm log reopen

2019-07-08 Thread Tom Sommer via dovecot

I rotate logs every night on my Director, running "doveadm log reopen"
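
As an aside, this kind of nightly rotation is commonly driven by logrotate, with "doveadm log reopen" as the post-rotation hook. A minimal sketch — the log paths and retention values are assumptions, not from this thread:

```conf
# /etc/logrotate.d/dovecot (sketch; adjust paths to your log_path settings)
/var/log/dovecot.log /var/log/dovecot-info.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    postrotate
        /usr/bin/doveadm log reopen
    endscript
}
```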

I've noticed a problem with processes not closing correctly, they can be 
open for days and cause corrupt index files because they are connected 
to different backends (so it writes to the wrong server when the process 
disconnects)


23518 ?  S  140:53 dovecot/anvil [4 connections]
23519 ?  S  444:45 dovecot/log
23520 ?  S  497:19 dovecot/imap-login [8 pre-login + 27 proxies]
23531 ?  S    0:27 dovecot/config
26201 ?  S  551:53 dovecot/imap-login [7 pre-login + 33 proxies]
22126 ?  S  960:15 dovecot/pop3-login [28 pre-login + 396 proxies]
59515 ?  S  218:32 dovecot/pop3-login [44 pre-login + 468 proxies]


Is there any way to force these orphaned processes to terminate after a 
certain time?


--
Tom


Re: Frequent Out of Memory for service(config)

2019-05-19 Thread Tom Sommer via dovecot




On 2019-05-13 21:56, Root Kev via dovecot wrote:


Hello Group,

We have dovecot deployed as solely a Pop3 service that is used by our 
applications to pass mail from one application to another internally.  
We have roughly 4 applications that connect to the Pop3 service every 2 
seconds, to check for new messages and pop them for processing if they 
are present.  Depending on the site, we have between 1024-2048MB of 
memory set for default_vsz_limit.  In all systems we see the Out of 
memory alert several times a day. We previously did not see this at all 
when running on CentOS6, with less memory.


I see this too on servers running quota-service (dunno if it is related).


---
Tom


Re: [Dovecot] Dovecot LDA/LMTP vs postfix virtual delivery agent and the x-original-to header

2019-04-19 Thread Tom Sommer via dovecot



On 2019-04-19 15:26, Aki Tuomi via dovecot wrote:

Unfortunately we have quite long list of things to do, so sometimes 
even trivial things can take a long time.


Not to hijack the thread, but perhaps you could elaborate on what has 
changed within Dovecot?


Timo seems to be put in the background, releases are less frequent and 
with less changes/additions. The days of "Oh, great idea - I added that, 
see this commit" seem gone.


Is this because OX acquired Dovecot, so priorities have changed? Or what 
is going on?


Mostly just curious.

--
Tom


Re: quota-service with Director - A workaround

2019-03-23 Thread Tom Sommer via dovecot

On 2019-03-21 10:28, Sami Ketola via dovecot wrote:
On 20 Mar 2019, at 18.17, Tom Sommer via dovecot  
wrote:



On 2019-03-20 16:40, Sami Ketola via dovecot wrote:

On 20 Mar 2019, at 17.13, Tom Sommer via dovecot 
 wrote:
I realize quota-service on Director is not supported, which is a 
shame.
As a workaround I'm thinking of setting up quota-service on one of 
my backend nodes, and have all my Postfix services ask this one node 
for the quota status.
This sort of defeats the purpose of the Director (having per-user 
assigned hot nodes), since now this one node running the 
quota-service will access all mailboxes to check the status of all 
inbound mail.

Is this a problem though? In terms of NFS locking etc. etc.?
Might be. Wouldn't it be just easier to use the overquota-flag 
available since 2.2.16 and set up overquota flag in LDAP or userdb of 
choice and configure postfix to check that flag?


I don't really want to involve LDAP in my setup :)


So use what ever your shared userdb service is as you must have one if
you are using multiple backends and directors.


Does it work with mysql userdb? Is there an example to look at anywhere?
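
A rough sketch of how the overquota-flag approach could look with a MySQL userdb. The `quota_over` column, file names, and the Postfix map are hypothetical, and the setting names come from the 2.2-era quota documentation, so verify them against your version:

```conf
# dovecot.conf (sketch)
plugin {
  # the userdb extra field "quota_over_flag" is compared against this value
  quota_over_flag_value = TRUE
  # optionally run a script when Dovecot notices the flag is out of date
  quota_over_script = quota-mismatch %u
}

# dovecot-sql.conf.ext (sketch): return the flag as a userdb extra field
# user_query = SELECT home, uid, gid, quota_over AS quota_over_flag \
#   FROM users WHERE username = '%u'

# Postfix (sketch): defer at RCPT time based on the same column
# main.cf:
#   smtpd_recipient_restrictions = ...,
#     check_recipient_access mysql:/etc/postfix/mysql-overquota.cf
# mysql-overquota.cf:
#   query = SELECT 'DEFER_IF_PERMIT Mailbox is full' FROM users
#           WHERE username = '%s' AND quota_over = 'TRUE'
```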


Re: quota-service with Director - A workaround

2019-03-20 Thread Tom Sommer via dovecot



On 2019-03-20 16:40, Sami Ketola via dovecot wrote:

On 20 Mar 2019, at 17.13, Tom Sommer via dovecot  
wrote:


I realize quota-service on Director is not supported, which is a 
shame.


As a workaround I'm thinking of setting up quota-service on one of my 
backend nodes, and have all my Postfix services ask this one node for 
the quota status.


This sort of defeats the purpose of the Director (having per-user 
assigned hot nodes), since now this one node running the quota-service 
will access all mailboxes to check the status of all inbound mail.


Is this a problem though? In terms of NFS locking etc. etc.?


Might be. Wouldn't it be just easier to use the overquota-flag 
available since 2.2.16 and set up overquota flag in LDAP or userdb of 
choice and configure postfix to check that flag?


I don't really want to involve LDAP in my setup :)


quota-service with Director - A workaround

2019-03-20 Thread Tom Sommer via dovecot

I realize quota-service on Director is not supported, which is a shame.

As a workaround I'm thinking of setting up quota-service on one of my 
backend nodes, and have all my Postfix services ask this one node for 
the quota status.
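
For reference, the backend-side piece being discussed is the standard quota-status service; a sketch along the lines of the Dovecot documentation (the host name and port are arbitrary placeholders):

```conf
# dovecot.conf on the chosen backend node (sketch)
service quota-status {
  executable = quota-status -p postfix
  inet_listener {
    port = 12340
  }
  client_limit = 1
}

# Postfix main.cf on the MX hosts (sketch):
#   smtpd_recipient_restrictions = ...,
#     check_policy_service inet:backend1.example.com:12340
```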


This sort of defeats the purpose of the Director (having per-user 
assigned hot nodes), since now this one node running the quota-service 
will access all mailboxes to check the status of all inbound mail.


Is this a problem though? In terms of NFS locking etc. etc.?

--
Tom


Re: [ext] Dovecot Wiki: Please disable edit on double click

2019-03-20 Thread Tom Sommer via dovecot

On 2019-03-20 12:02, Aki Tuomi via dovecot wrote:

On 20.3.2019 12.59, Ralf Hildebrandt via dovecot wrote:

* Michael Goth via dovecot :


could you maybe disable the 'edit on doubleclick' feature on
wiki2.dovecot.org?

Everytime I try to select a word by double clicking on it, I end up in
editing mode. It's just a minor thing, but maybe I'm not the only one who's
annoyed by this ;)


+1

Amen to that. I never bothered to ask, but it annoys the shit out of 
me!


+10


It should be disabled now, unless you log into the wiki. Then you need
to disable it from preferences.


Happy day!


Director 2.3.5: openssl_iostream_handle_error

2019-03-07 Thread Tom Sommer via dovecot

Thanks for 2.3.5 - it fixed a lot of stuff :)

I see this Panic in my Director logs now, randomly and rarely:

Mar 07 11:01:29 pop3-login: Panic: file iostream-openssl.c: line 586 
(openssl_iostream_handle_error): assertion failed: (errno != 0)
Mar 07 11:01:29 auth: Warning: auth client 13625 disconnected with 19 
pending requests: EOF
Mar 07 11:01:29 pop3-login: Fatal: master: service(pop3-login): child 
13625 killed with signal 6 (core not dumped - 
https://dovecot.org/bugreport.html#coredumps - set 
/proc/sys/fs/suid_dumpable to 2)


Hoping the above is enough to debug

--
Tom


Re: Request: option to hide user IP/HELO content from mail sent via submissiond

2018-10-30 Thread Tom Sommer



On 2018-10-19 18:55, Lee Maguire wrote:

For reasons of user privacy and security I usually configure
submission servers to not include accurate IP address and HELO
information of authenticated users. (Usually replacing it with a
private-use domain / IPv6 address.)


https://github.com/dovecot/core/pull/74


Re: Calendar function ?

2018-10-22 Thread Tom Sommer

On 2018-10-22 19:58, Dave Stevens wrote:

On Mon, 22 Oct 2018 19:48:17 +0200
Tom Sommer  wrote:


On 2018-10-22 12:56, María Arrea wrote:

> We use sabredav for caldav+cardav and roundcube+agendav for nice
> web ui :)


Is Sabre still maintained?


http://sabre.io/ says 2018 is most recent version


Yea, but: http://sabre.io/blog/2017/development-on-hold/


Re: Calendar function ?

2018-10-22 Thread Tom Sommer



On 2018-10-22 12:56, María Arrea wrote:

We use sabredav for caldav+cardav and roundcube+agendav for nice web ui 
:)



Is Sabre still maintained?


Re: VOLATILEDIR not really used?

2018-10-08 Thread Tom Sommer

On 2018-10-05 21:56, Timo Sirainen wrote:


Looks like it works. But could you:


1) Add it to github as merge request


Done, for the first part of the code.

2) MAILBOX_LIST_PATH_TYPE_VOLATILEDIR might be useful, but it's kind of 
a separate change of its own. To simplify, in Maildir code you could 
have box->list->set->volatile_dir != NULL ? 
box->list->set->volatile_dir : control_dir. That way it's falling back 
to control_dir like the previous code when VOLATILEDIR wasn't set.


I just copied the code from above:
if (mailbox_get_path_to(box, MAILBOX_LIST_PATH_TYPE_CONTROL,
			&control_dir) <= 0)

If I do as you suggest, I get this?:

maildir-uidlist.c:277:47: error: dereferencing pointer to incomplete 
type
  uidlist->path_dotlock = i_strconcat(box->list->set->volatile_dir != 
NULL ? box->list->set->volatile_dir : control_dir, 
"/"MAILDIR_UIDLIST_NAME, NULL);


That's the reason I expanded mailbox_get_path_to(), since that angle 
worked


Thanks for the feedback :)

--
Tom


Re: VOLATILEDIR not really used?

2018-10-05 Thread Tom Sommer

On 2018-10-05 11:50, Tom Sommer wrote:

On 2018-10-05 11:35, Timo Sirainen wrote:

On 4 Oct 2018, at 17.13, Tom Sommer  wrote:



On 2018-10-04 15:55, Timo Sirainen wrote:

On 4 Oct 2018, at 14.39, Tom Sommer  wrote:


Is this correct, and if so are there any plans to move dotlocks 
etc. to this directory?
What dotlocks? I guess mbox and Maildir have some locks that could be
moved there, but a better performance optimization for those
installations would be to switch to sdbox/mdbox. Other than that, I
don't remember there being anything important that could be moved
there.


Maildir locks yes, switching format is not a procedure I feel 
comfortable with :)


Would it be possible to move the maildir/mbox locks to VOLATILEDIR?


Sure it would be possible, but it's such a low priority for us that I
doubt we'll be implementing it.


Might try and look into it myself.


Something like this? See attached.

diff --git a/src/lib-storage/index/maildir/maildir-uidlist.c b/src/lib-storage/index/maildir/maildir-uidlist.c
index eb9d972..4ae2be0 100644
--- a/src/lib-storage/index/maildir/maildir-uidlist.c
+++ b/src/lib-storage/index/maildir/maildir-uidlist.c
@@ -64,6 +64,7 @@ HASH_TABLE_DEFINE_TYPE(path_to_maildir_uidlist_rec,
 struct maildir_uidlist {
 	struct mailbox *box;
 	char *path;
+	char *path_dotlock;
 	struct maildir_index_header *mhdr;
 
 	int fd;
@@ -138,7 +139,7 @@ static int maildir_uidlist_lock_timeout(struct maildir_uidlist *uidlist,
 {
 	struct mailbox *box = uidlist->box;
 	const struct mailbox_permissions *perm = mailbox_get_permissions(box);
-	const char *path = uidlist->path;
+	const char *path_dotlock = uidlist->path_dotlock;
 	mode_t old_mask;
 	const enum dotlock_create_flags dotlock_flags =
 		nonblock ? DOTLOCK_CREATE_FLAG_NONBLOCK : 0;
@@ -157,7 +158,7 @@ static int maildir_uidlist_lock_timeout(struct maildir_uidlist *uidlist,
 
 	for (i = 0;; i++) {
 		old_mask = umask(0777 & ~perm->file_create_mode);
-	ret = file_dotlock_create(&uidlist->dotlock_settings, path,
+	ret = file_dotlock_create(&uidlist->dotlock_settings, path_dotlock,
 	  dotlock_flags, &uidlist->dotlock);
 		umask(old_mask);
 		if (ret > 0)
@@ -172,11 +173,11 @@ static int maildir_uidlist_lock_timeout(struct maildir_uidlist *uidlist,
 		if (errno != ENOENT || i == MAILDIR_DELETE_RETRY_COUNT) {
 			if (errno == EACCES) {
 mailbox_set_critical(box, "%s",
-	eacces_error_get_creating("file_dotlock_create", path));
+	eacces_error_get_creating("file_dotlock_create", path_dotlock));
 			} else {
 mailbox_set_critical(box,
 	"file_dotlock_create(%s) failed: %m",
-	path);
+	path_dotlock);
 			}
 			return -1;
 		}
@@ -259,16 +260,21 @@ struct maildir_uidlist *maildir_uidlist_init(struct maildir_mailbox *mbox)
 	struct mailbox *box = &mbox->box;
 	struct maildir_uidlist *uidlist;
 	const char *control_dir;
+	const char *volatile_dir;
 
 	if (mailbox_get_path_to(box, MAILBOX_LIST_PATH_TYPE_CONTROL,
 				&control_dir) <= 0)
 		i_unreached();
+	if (mailbox_get_path_to(box, MAILBOX_LIST_PATH_TYPE_VOLATILEDIR,
+				&volatile_dir) <= 0)
+		i_unreached();
 
 	uidlist = i_new(struct maildir_uidlist, 1);
 	uidlist->box = box;
 	uidlist->mhdr = &mbox->maildir_hdr;
 	uidlist->fd = -1;
 	uidlist->path = i_strconcat(control_dir, "/"MAILDIR_UIDLIST_NAME, NULL);
+	uidlist->path_dotlock = i_strconcat(volatile_dir, "/"MAILDIR_UIDLIST_NAME, NULL);
 	i_array_init(&uidlist->records, 128);
 	hash_table_create(&uidlist->files, default_pool, 4096,
 			  maildir_filename_base_hash,
@@ -333,6 +339,7 @@ void maildir_uidlist_deinit(struct maildir_uidlist **_uidlist)
 	array_free(&uidlist->records);
 	str_free(&uidlist->hdr_extensions);
 	i_free(uidlist->path);
+	i_free(uidlist->path_dotlock);
 	i_free(uidlist);
 }
 
diff --git a/src/lib-storage/mailbox-list.c b/src/lib-storage/mailbox-list.c
index e409a50..c7b0ce9 100644
--- a/src/lib-storage/mailbox-list.c
+++ b/src/lib-storage/mailbox-list.c
@@ -1454,6 +1454,10 @@ bool mailbox_list_set_get_root_path(const struct mailbox_list_settings *set,
 		path = set->control_dir != NULL ?
 			set->control_dir : set->root_dir;
 		break;
+	case MAILBOX_LIST_PATH_TYPE_VOLATILEDIR:
+		path = set->volatile_dir != NULL ?
+			set->volatile_dir : set->root_dir;
+		break;
 	case MAILBOX_LIST_PATH_TYPE_LIST_INDEX:
 		if (set->list_index_dir != NULL) {
 			if (set->list_index_dir[0] == '/') {
diff --git a/src/lib-storage/mailbox-list.h b/src/lib-storage/mailbox-list.h
index ef476ae..f46359d 100644
--- a/src/lib-storage/mailbox-list.h
+++ b/src/lib-storage/mailbox-list.h
@@ -85,6 +85,8 @@ enum mailbox_list_path_type {
 	MAILBOX_LIST_PATH_TYPE_ALT_MAILBOX,
 	/* Return control directory */
 	MAILBOX_LIST_PATH_TYPE_CONTROL,
+	/* Return volatile directory */
+	MAILBOX_LIST_PATH_TYPE_VOLATILEDIR,
 	/* Return index directory ("" for in-memory) */
 	MAILBOX_LIST_PATH_TYPE_INDEX,
 	/* Return the private index directory (NULL if none) */


Re: VOLATILEDIR not really used?

2018-10-05 Thread Tom Sommer

On 2018-10-05 11:35, Timo Sirainen wrote:

On 4 Oct 2018, at 17.13, Tom Sommer  wrote:



On 2018-10-04 15:55, Timo Sirainen wrote:

On 4 Oct 2018, at 14.39, Tom Sommer  wrote:


Is this correct, and if so are there any plans to move dotlocks etc. 
to this directory?

What dotlocks? I guess mbox and Maildir have some locks that could be
moved there, but a better performance optimization for those
installations would be to switch to sdbox/mdbox. Other than that, I
don't remember there being anything important that could be moved
there.


Maildir locks yes, switching format is not a procedure I feel 
comfortable with :)


Would it be possible to move the maildir/mbox locks to VOLATILEDIR?


Sure it would be possible, but it's such a low priority for us that I
doubt we'll be implementing it.


Hrmm okay, fair enough. Seems like a small task though.

Might try and look into it myself.


Re: VOLATILEDIR not really used?

2018-10-04 Thread Tom Sommer



On 2018-10-04 15:55, Timo Sirainen wrote:

On 4 Oct 2018, at 14.39, Tom Sommer  wrote:


Is this correct, and if so are there any plans to move dotlocks etc. 
to this directory?



What dotlocks? I guess mbox and Maildir have some locks that could be
moved there, but a better performance optimization for those
installations would be to switch to sdbox/mdbox. Other than that, I
don't remember there being anything important that could be moved
there.


Maildir locks yes, switching format is not a procedure I feel 
comfortable with :)


Would it be possible to move the maildir/mbox locks to VOLATILEDIR?

--
Tom Sommer


VOLATILEDIR not really used?

2018-10-04 Thread Tom Sommer

Hi

According to the docs, setting VOLATILEDIR will improve I/O performance 
when using NFS - but as far as I can see, only vsize lock-files are put 
here, and little else?
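
For reference, VOLATILEDIR is enabled as part of mail_location; a sketch in the style of the documentation, where the tmpfs path is an arbitrary example:

```conf
# dovecot.conf (sketch) -- point volatile files at fast local storage
mail_location = maildir:~/Maildir:VOLATILEDIR=/tmp/dovecot-volatile/%2.256Nu/%u
```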


Is this correct, and if so are there any plans to move dotlocks etc. to 
this directory?


Thanks.
--
Tom


2.3.3: Panic: file ostream-zlib.c: line 37 (o_stream_zlib_close): assertion failed

2018-10-02 Thread Tom Sommer

I see this in my logs after 2.3.3:

using zlib plugin, ofc.

Oct 02 10:01:39 imap(u...@example.com)<50643>: Panic: 
file ostream-zlib.c: line 37 (o_stream_zlib_close): assertion failed: 
(zstream->ostream.finished || zstream->ostream.ostream.stream_errno != 0 
|| zstream->ostream.error_handling_disabled)
Oct 02 10:01:39 imap(u...@example.com)<50643>: Error: 
Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0xce56a) 
[0x7f442487556a] -> /usr/lib64/dovecot/libdovecot.so.0(+0xce5b1) 
[0x7f44248755b1] -> /usr/lib64/dovecot/libdovecot.so.0(+0x3d941) 
[0x7f44247e4941] -> /usr/lib64/dovecot/lib20_zlib_plugin.so(+0x5cdf) 
[0x7f44233c4cdf] -> /usr/lib64/dovecot/libdovecot.so.0(+0xf4b46) 
[0x7f442489bb46] -> 
/usr/lib64/dovecot/libdovecot.so.0(o_stream_destroy+0x13) 
[0x7f442489be83] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(maildir_save_finish+0x173) 
[0x7f4424b93973] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_save_cancel+0x36) 
[0x7f4424b68696] -> dovecot/imap [u...@example.com X.X.X.X 
APPEND](+0xede9) [0x55a9d7e86de9] -> dovecot/imap [u...@example.com 
X.X.X.X APPEND](command_exec+0x65) [0x55a9d7e956e5] -> dovecot/imap 
[u...@example.com X.X.X.X APPEND](+0xf9e6) [0x55a9d7e879e6] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) 
[0x7f442488c275] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xbf) 
[0x7f442488e13f] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x55) 
[0x7f442488c365] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f442488c588] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7f4424808053] -> dovecot/imap [u...@example.com X.X.X.X 
APPEND](main+0x32d) [0x55a9d7ea20dd] -> 
/lib64/libc.so.6(__libc_start_main+0x100) [0x7f4424431d20] -> 
dovecot/imap [u...@example.com X.X.X.X APPEND](+0xe419) [0x55a9d7e86419]


No clue how to reproduce


--
Tom


Re: Adding namespace alias_for causes index resync?

2018-09-12 Thread Tom Sommer

On 2018-09-12 16:54, Tom Sommer wrote:

I just added a new namespace-alias with alias_for.

Apparently this causes all mailbox indexes to be resynced? Is this
intentional and/or is there some way to avoid this? My NFS storage
pretty much kills itself when hundreds of thousands of users needs to
resync indexes :)


Also when removing the alias_for namespace, I have to run a "doveadm 
force-resync" on the account?


I thought alias_for would be non-destructive to current users?

---
Tom


Adding namespace alias_for causes index resync?

2018-09-12 Thread Tom Sommer

I just added a new namespace-alias with alias_for.

Apparently this causes all mailbox indexes to be resynced? Is this 
intentional and/or is there some way to avoid this? My NFS storage 
pretty much kills itself when hundreds of thousands of users need to 
resync indexes :)


Thanks

--
Tom


Disconnects on Director

2018-09-10 Thread Tom Sommer

Hi

I am seeing a lot of clients time out on my Director. I'm wondering if 
this is "normal" or there is some opportunity to tweak something?


Log line:
imap-login: Info: proxy(u...@example.com): disconnecting 1.2.3.4 
(Disconnected by client: read(size=433) failed: Connection timed out(37s 
idle, in=939, out=3083)): user=, method=CRAM-MD5, 
rip=1.2.3.4, lip=172.0.0.0, TLS: read(size=433) failed: Connection timed 
out, session=


Any hints? Nothing is logged by the kernel.

2.3.2.1

--
Tom


Re: Is the Doveadm HTTP API considered stable for production use?

2018-08-23 Thread Tom Sommer

On 2018-08-22 11:55, James Beck wrote:

Hi,

I'm running 2.2.34 in production (installed from Debian stretch
backports) and want to rework some scripts. Can the HTTP API be
considered stable in 2.2.34 please?


I use it in 2.3 and it works just fine :)

I might have used it in 2.2 as well, I'm not 100% sure though.
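
For anyone finding this thread, enabling the HTTP API looks roughly like this; the port and password are placeholders, and the example command is illustrative, so check the doveadm HTTP API docs for your version:

```conf
# dovecot.conf (sketch)
service doveadm {
  inet_listener http {
    port = 8080
  }
}
# HTTP Basic auth: user "doveadm", password from this setting
doveadm_password = secret

# Example call (JSON body is [["command", {parameters}, "tag"]]):
#   curl -u doveadm:secret -H 'Content-Type: application/json' \
#     -d '[["reload", {}, "tag1"]]' http://127.0.0.1:8080/doveadm/v1
```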

--
Tom


allow_nets based on RBL

2018-08-23 Thread Tom Sommer
This was brought up in 2014, and left without conclusion, so I thought 
it would be time to bump it :)


I would love a way to do allow_nets based on an RBL check, could this be 
added to the feature-list?


https://wiki2.dovecot.org/PasswordDatabase/ExtraFields/AllowNets

Thanks
--
Tom


imap-login: Error: BUG: Authentication server sent unknown id

2018-08-17 Thread Tom Sommer

I randomly get these errors on my Director

Aug 17 10:52:37 imap-login: Error: BUG: Authentication server sent 
unknown id 98448
Aug 17 10:52:37 imap-login: Warning: Auth connection closed with 2 
pending requests (max 2 secs, pid=27036, Received broken input: FAIL 
98448   user=u...@example.com)
Aug 17 10:52:37 auth: Warning: auth client 27036 disconnected with 1 
pending requests: EOF


2.3.2.1

--
Tom


lmtp 2.3.2.1 segfault with backtrace

2018-08-14 Thread Tom Sommer

lmtp on Director crash with 2.3.2.1

# gdb /usr/libexec/dovecot/lmtp /var/core/60174
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /usr/libexec/dovecot/lmtp...Reading symbols from /usr/lib/debug/usr/libexec/dovecot/lmtp.debug...done.
done.
[New Thread 60174]
Reading symbols from /usr/lib64/dovecot/libdovecot-lda.so.0...Reading symbols from /usr/lib/debug/usr/lib64/dovecot/libdovecot-lda.so.0.0.0.debug...done.
done.
Loaded symbols for /usr/lib64/dovecot/libdovecot-lda.so.0
Reading symbols from /usr/lib64/dovecot/libdovecot-storage.so.0...Reading symbols from /usr/lib/debug/usr/lib64/dovecot/libdovecot-storage.so.0.0.0.debug...done.
done.
Loaded symbols for /usr/lib64/dovecot/libdovecot-storage.so.0
Reading symbols from /usr/lib64/dovecot/libdovecot.so.0...Reading symbols from /usr/lib/debug/usr/lib64/dovecot/libdovecot.so.0.0.0.debug...done.
done.
Loaded symbols for /usr/lib64/dovecot/libdovecot.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /usr/lib64/dovecot/libssl_iostream_openssl.so...Reading symbols from /usr/lib/debug/usr/lib64/dovecot/libssl_iostream_openssl.so.debug...done.
done.
Loaded symbols for /usr/lib64/dovecot/libssl_iostream_openssl.so
Reading symbols from /usr/lib64/libssl.so.10...(no debugging symbols found)...done.
Loaded symbols for /usr/lib64/libssl.so.10
Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols found)...done.
Loaded symbols for /usr/lib64/libcrypto.so.10
Reading symbols from /lib64/libgssapi_krb5.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libgssapi_krb5.so.2
Reading symbols from /lib64/libkrb5.so.3...(no debugging symbols found)...done.
Loaded symbols for /lib64/libkrb5.so.3
Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libcom_err.so.2
Reading symbols from /lib64/libk5crypto.so.3...(no debugging symbols found)...done.
Loaded symbols for /lib64/libk5crypto.so.3
Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libkrb5support.so.0...(no debugging symbols found)...done.
Loaded symbols for /lib64/libkrb5support.so.0
Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libkeyutils.so.1
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/libselinux.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libselinux.so.1
Core was generated by `dovecot/lmtp'.
Program terminated with signal 11, Segmentation fault.
#0  smtp_client_command_set_replies (cmd=0x0, replies=1) at smtp-client-command.c:401
401 i_assert(cmd->replies_expected == 1 ||
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.212.el6.x86_64 keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64 libcom_err-1.41.12-24.el6.x86_64 libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-57.el6.x86_64 zlib-1.2.3-29.el6.x86_64

(gdb) bt full
#0  smtp_client_command_set_replies (cmd=0x0, replies=1) at smtp-client-command.c:401
__func__ = "smtp_client_command_set_replies"
#1  0x7f5d14f40f3f in smtp_client_transaction_data_cb (reply=0x7ffe5cd19650, trans=0x7f5d176ae2b8) at smtp-client-transaction.c:658
conn = 0x7f5d176ade80
rcpt = 0x7f5d176ae560
i = <optimized out>
count = 1
#2  0x7f5d14f3e941 in smtp_client_command_fail_reply (_cmd=<optimized out>, reply=0x7ffe5cd19650) at smtp-client-command.c:299
cmd = 0x7f5d17600d18
tmp_cmd = <optimized out>
conn = 0x7f5d176ade80
state = <optimized out>
callback = 0x7f5d14f40e80
#3  0x7f5d14f4113f in smtp_client_transaction_fail_reply (trans=0x7f5d176ae2b8, reply=0x7ffe5cd19650) at smtp-client-transaction.c:365

  

Re: Set X-Original-To based an ORCPT?

2018-08-07 Thread Tom Sommer

On 2018-08-07 13:07, Marco Giunta wrote:

Hi,
to get a 'Delivered-to' header based on ORCPT, I wrote a patch
(attached) to force Dovecot lmtp to advertise DSN after a LHLO command.
In this way, Postfix add an ORCPT to the RCTP command
(http://postfix.1071664.n5.nabble.com/pipe-flags-vs-lmtp-td11587.html#a11596).

Be carefully: in this way DSN notification is broken, but they were
broken in any case at the time I wrote the patch (read the entire post
linked above).

The first patch is for Dovecot 2.2.x: after apply, you cannot disable
the DSN advertisement. The other is for Dovecot 2.3.0: you can
enable/disable the advertisement using the new bool parameter
'lmtp_lhlo_dsn'.


Interesting that support is actually built in, but simply not 
advertised.


https://github.com/dovecot/core/commit/38f624b427aa8b6fad3765e6efd97c85a7f97a09

Maybe there is a plan?

--
Tom


Re: Set X-Original-To based an ORCPT?

2018-08-07 Thread Tom Sommer

On 2015-09-02 22:01, Peer Heinlein wrote:

Since

http://dovecot.org/pipermail/dovecot-cvs/2014-November/025241.html

Dovecot's LMTP does support ORCPT.

Is it possible to set X-Original-To-Header based on that ORCPT?


Any news or response on this? I too am in need of this header being 
passed and saved correctly.


Thanks.

--
Tom


Re: [Dovecot] quota-status not working in distributed environment

2018-07-27 Thread Tom Sommer

On 2013-06-16 21:46, Timo Sirainen wrote:

On 14.6.2013, at 9.15, Benoit Panizzon  wrote:

Is there a way to get quota-status to also use the proxy feature to 
request

the quota information from the correct machine?


Looks like this is a missing feature. I first thought quota-status
would go through doveadm protocol, which would make this work via
doveadm proxying, but looks like it doesn't. Perhaps it optionally
should.


Any news on this? Seems strange to lose this feature when running 
Director.


--
Tom
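For context, quota-status answers Postfix's policy delegation protocol: the client writes name=value lines terminated by a blank line and reads back an "action=" line. A toy sketch of both halves of that exchange (function names and the attribute subset are illustrative, not the full protocol):

```python
def policy_request(recipient, size=0):
    """Build a Postfix policy-delegation request: name=value lines
    ended by an empty line."""
    attrs = [("request", "smtpd_access_policy"),
             ("recipient", recipient),
             ("size", str(size))]
    return "".join("%s=%s\n" % kv for kv in attrs) + "\n"

def parse_policy_action(response):
    """Pull the action out of a policy response such as 'action=OK'."""
    for line in response.splitlines():
        if line.startswith("action="):
            return line[len("action="):]
    return None
```

The missing piece discussed in the thread is that the service answering this request would itself need to proxy the quota lookup to the user's backend, the way doveadm proxying does.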


Re: v2.3.2.1 released

2018-07-10 Thread Tom Sommer

On 2018-07-10 13:17, Timo Sirainen wrote:

On 10 Jul 2018, at 10.40, Tom Sommer  wrote:


On 2018-07-09 16:48, Timo Sirainen wrote:

https://dovecot.org/releases/2.3/dovecot-2.3.2.1.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.2.1.tar.gz.sig
v2.3.2 still had a few unexpected bugs:
- SSL/TLS servers may have crashed during client disconnection
- lmtp: With lmtp_rcpt_check_quota=yes mail deliveries may have
  sometimes assert-crashed.
- v2.3.2: "make check" may have crashed with 32bit systems


Thank you for the fast release, much appreciated


Are the imap-login crashes now gone?


Yes, all gone


Re: v2.3.2.1 released

2018-07-10 Thread Tom Sommer

On 2018-07-09 16:48, Timo Sirainen wrote:

https://dovecot.org/releases/2.3/dovecot-2.3.2.1.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.2.1.tar.gz.sig

v2.3.2 still had a few unexpected bugs:

 - SSL/TLS servers may have crashed during client disconnection
 - lmtp: With lmtp_rcpt_check_quota=yes mail deliveries may have
   sometimes assert-crashed.
 - v2.3.2: "make check" may have crashed with 32bit systems


Thank you for the fast release, much appreciated


Re: 2.3.2 director imap-login segfaults

2018-07-06 Thread Tom Sommer

On 2018-07-06 10:30, Timo Sirainen wrote:

On 5 Jul 2018, at 15.12, Tom Sommer  wrote:


My director has started segfaulting since upgrading to 2.3.2:

#0  0x7fa19b3ec6ed in i_stream_get_root_io () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#1  0x7fa19b3ec9b5 in i_stream_set_input_pending () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#2  0x7fa198d48b35 in openssl_iostream_bio_sync () from 
/usr/lib64/dovecot/libssl_iostream_openssl.so

No symbol table info available.
#3  0x7fa198d4920a in openssl_iostream_more () from 
/usr/lib64/dovecot/libssl_iostream_openssl.so

No symbol table info available.


Can you try if the attached patch fixes it?


I just switched from a source install to the CentOS repo, so I would
have to do a complete reinstall from source - but if you really need me
to, I can do that.


---
Tom


Re: 2.3.2 director imap-login segfaults

2018-07-05 Thread Tom Sommer
t;, delayed=) at login-proxy.c:529
proxy = 0x7f6983ff8990
client = 0x0
ipstr = <optimized out>
delay_ms = <optimized out>
__func__ = "login_proxy_free_full"
#8 0x7f697d8d1aca in login_proxy_free_delayed (side=<optimized out>, status=<optimized out>, proxy=0x0) at login-proxy.c:541
No locals.
#9 login_proxy_free_errstr (side=<optimized out>, status=<optimized out>, proxy=0x0) at login-proxy.c:129
proxy = 0x7f6983ff8990
reason = 0x7f697f6ae068
#10 login_proxy_finished (side=<optimized out>, status=<optimized out>, proxy=0x0) at login-proxy.c:619
errstr = <optimized out>
server_side = true
#11 0x7f697d62f6d5 in io_loop_call_io (io=0x7f6981019440) at
ioloop.c:674
ioloop = 0x7f697f6b6d00
t_id = 2
__func__ = "io_loop_call_io"
#12 0x7f697d6316af in io_loop_handler_run_internal (ioloop=<optimized out>) at ioloop-epoll.c:222
ctx = 0x7f697f6e5de0
events = <optimized out>
event = 0x7f6982f5f4e0
list = 0x7f6980b980e0
io = <optimized out>
tv = {tv_sec = 0, tv_usec = 369635}
events_count = <optimized out>
msecs = <optimized out>
ret = 1
i = <optimized out>
call = <optimized out>
__func__ = "io_loop_handler_run_internal"
#13 0x7f697d62f7c5 in io_loop_handler_run (ioloop=0x7f697f6b6d00) at
ioloop.c:726
__func__ = "io_loop_handler_run"
#14 0x7f697d62f9e8 in io_loop_run (ioloop=0x7f697f6b6d00) at
ioloop.c:699
__func__ = "io_loop_run"
#15 0x7f697d5ac963 in master_service_run (service=0x7f697f6b6b90,
callback=) at master-service.c:767
No locals.
#16 0x7f697d8d31b3 in login_binary_run (binary=<optimized out>, argc=2, argv=0x7f697f6b68a0) at main.c:549
set_pool = 0x7f697f6b7e80
login_socket = <optimized out>
c = <optimized out>
#17 0x7f697d1d6d1d in __libc_start_main () from /lib64/libc.so.6
No symbol table info available.
#18 0x7f697dd00599 in _start ()
No symbol table info available.

---
Tom 

On 2018-07-05 15:43, Aki Tuomi wrote:

> Can you install debuginfo and try again? 
> 
> --- 
> Aki Tuomi 
> Dovecot oy 
> 
>  Original message  
> From: Tom Sommer  
> Date: 05/07/2018 14:12 (GMT+01:00) 
> To: Dovecot  
> Subject: 2.3.2 director imap-login segfaults 
> My director has started segfaulting since upgrading to 2.3.2:
> 
> # gdb /usr/libexec/dovecot/imap-login ./core.9757
> GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
> Copyright (C) 2010 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later 
> <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show 
> copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-redhat-linux-gnu".
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>...
> Reading symbols from /usr/libexec/dovecot/imap-login...(no debugging 
> symbols found)...done.
> [New Thread 9757]
> Reading symbols from /usr/lib64/dovecot/libdovecot-login.so.0...(no 
> debugging symbols found)...done.
> Loaded symbols for /usr/lib64/dovecot/libdovecot-login.so.0
> Reading symbols from /usr/lib64/dovecot/libdovecot.so.0...(no debugging 
> symbols found)...done.
> Loaded symbols for /usr/lib64/dovecot/libdovecot.so.0
> Reading symbols from /lib64/libc.so.6...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libc.so.6
> Reading symbols from /usr/lib64/libssl.so.10...(no debugging symbols 
> found)...done.
> Loaded symbols for /usr/lib64/libssl.so.10
> Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols 
> found)...done.
> Loaded symbols for /usr/lib64/libcrypto.so.10
> Reading symbols from /lib64/librt.so.1...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/librt.so.1
> Reading symbols from /lib64/libdl.so.2...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libdl.so.2
> Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/ld-linux-x86-64.so.2
> Reading symbols from /lib64/libgssapi_krb5.so.2...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libgssapi_krb5.so.2
> Reading symbols from /lib64/libkrb5.so.3...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libkrb5.so.3
> Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libcom_err.so.2
> Reading symbols from /lib64/libk5crypto.so.3...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libk5crypto.so.3
> Reading symbols from /lib64/libz.so.1...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libz.so.1
> Reading symbols from /lib64/libpthread.so.0...(no debugging symbols 
> found)...done.
> [Thread debugging using libthread_db enabled]
> Loaded symbols for /lib64/libpthread.so.0
> Reading symbols from /lib64/libkrb5support.so.0...(no debugging symbols 
> found)...done.
> Loaded symbols for /lib64/libkrb5support.so.0
> Reading symbols 

2.3.2 director imap-login segfaults

2018-07-05 Thread Tom Sommer

My director has started segfaulting since upgrading to 2.3.2:

# gdb /usr/libexec/dovecot/imap-login ./core.9757
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/libexec/dovecot/imap-login...(no debugging 
symbols found)...done.

[New Thread 9757]
Reading symbols from /usr/lib64/dovecot/libdovecot-login.so.0...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/libdovecot-login.so.0
Reading symbols from /usr/lib64/dovecot/libdovecot.so.0...(no debugging 
symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/libdovecot.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libc.so.6
Reading symbols from /usr/lib64/libssl.so.10...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib64/libssl.so.10
Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib64/libcrypto.so.10
Reading symbols from /lib64/librt.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libgssapi_krb5.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libgssapi_krb5.so.2
Reading symbols from /lib64/libkrb5.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkrb5.so.3
Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libcom_err.so.2
Reading symbols from /lib64/libk5crypto.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libk5crypto.so.3
Reading symbols from /lib64/libz.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols 
found)...done.

[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libkrb5support.so.0...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkrb5support.so.0
Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkeyutils.so.1
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/libselinux.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libselinux.so.1
Reading symbols from /usr/lib64/dovecot/libssl_iostream_openssl.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/libssl_iostream_openssl.so
Core was generated by `dovecot/imap-login [26 pre-l'.
Program terminated with signal 11, Segmentation fault.
#0  0x7fa19b3ec6ed in i_stream_get_root_io () from 
/usr/lib64/dovecot/libdovecot.so.0
Missing separate debuginfos, use: debuginfo-install 
dovecot-2.3.2-3.x86_64

(gdb) bt full
#0  0x7fa19b3ec6ed in i_stream_get_root_io () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#1  0x7fa19b3ec9b5 in i_stream_set_input_pending () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#2  0x7fa198d48b35 in openssl_iostream_bio_sync () from 
/usr/lib64/dovecot/libssl_iostream_openssl.so

No symbol table info available.
#3  0x7fa198d4920a in openssl_iostream_more () from 
/usr/lib64/dovecot/libssl_iostream_openssl.so

No symbol table info available.
#4  0x7fa198d49247 in ?? () from 
/usr/lib64/dovecot/libssl_iostream_openssl.so

No symbol table info available.
#5  0x7fa19b694862 in client_unref () from 
/usr/lib64/dovecot/libdovecot-login.so.0

No symbol table info available.
#6  0x7fa19b698adc in ?? () from 
/usr/lib64/dovecot/libdovecot-login.so.0

No symbol table info available.
#7  0x7fa19b699aca in ?? () from 
/usr/lib64/dovecot/libdovecot-login.so.0

No symbol table info available.
#8  0x7fa19b3f76d5 in io_loop_call_io () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#9  0x7fa19b3f96af in io_loop_handler_run_internal () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#10 0x7fa19b3f77c5 in io_loop_handler_run () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#11 0x7fa19b3f79e8 in io_loop_run () from 

Re: v2.3.2 released

2018-06-29 Thread Tom Sommer

On 2018-06-29 16:00, Timo Sirainen wrote:

On 29 Jun 2018, at 15.28, Tom Sommer  wrote:


On 2018-06-29 15:20, Timo Sirainen wrote:

On 29 Jun 2018, at 15.05, Tom Sommer  wrote:

On 2018-06-29 14:51, Timo Sirainen wrote:

v2.3.2 is mainly a bugfix release. It contains all the changes in
v2.2.36, as well as a bunch of other fixes (mainly for v2.3-only
bugs). Binary packages are already in https://repo.dovecot.org/

A simple "yum update" will result in a ton of these errors:
Jun 29 15:02:19 stats: Error: stats: Socket supports major version 
2, but we support only 3 (mixed old and new binaries?)
Should the yum update process perhaps not restart the dovecot 
service?

It sounds like you upgraded from v2.2.x to v2.3.2? The stats was
completely changed.


From 2.3.1 to 2.3.2:

Installed:
 dovecot.x86_64 2:2.3.2-3    dovecot-mysql.x86_64 2:2.3.2-3

Updated:
 dovecot-pigeonhole.x86_64 2:2.3.2-3

Replaced:
 dovecot.x86_64 2:2.3.1-1    dovecot-mysql.x86_64 2:2.3.1-1


That's weird. Something appears to be connecting to the stats socket
with old protocol version. What your doveconf -n?


service stats {
  chroot = empty
  client_limit = 0
  drop_priv_before_exec = no
  executable = stats
  extra_groups =
  group =
  idle_kill = 4294967295 secs
  privileged_group =
  process_limit = 1
  process_min_avail = 0
  protocol =
  service_count = 0
  type =
  unix_listener stats-reader {
group =
mode = 0600
user =
  }
  unix_listener stats-writer {
group = $default_internal_group
mode = 0660
user =
  }
  user = $default_internal_user
  vsz_limit = 18446744073709551615 B
}


This is the full error-log that happened after "yum update": 
https://pastebin.com/tUJaehdV


"Jun 29 15:02:20" is the "/etc/init.d/dovecot restart"

Maybe it was a one-time thing for my setup only, I don't know -
although I find it hard to understand how a stats-writer socket with an
old protocol version (major version 2) could have been running under
2.3.1 for several months - and the same thing happened on all of my 22
director-backend servers.


Oh well.
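The "Socket supports major version 2, but we support only 3" error comes from Dovecot's internal handshake, where each side announces a protocol version and the major numbers must match exactly. A toy sketch of that kind of check (the tab-separated line format here is a simplification, not Dovecot's actual wire protocol):

```python
def check_handshake(line, expected_major):
    """Parse a 'VERSION<TAB>major<TAB>minor' announcement and reject
    the peer when the major version differs (minors may differ)."""
    parts = line.rstrip("\n").split("\t")
    if len(parts) != 3 or parts[0] != "VERSION":
        raise ValueError("not a VERSION line: %r" % line)
    major, minor = int(parts[1]), int(parts[2])
    if major != expected_major:
        raise ValueError(
            "socket supports major version %d, but we support only %d "
            "(mixed old and new binaries?)" % (major, expected_major))
    return minor
```

This is why a binary upgrade without a full service restart produces exactly the log line quoted above: an old process keeps talking major version 2 to a freshly started major-version-3 peer.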


Re: v2.3.2 released

2018-06-29 Thread Tom Sommer

On 2018-06-29 15:20, Timo Sirainen wrote:

On 29 Jun 2018, at 15.05, Tom Sommer  wrote:


On 2018-06-29 14:51, Timo Sirainen wrote:


v2.3.2 is mainly a bugfix release. It contains all the changes in
v2.2.36, as well as a bunch of other fixes (mainly for v2.3-only
bugs). Binary packages are already in https://repo.dovecot.org/


A simple "yum update" will result in a ton of these errors:

Jun 29 15:02:19 stats: Error: stats: Socket supports major version 2, 
but we support only 3 (mixed old and new binaries?)


Should the yum update process perhaps not restart the dovecot service?


It sounds like you upgraded from v2.2.x to v2.3.2? The stats was
completely changed.


From 2.3.1 to 2.3.2:

Installed:
  dovecot.x86_64 2:2.3.2-3dovecot-mysql.x86_64 2:2.3.2-3

Updated:
  dovecot-pigeonhole.x86_64 2:2.3.2-3

Replaced:
  dovecot.x86_64 2:2.3.1-1dovecot-mysql.x86_64 2:2.3.1-1


Re: v2.3.2 released

2018-06-29 Thread Tom Sommer

On 2018-06-29 14:51, Timo Sirainen wrote:


v2.3.2 is mainly a bugfix release. It contains all the changes in
v2.2.36, as well as a bunch of other fixes (mainly for v2.3-only
bugs). Binary packages are already in https://repo.dovecot.org/


A simple "yum update" will result in a ton of these errors:

Jun 29 15:02:19 stats: Error: stats: Socket supports major version 2, 
but we support only 3 (mixed old and new binaries?)


Should the yum update process perhaps not restart the dovecot service?

---
Tom


More and better logging

2018-06-25 Thread Tom Sommer
In general I feel there is a lack of debug and "when things go
wrong" logging in Dovecot (information that should perhaps be provided
behind a verbose toggle).


I was debugging problems with sasl user logins and a generic "SASL 
CRAM-MD5 authentication failed: Connection lost to authentication 
server" error in postfix, but nothing is logged in Dovecot when the 
error occurs.


After extensive debugging I finally discovered auth-penalty and
auth_penalty_to_secs by digging in the code. I suspect this to be the
cause, but truly I have no way of confirming it, since nothing is
logged when the penalty is triggered. It would be nicer if events such
as these were simply logged, for better debugging and troubleshooting.


Two cents :)

Thanks
--
Tom
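A toy sketch of the kind of escalating auth-penalty delay described above (the constants and formula here are illustrative, not Dovecot's actual implementation):

```python
def auth_penalty_delay(failures, init_secs=2, max_secs=15):
    """Exponential-backoff penalty sketch: no delay on the first
    attempt, then init_secs doubling per consecutive failure, capped
    at max_secs. Constants are illustrative only."""
    if failures == 0:
        return 0
    return min(init_secs * 2 ** (failures - 1), max_secs)
```

The point of the complaint above stands regardless of the exact formula: a delay like this is indistinguishable from "Connection lost to authentication server" unless it is logged.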


2.3.1: Core dump on quota reach

2018-06-12 Thread Tom Sommer

Just upgraded from 2.2.34 to 2.3.1:

Jun 12 16:06:38 
lmtp(exam...@example.name)<46176><8JrHEW7TH1tgtAAAd9aEAw>: Info: 
msgid=: 
save failed to INBOX: The email account that you tried to reach is over 
quota (Mailbox is full).
Jun 12 16:06:38 
lmtp(exam...@example.name)<46176><8JrHEW7TH1tgtAAAd9aEAw>: Fatal: 
master: service(lmtp): child 46176 killed with signal 11 (core dumps 
disabled - https://dovecot.org/bugreport.html#coredumps)



And also, unrelated:

Jun 12 16:21:32 lmtp(53430): Error: lmtp-server: conn 
director.local:39204 [6]: Connection lost during data transfer: 
read(director.local:39204 [6]) failed: 


--
Tom
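The "core dumps disabled" notice points to https://dovecot.org/bugreport.html#coredumps; the commonly needed system settings can be sketched as follows (a sketch only - verify against that page for your platform before relying on it):

```
# As root, before restarting dovecot and reproducing the crash:
#
#   ulimit -c unlimited            # in the shell/init script starting dovecot
#   sysctl -w fs.suid_dumpable=2   # let processes that changed uid dump core
#   sysctl -w kernel.core_pattern=/var/core/core.%e.%p   # writable core path
```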


Auth cache prevents login when non-password fields change

2018-01-23 Thread Tom Sommer
The auth cache contains all fields returned by "password_query" (in 
itself a little odd), including the fields "nologin" and "reason".


If a user is cached with "nologin=Y" and the database source is changed
so the nologin condition is no longer true, then the cache prevents
login for as long as the TTL remains?


is there any way around this?

Thanks
--
Tom
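The failure mode described above can be illustrated with a toy TTL cache: once a result (including a field like nologin) is cached, later lookups keep returning it until the entry expires, no matter what changed in the backend. Illustrative code only, not Dovecot's auth cache:

```python
class TtlCache:
    """Minimal TTL cache: serves the cached fields until expiry,
    then re-fetches from the backend."""
    def __init__(self, ttl_secs):
        self.ttl = ttl_secs
        self.entries = {}          # user -> (expires_at, fields)

    def lookup(self, user, fetch, now):
        entry = self.entries.get(user)
        if entry and now < entry[0]:
            return entry[1]        # stale fields win until expiry
        fields = fetch(user)
        self.entries[user] = (now + self.ttl, fields)
        return fields
```

The practical escape hatches are expiring the entry early or flushing the cache out of band; the toy above only expires by time.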


Re: Package repository now available

2017-12-29 Thread Tom Sommer

On 2017-12-27 16:00, aki.tu...@dovecot.fi wrote:

Dovecot now has package repository for Debian, CentOS and Ubuntu
available at https://repo.dovecot.org/

Packages are provided for 2.3 series and forward. Please let us know
if you run into any trouble with these.



Thank you so much for this. Awesome!

When do you push the packages?
Sometimes you do re-releases of the source tarballs when minor bugs are
found, so I'm wondering if it would make sense to wait a week before
releasing the rpms? That way yum-autoupdate doesn't push a possibly
faulty release onto all installations.


Thanks again.

--
Tom Sommer
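One way to get the requested delay locally (my own suggestion, not something the Dovecot repo provides) is to hold the packages with yum's versionlock plugin and upgrade deliberately:

```
# Hold the dovecot packages so yum-autoupdate skips them:
yum install yum-plugin-versionlock
yum versionlock add 'dovecot*'
# Later, when ready to upgrade:
yum versionlock delete 'dovecot*' && yum update 'dovecot*'
```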


Re: NOTIFY broken in 2.2.31?

2017-07-04 Thread Tom Sommer



On 2017-06-30 17:09, Timo Sirainen wrote:

On 29 Jun 2017, at 13.27, Bojan Smojver  wrote:


Hi,

I just updated my Dovecot to the new build in Fedora 26, which is 
based

on 2.2.31. After I did that, Evolution (3.24.3) is no longer receiving
notifications about new mail in folders other than inbox (the same
version of Evo that worked fine against Dovecot 2.2.30.2). The only
folder that still gets notifications is inbox.

I see there have been changes to NOTIFY code in this release. What
would you like me to send to you in order to debug this problem?

When I run Evo with some debugging options turned on, I can see:

[imapx:B] I/O: 'B00010 NOTIFY SET (selected (MessageNew (UID
RFC822.SIZE RFC822.HEADER FLAGS) MessageExpunge FlagChange)) (personal
(MessageNew MessageExpunge MailboxName SubscriptionChange))'
[imapx:B] I/O: 'B00010 OK NOTIFY completed (0.001 + 0.000 secs).'


So, notify setting appears to be happening and it looks pretty much 
the

same as it does against version 2.2.30.2.


I was writing some further tests for NOTIFY, and looks like I
misunderstood the RFC and wrote broken tests. Then I thought that
NOTIFY had been broken for years, but actually I just broke it only
in v2.2.31 :( Just revert this patch and it'll work:
https://github.com/dovecot/core/commit/64d2efdc4b0bdf92249840e9db89b91c8dc0f3a3.patch



New release?


Re: v2.2.30.1 released

2017-06-06 Thread Tom Sommer


On 2017-06-05 15:10, Tom Sommer wrote:

On 2017-05-31 15:24, Timo Sirainen wrote:

https://dovecot.org/releases/2.2/dovecot-2.2.30.1.tar.gz
https://dovecot.org/releases/2.2/dovecot-2.2.30.1.tar.gz.sig

Due to some release process changes I didn't notice that one important
bugfix wasn't included in the v2.2.30 release branch before I made the
release. So fixing it here with v2.2.30.1. Also included another less
important fix.

- quota_warning scripts weren't working in v2.2.30
- vpopmail still wasn't compiling

Also I guess I should mention that in v2.2.30+ the "script" service's
protocol changed to a new version. If anyone had written their own
script services (not using the included "script" binary) they would
need some changes. I haven't heard of anyone having done that though.


Just upgraded my Director and got this within minutes.

Jun 05 15:03:34 master: Warning: service(auth-worker): process_limit
(100) reached, client connections are being dropped

...

Reverted to 2.2.26 :(


Appears fixed in 2.2.30.2


Re: v2.2.30.1 released

2017-06-05 Thread Tom Sommer


On 2017-05-31 15:24, Timo Sirainen wrote:

https://dovecot.org/releases/2.2/dovecot-2.2.30.1.tar.gz
https://dovecot.org/releases/2.2/dovecot-2.2.30.1.tar.gz.sig

Due to some release process changes I didn't notice that one important
bugfix wasn't included in the v2.2.30 release branch before I made the
release. So fixing it here with v2.2.30.1. Also included another less
important fix.

- quota_warning scripts weren't working in v2.2.30
- vpopmail still wasn't compiling

Also I guess I should mention that in v2.2.30+ the "script" service's
protocol changed to a new version. If anyone had written their own
script services (not using the included "script" binary) they would
need some changes. I haven't heard of anyone having done that though.


Just upgraded my Director and got this within minutes.

Jun 05 15:03:34 master: Warning: service(auth-worker): process_limit 
(100) reached, client connections are being dropped


I did a "ps axf":

34384 ?S  0:00  \_ dovecot/auth worker: idling
34385 ?S  0:00  \_ dovecot/auth worker: idling
34386 ?S  0:00  \_ dovecot/auth worker: idling
34387 ?S  0:00  \_ dovecot/auth worker: idling
34388 ?S  0:00  \_ dovecot/auth worker: idling
34389 ?S  0:00  \_ dovecot/auth worker: idling
34390 ?S  0:00  \_ dovecot/auth worker: idling
34391 ?S  0:00  \_ dovecot/auth worker: idling
34392 ?S  0:00  \_ dovecot/auth worker: idling
34393 ?S  0:00  \_ dovecot/auth worker: idling
34394 ?S  0:00  \_ dovecot/auth worker: idling
34395 ?S  0:00  \_ dovecot/auth worker: idling
34396 ?S  0:00  \_ dovecot/auth worker: idling
34397 ?S  0:00  \_ dovecot/auth worker: idling
34398 ?S  0:00  \_ dovecot/auth worker: idling
34399 ?S  0:00  \_ dovecot/auth worker: idling
34400 ?S  0:00  \_ dovecot/auth worker: idling
34401 ?S  0:00  \_ dovecot/auth worker: idling
34402 ?S  0:00  \_ dovecot/auth worker: idling
34403 ?S  0:00  \_ dovecot/auth worker: idling
34404 ?S  0:00  \_ dovecot/auth worker: idling
34405 ?S  0:00  \_ dovecot/auth worker: idling
34406 ?S  0:00  \_ dovecot/auth worker: idling
34407 ?S  0:00  \_ dovecot/auth worker: idling
34408 ?S  0:00  \_ dovecot/auth worker: idling
34409 ?S  0:00  \_ dovecot/auth worker: idling
34410 ?S  0:00  \_ dovecot/auth worker: idling
34411 ?S  0:00  \_ dovecot/auth worker: idling
34412 ?S  0:00  \_ dovecot/auth worker: idling
34413 ?S  0:00  \_ dovecot/auth worker: idling
34414 ?S  0:00  \_ dovecot/auth worker: idling
34415 ?S  0:00  \_ dovecot/auth worker: idling
34416 ?S  0:00  \_ dovecot/auth worker: idling
34417 ?S  0:00  \_ dovecot/auth worker: idling
34418 ?S  0:00  \_ dovecot/auth worker: idling
34419 ?S  0:00  \_ dovecot/auth worker: idling
34420 ?S  0:00  \_ dovecot/auth worker: idling
34421 ?S  0:00  \_ dovecot/auth worker: idling
34422 ?S  0:00  \_ dovecot/auth worker: idling
34423 ?S  0:00  \_ dovecot/auth worker: idling
34424 ?S  0:00  \_ dovecot/auth worker: idling
34425 ?S  0:00  \_ dovecot/auth worker: idling
34426 ?S  0:00  \_ dovecot/auth worker: idling
34427 ?S  0:00  \_ dovecot/auth worker: idling
34428 ?S  0:00  \_ dovecot/auth worker: idling
34429 ?S  0:00  \_ dovecot/auth worker: idling
34430 ?S  0:00  \_ dovecot/auth worker: idling
34431 ?S  0:00  \_ dovecot/auth worker: idling
34432 ?S  0:00  \_ dovecot/auth worker: idling
34433 ?S  0:00  \_ dovecot/auth worker: idling
34434 ?S  0:00  \_ dovecot/auth worker: idling
34435 ?S  0:00  \_ dovecot/auth worker: idling
34436 ?S  0:00  \_ dovecot/auth worker: idling
34437 ?S  0:00  \_ dovecot/auth worker: idling
34438 ?S  0:00  \_ dovecot/auth worker: idling
34439 ?S  0:00  \_ dovecot/auth worker: idling
34440 ?S  0:00  \_ dovecot/auth worker: idling
34441 ?S  0:00  \_ dovecot/auth worker: idling
34442 ?S  0:00  \_ dovecot/auth worker: idling
34443 ?S  0:00  \_ dovecot/auth worker: idling
34444 ?S  0:00  \_ dovecot/auth worker: idling
34445 ?S  0:00  \_ dovecot/auth worker: idling
34446 ?S  0:00  \_ dovecot/auth worker: idling
34447 ?S  0:00  \_ dovecot/auth worker: idling
34448 ?S  0:00  \_ dovecot/auth worker: idling
34449 ?S  0:00  \_ dovecot/auth worker: idling
34450 ?S  0:00  \_ dovecot/auth worker: idling
34451 ?S  0:00  \_ dovecot/auth worker: idling
34452 ?S   

Re: v2.2.30.1 released

2017-05-31 Thread Tom Sommer


On 2017-05-31 15:24, Timo Sirainen wrote:

https://dovecot.org/releases/2.2/dovecot-2.2.30.1.tar.gz
https://dovecot.org/releases/2.2/dovecot-2.2.30.1.tar.gz.sig

Due to some release process changes I didn't notice that one important
bugfix wasn't included in the v2.2.30 release branch before I made the
release. So fixing it here with v2.2.30.1. Also included another less
important fix.

- quota_warning scripts weren't working in v2.2.30
- vpopmail still wasn't compiling

Also I guess I should mention that in v2.2.30+ the "script" service's
protocol changed to a new version. If anyone had written their own
script services (not using the included "script" binary) they would
need some changes. I haven't heard of anyone having done that though.


A quick request to just call these 2.2.31 etc.?

// Tom


Re: lmtp segfault after upgrade

2017-05-19 Thread Tom Sommer


On 2017-05-19 08:41, Timo Sirainen wrote:


On 2 May 2017, at 11.19, Timo Sirainen <t...@iki.fi> wrote:

On 2 May 2017, at 10.41, Tom Sommer <m...@tomsommer.dk> wrote:

(gdb) bt full
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
_stream = 
#1  0x7fe98391ff32 in i_stream_concat_read_next (stream=0x1efe6c0) 
at istream-concat.c:77

prev_input = 0x1ef1560
data = 0x0
data_size = 
size = 
#2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175
This isn't very obvious.. There hasn't been any changes to 
istream-concat code in 2.2.29 and I can't really think of any other 
changes either that could be causing these crashes. Do these crashes 
happen to all mail deliveries or only some (any idea of percentage)? 
Maybe only for deliveries that have multiple recipients (in different 
backends)? We'll try to reproduce, but I'd think someone else would 
have already noticed/complained if it was that badly broken..


What's your doveconf -n? Also can you try running via valgrind to see 
what it logs before the crash? :


service lmtp {
  executable = /usr/bin/valgrind --vgdb=no -q /usr/libexec/dovecot/lmtp # or whatever the lmtp path really is
}


I don't really know why this started happening with you now, but it 
should be fixed by 
https://github.com/dovecot/core/commit/16b5dc27e7db42849510403d37e3629aba14de21


Somehow I always end up being hit by the weirdest bugs :)

Thank you for the fix.

Any chance of a 2.2.30 with this included? :)
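For context, the istream-concat that crashed in the backtrace chains several input streams into one sequential stream, advancing to the next input once the current one is exhausted. A toy Python analogue of that behaviour (not Dovecot's implementation):

```python
import io

class ConcatStream:
    """Present several byte streams as one sequential stream; when one
    input runs dry, reading continues from the next input."""
    def __init__(self, streams):
        self.streams = list(streams)
        self.idx = 0

    def read(self, n):
        out = b""
        while len(out) < n and self.idx < len(self.streams):
            chunk = self.streams[self.idx].read(n - len(out))
            if not chunk:
                self.idx += 1   # current input exhausted: next input
            else:
                out += chunk
        return out
```

The crash fixed above lived in exactly this hand-over between inputs, which is why it only surfaced with particular delivery patterns.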


Re: lmtp segfault after upgrade

2017-05-18 Thread Tom Sommer



---
Tom

On 2017-05-18 10:05, Teemu Huovila wrote:

On 18.05.2017 10:55, Tom Sommer wrote:

On 2017-05-18 09:36, Teemu Huovila wrote:

Hello Tom

On 02.05.2017 11:19, Timo Sirainen wrote:

On 2 May 2017, at 10.41, Tom Sommer <m...@tomsommer.dk> wrote:


(gdb) bt full
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
   _stream = 
#1  0x7fe98391ff32 in i_stream_concat_read_next 
(stream=0x1efe6c0) at istream-concat.c:77

   prev_input = 0x1ef1560
   data = 0x0
   data_size = 
   size = 
#2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175


This isn't very obvious.. There hasn't been any changes to 
istream-concat code in 2.2.29 and I can't really think of any other 
changes either that could be causing these crashes. Do these crashes 
happen to all mail deliveries or only some (any idea of percentage)? 
Maybe only for deliveries that have multiple recipients (in 
different backends)? We'll try to reproduce, but I'd think someone 
else would have already noticed/complained if it was that badly 
broken..


What's your doveconf -n? Also can you try running via valgrind to 
see what it logs before the crash? :


service lmtp {
  executable = /usr/bin/valgrind --vgdb=no -q 
/usr/libexec/dovecot/lmtp # or whatever the lmtp path really is

}


As this is not easily reproducible with a common lmtp proxying
configuration, we would be interested in the doveconf -n output from
all involved nodes (proxy, director, backend).

Did you have a chance to try the valgrind wrapper advised by Timo?


Timo already fixed this? I think?

https://github.com/dovecot/core/commit/167dbb662c2ddedeb7b34383c18bdcf0537c0c84

The commit in question fixes an assert failure. The issue you reported
is an invalid memory access. The commit was not intended to fix your
report. Has the crash stopped happening in your environment?


Well, I downgraded to 2.2.26 and haven't looked at 2.2.29 since. So I 
guess it's not fixed.


I'll give it another look with valgrind.

// Tom


Re: lmtp segfault after upgrade

2017-05-18 Thread Tom Sommer

On 2017-05-18 09:36, Teemu Huovila wrote:

Hello Tom

On 02.05.2017 11:19, Timo Sirainen wrote:

On 2 May 2017, at 10.41, Tom Sommer <m...@tomsommer.dk> wrote:


(gdb) bt full
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
   _stream = 
#1  0x7fe98391ff32 in i_stream_concat_read_next 
(stream=0x1efe6c0) at istream-concat.c:77

   prev_input = 0x1ef1560
   data = 0x0
   data_size = 
   size = 
#2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175


This isn't very obvious.. There hasn't been any changes to 
istream-concat code in 2.2.29 and I can't really think of any other 
changes either that could be causing these crashes. Do these crashes 
happen to all mail deliveries or only some (any idea of percentage)? 
Maybe only for deliveries that have multiple recipients (in different 
backends)? We'll try to reproduce, but I'd think someone else would 
have already noticed/complained if it was that badly broken..


What's your doveconf -n? Also can you try running via valgrind to see 
what it logs before the crash? :


service lmtp {
  executable = /usr/bin/valgrind --vgdb=no -q 
/usr/libexec/dovecot/lmtp # or whatever the lmtp path really is

}


As this is not easily reproducible with a common lmtp proxying
configuration, we would be interested in the doveconf -n output from
all involved nodes (proxy, director, backend).

Did you have a chance to try the valgrind wrapper advised by Timo?


Timo already fixed this? I think?

https://github.com/dovecot/core/commit/167dbb662c2ddedeb7b34383c18bdcf0537c0c84

---
Tom


Re: lmtp segfault after upgrade

2017-05-18 Thread Tom Sommer


On 2017-05-18 09:36, Teemu Huovila wrote:

Hello Tom

On 02.05.2017 11:19, Timo Sirainen wrote:

On 2 May 2017, at 10.41, Tom Sommer <m...@tomsommer.dk> wrote:


(gdb) bt full
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
   _stream = 
#1  0x7fe98391ff32 in i_stream_concat_read_next 
(stream=0x1efe6c0) at istream-concat.c:77

   prev_input = 0x1ef1560
   data = 0x0
   data_size = 
   size = 
#2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175


This isn't very obvious.. There hasn't been any changes to 
istream-concat code in 2.2.29 and I can't really think of any other 
changes either that could be causing these crashes. Do these crashes 
happen to all mail deliveries or only some (any idea of percentage)? 
Maybe only for deliveries that have multiple recipients (in different 
backends)? We'll try to reproduce, but I'd think someone else would 
have already noticed/complained if it was that badly broken..


What's your doveconf -n? Also can you try running via valgrind to see 
what it logs before the crash? :


service lmtp {
  executable = /usr/bin/valgrind --vgdb=no -q 
/usr/libexec/dovecot/lmtp # or whatever the lmtp path really is

}


As this is not easily reproducible with a common lmtp proxying
configuration, we would be interested in the doveconf -n output from
all involved nodes (proxy, director, backend).

Did you have a chance to try the valgrind wrapper advised by Timo?


Timo already fixed this?

https://github.com/dovecot/core/commit/167dbb662c2ddedeb7b34383c18bdcf0537c0c84


---
Tom


Re: lmtp segfault after upgrade

2017-05-02 Thread Tom Sommer

On 2017-05-02 09:35, Aki Tuomi wrote:

On 2017-05-02 10:20, Tom Sommer wrote:

On 2017-05-01 19:26, Aki Tuomi wrote:

On May 1, 2017 at 8:21 PM Tom Sommer <m...@tomsommer.dk> wrote:


I just upgraded our Director to 2.2.29.1 from 2.2.26, and now my 
dmesg

and /var/log/messages are getting flooded by these errors:

lmtp[45758]: segfault at 21 ip 7fb412d3ad11 sp 7ffe83ad2df0
error 4 in libdovecot.so.0.0.0[7fb412c95000+11c000]

Any ideas?

-- Tom


Try to get a core dump and run it through gdb.



[root@director1 dovecot]# gdb /usr/libexec/dovecot/lmtp core.19749
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/libexec/dovecot/lmtp...done.
[New Thread 19749]
Reading symbols from /usr/lib/dovecot/libdovecot-lda.so.0...done.
Loaded symbols for /usr/lib/dovecot/libdovecot-lda.so.0
Reading symbols from /usr/lib/dovecot/libdovecot-storage.so.0...done.
Loaded symbols for /usr/lib/dovecot/libdovecot-storage.so.0
Reading symbols from /usr/lib/dovecot/libdovecot.so.0...done.
Loaded symbols for /usr/lib/dovecot/libdovecot.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/librt.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging 
symbols found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols 
found)...done.

[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from 
/usr/lib/dovecot/libssl_iostream_openssl.so...done.

Loaded symbols for /usr/lib/dovecot/libssl_iostream_openssl.so
Reading symbols from /usr/lib64/libssl.so.10...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib64/libssl.so.10
Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging 
symbols found)...done.

Loaded symbols for /usr/lib64/libcrypto.so.10
Reading symbols from /lib64/libgssapi_krb5.so.2...(no debugging 
symbols found)...done.

Loaded symbols for /lib64/libgssapi_krb5.so.2
Reading symbols from /lib64/libkrb5.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkrb5.so.3
Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libcom_err.so.2
Reading symbols from /lib64/libk5crypto.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libk5crypto.so.3
Reading symbols from /lib64/libz.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libkrb5support.so.0...(no debugging 
symbols found)...done.

Loaded symbols for /lib64/libkrb5support.so.0
Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkeyutils.so.1
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/libselinux.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libselinux.so.1
Core was generated by `dovecot/lmtp'.
Program terminated with signal 11, Segmentation fault.
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
298 if (v_offset >= stream->v_offset &&
Missing separate debuginfos, use: debuginfo-install 
glibc-2.12-1.209.el6_9.1.x86_64 keyutils-libs-1.4-5.el6.x86_64 
krb5-libs-1.10.3-65.el6.x86_64 libcom_err-1.41.12-23.el6.x86_64 
libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-57.el6.x86_64 
zlib-1.2.3-29.el6.x86_64

Can you run bt full please?



(gdb) bt full
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
_stream = 
#1  0x7fe98391ff32 in i_stream_concat_read_next (stream=0x1efe6c0) 
at istream-concat.c:77

prev_input = 0x1ef1560
data = 0x0
data_size = 
size = 
#2  i_stream_concat_read (stream=0x1efe6c0) at istream-concat.c:175
cstream = 0x1efe6c0
data = 0x0
size = 
data_size = 0
cur_data_pos = 
new_pos = 
new_bytes_count = 
ret = 
last_stream = 
__FUNCTION__ = "i_stream_concat_read"
#3  0x7fe98391d1f5 in i_stream_read (stream=0x1efe730) at 
istream.c:174

_stream = 0x1efe6c0
 

Re: lmtp segfault after upgrade

2017-05-02 Thread Tom Sommer

On 2017-05-01 19:26, Aki Tuomi wrote:

On May 1, 2017 at 8:21 PM Tom Sommer <m...@tomsommer.dk> wrote:


I just upgraded our Director to 2.2.29.1 from 2.2.26, and now my dmesg
and /var/log/messages are getting flooded by these errors:

lmtp[45758]: segfault at 21 ip 7fb412d3ad11 sp 7ffe83ad2df0
error 4 in libdovecot.so.0.0.0[7fb412c95000+11c000]

Any ideas?

--
Tom


Try to get a core dump and run it through gdb.



[root@director1 dovecot]# gdb /usr/libexec/dovecot/lmtp core.19749
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/libexec/dovecot/lmtp...done.
[New Thread 19749]
Reading symbols from /usr/lib/dovecot/libdovecot-lda.so.0...done.
Loaded symbols for /usr/lib/dovecot/libdovecot-lda.so.0
Reading symbols from /usr/lib/dovecot/libdovecot-storage.so.0...done.
Loaded symbols for /usr/lib/dovecot/libdovecot-storage.so.0
Reading symbols from /usr/lib/dovecot/libdovecot.so.0...done.
Loaded symbols for /usr/lib/dovecot/libdovecot.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/librt.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols 
found)...done.

[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /usr/lib/dovecot/libssl_iostream_openssl.so...done.
Loaded symbols for /usr/lib/dovecot/libssl_iostream_openssl.so
Reading symbols from /usr/lib64/libssl.so.10...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib64/libssl.so.10
Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib64/libcrypto.so.10
Reading symbols from /lib64/libgssapi_krb5.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libgssapi_krb5.so.2
Reading symbols from /lib64/libkrb5.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkrb5.so.3
Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libcom_err.so.2
Reading symbols from /lib64/libk5crypto.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libk5crypto.so.3
Reading symbols from /lib64/libz.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libkrb5support.so.0...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkrb5support.so.0
Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libkeyutils.so.1
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/libselinux.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libselinux.so.1
Core was generated by `dovecot/lmtp'.
Program terminated with signal 11, Segmentation fault.
#0  i_stream_seek (stream=0x21, v_offset=0) at istream.c:298
298 if (v_offset >= stream->v_offset &&
Missing separate debuginfos, use: debuginfo-install 
glibc-2.12-1.209.el6_9.1.x86_64 keyutils-libs-1.4-5.el6.x86_64 
krb5-libs-1.10.3-65.el6.x86_64 libcom_err-1.41.12-23.el6.x86_64 
libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-57.el6.x86_64 
zlib-1.2.3-29.el6.x86_64


Re: lmtp segfault after upgrade

2017-05-02 Thread Tom Sommer


On 2017-05-01 19:34, Aki Tuomi wrote:

On May 1, 2017 at 8:26 PM Aki Tuomi <aki.tu...@dovecot.fi> wrote:



> On May 1, 2017 at 8:21 PM Tom Sommer <m...@tomsommer.dk> wrote:
>
>
> I just upgraded our Director to 2.2.29.1 from 2.2.26, and now my dmesg
> and /var/log/messages are getting flooded by these errors:
>
> lmtp[45758]: segfault at 21 ip 7fb412d3ad11 sp 7ffe83ad2df0
> error 4 in libdovecot.so.0.0.0[7fb412c95000+11c000]
>
> Any ideas?
>
> --
> Tom

Try to get a core dump and run it through gdb.

Aki


Also, dovecot should log some kind of trace and message into syslog or
whatever log_path points to.


Only this:

May 02 09:00:34 lmtp(17467): Fatal: master: service(lmtp): child 17467 
killed with signal 11 (core dumps disabled)


.. and lots of them.


lmtp segfault after upgrade

2017-05-01 Thread Tom Sommer
I just upgraded our Director to 2.2.29.1 from 2.2.26, and now my dmesg 
and /var/log/messages are getting flooded by these errors:


lmtp[45758]: segfault at 21 ip 7fb412d3ad11 sp 7ffe83ad2df0 
error 4 in libdovecot.so.0.0.0[7fb412c95000+11c000]
lmtp[46304]: segfault at 91 ip 7f771ba64d11 sp 7ffe830635a0 
error 4 in libdovecot.so.0.0.0[7f771b9bf000+11c000]
lmtp[45775]: segfault at 21 ip 7f206ed81d11 sp 7ffd0f0daad0 
error 4 in libdovecot.so.0.0.0[7f206ecdc000+11c000]
lmtp[46649]: segfault at 21 ip 7ff4aa027d11 sp 7ffc05841430 
error 4 in libdovecot.so.0.0.0[7ff4a9f82000+11c000]
lmtp[45774]: segfault at 21 ip 7ff1f3e0bd11 sp 7ffe0ba00340 
error 4 in libdovecot.so.0.0.0[7ff1f3d66000+11c000]
lmtp[45776]: segfault at 411 ip 7f96ded95d11 sp 7ffce6401870 
error 4 in libdovecot.so.0.0.0[7f96decf+11c000]
lmtp[46896]: segfault at 21 ip 7fbf1aa70d11 sp 7ffc79e22bf0 
error 4 in libdovecot.so.0.0.0[7fbf1a9cb000+11c000]
lmtp[46695]: segfault at 21 ip 7fd5ccad6d11 sp 7ffe19e759d0 
error 4 in libdovecot.so.0.0.0[7fd5cca31000+11c000]
lmtp[45773]: segfault at 21 ip 7f0eb7b65d11 sp 7fff9d02e570 
error 4 in libdovecot.so.0.0.0[7f0eb7ac+11c000]
lmtp[47101]: segfault at 21 ip 7fa95b052d11 sp 7fff2d5485a0 
error 4 in libdovecot.so.0.0.0[7fa95afad000+11c000]
lmtp[45772]: segfault at 21 ip 7f2f52b49d11 sp 7fff99a4a6e0 
error 4 in libdovecot.so.0.0.0[7f2f52aa4000+11c000]
lmtp[46595]: segfault at 21 ip 7f0ccbdbad11 sp 7ffd17bfe9e0 
error 4 in libdovecot.so.0.0.0[7f0ccbd15000+11c000]
lmtp[45770]: segfault at 21 ip 7f94a576ed11 sp 7ffc29c2ff00 
error 4 in libdovecot.so.0.0.0[7f94a56c9000+11c000]
lmtp[45769]: segfault at 21 ip 7f34999b5d11 sp 7ffe28818350 
error 4 in libdovecot.so.0.0.0[7f349991+11c000]
lmtp[47371]: segfault at 21 ip 7f13c611fd11 sp 7ffcdb000b10 
error 4 in libdovecot.so.0.0.0[7f13c607a000+11c000]
lmtp[47419]: segfault at 21 ip 7f4010ff2d11 sp 7ffd97519420 
error 4 in libdovecot.so.0.0.0[7f4010f4d000+11c000]
lmtp[47511]: segfault at 581 ip 7fd5e3d21d11 sp 7ffce61f29c0 
error 4 in libdovecot.so.0.0.0[7fd5e3c7c000+11c000]
lmtp[47277]: segfault at 21 ip 7f04c3d6cd11 sp 7ffc421da1b0 
error 4 in libdovecot.so.0.0.0[7f04c3cc7000+11c000]
lmtp[45771]: segfault at 21 ip 7f55c690ed11 sp 7fff35521fc0 
error 4 in libdovecot.so.0.0.0[7f55c6869000+11c000]
lmtp[47543]: segfault at 21 ip 7f11ea946d11 sp 7ffdd9a97690 
error 4 in libdovecot.so.0.0.0[7f11ea8a1000+11c000]
lmtp[47669]: segfault at 21 ip 7f8f58abbd11 sp 7ffdd5573220 
error 4 in libdovecot.so.0.0.0[7f8f58a16000+11c000]
lmtp[46959]: segfault at 41 ip 7ff9d6f3ed11 sp 7ffe28dc8ea0 
error 4 in libdovecot.so.0.0.0[7ff9d6e99000+11c000]


Any ideas?

--
Tom


dsync with namespaces

2017-03-23 Thread Tom Sommer
I have a major dsync headache which I am hoping someone can help me 
with.


I have to migrate mails from a 'namespace/inbox/prefix=INBOX.' server to 
a 'namespace/inbox/prefix=' server.


So I'm trying
# dsync -o 'namespace/inbox/prefix=' sync -1 -f -u x...@xx.com tcp:1.2.3.4

It works fine for some accounts, however for others many hundreds and 
sometimes thousands of mails go missing from the destination inbox, and 
a resync doesn't pick them up. The debug output doesn't recognize these 
either.


If I remove the 'namespace/inbox/prefix=' part, then I get 'Error: 
Couldn't find namespace for mailbox X' errors.


Both source and destination are running 2.2.27

I can't imapsync :S

Thanks.
--
Tom


Custom cache_key for passdb-sql

2017-03-15 Thread Tom Sommer
The cache_key for the SQL passdb should be created automatically, but 
for some reason the service (%s) is not part of the key, and I need it 
to be.


I've tried to set a manual cache_key, but that fails:

Mar 15 11:54:02 auth: Fatal: sql /etc/dovecot/dovecot-sql.conf 
cache_key=%s%u%{real_ip}: Can't open configuration file 
/etc/dovecot/dovecot-sql.conf cache_key=%s%u%{real_ip}: No such file or 
directory



passdb {
  args = /etc/dovecot/dovecot-sql.conf cache_key=%s%u%{real_ip}
  driver = sql
}

Is this not possible?

--
Tom


Re: v2.2.28 released

2017-03-06 Thread Tom Sommer


On 2017-02-24 14:34, Timo Sirainen wrote:

http://dovecot.org/releases/2.2/dovecot-2.2.28.tar.gz
http://dovecot.org/releases/2.2/dovecot-2.2.28.tar.gz.sig


Are there any plans to do a bugfix release that includes the few issues 
seen on the mailing list, or do you consider 2.2.28 safe to upgrade to?


Thanks

---
Tom


Re: quota-warning: possible to have size also?

2017-03-05 Thread Tom Sommer



On 2017-03-05 19:13, dovecot@avv.solutions wrote:


I have a question though: when running the warning script (the example 
found in the docs), is it possible to pass the *quota size* as an 
argument also? This would be useful with per-user quota.

e.g. /some/script xx% username *xxxbytes* (order is not relevant of 
course)


+1 and quota-limit as well.
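For reference, a minimal sketch of a warning script under the argument scheme Dovecot 2.2 actually provides, assuming a dovecot.conf line like the documented example `quota_warning = storage=95%% quota-warning 95 %u` (so the script receives the percentage and the username; the quota size and limit asked for above are not passed by stock Dovecot):

```python
#!/usr/bin/env python
# Sketch of a quota-warning script.  Assumption: dovecot.conf contains
#   quota_warning = storage=95%% quota-warning 95 %u
# so argv[1] is the percentage and argv[2] the username.  The quota size
# and limit requested in this thread are NOT provided by stock Dovecot.
import sys

def format_warning(percent, user):
    # Build the notification text from the only two values we get.
    return "quota warning: user=%s has reached %s%% of quota" % (user, percent)

if __name__ == "__main__" and len(sys.argv) >= 3:
    print(format_warning(sys.argv[1], sys.argv[2]))
```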


Re: Director+NFS Experiences

2017-02-27 Thread Tom Sommer

On 2017-02-27 10:40, Sami Ketola wrote:

On 24 Feb 2017, at 21.28, Mark Moseley  wrote:
Attached. No claims are made on the quality of my code :)




With recent dovecots you probably should not use set_host_weight(server, '0') 
to mark a backend down, but instead should use the director commands 
HOST-DOWN and HOST-UP in combination with HOST-FLUSH.


This is already the case in the latest version of Poolmon
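The user-to-backend stickiness the director provides (and that health checkers like poolmon sit on top of) can be sketched like this. The hash-mod mapping is illustrative only, not Dovecot's actual weighted-ring algorithm, and the backend IPs are made up:

```python
import hashlib

def pick_backend(user, backends):
    """Map a username to a backend the way a director-style proxy must:
    deterministically, and independently of the client IP.  This plain
    hash-mod is NOT Dovecot's real algorithm, and unlike a proper hash
    ring it remaps many users when the backend list changes."""
    digest = int(hashlib.md5(user.encode("utf-8")).hexdigest(), 16)
    return backends[digest % len(backends)]

# Hypothetical backend IPs: every proxy picks the same backend for the
# same user, no matter which client IP they connect from -- the property
# that keeps shared NFS index files safe from concurrent access.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
first = pick_backend("tom@example.com", backends)
assert all(pick_backend("tom@example.com", backends) == first
           for _ in range(100))
assert first in backends
```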


Re: Changes to userdb not picked up

2017-02-08 Thread Tom Sommer
I know, but I'm changing the quota in dict:sql, so running commands on 
the mailserver itself is something I would like to avoid.


---
Tom

On 2017-02-06 14:29, Urban Loesch wrote:

You can flush the cache with: "doveadm auth cache flush $USER"

Regards
Urban


On 2017-02-06 13:59, Tom Sommer wrote:
I have my quota limits stored in userdb and auth_cache enabled 
(default settings).


When I change the quota limit the old limit is still cached for the 
user for another hour. Is there any way to prevent this from 
happening?


Thanks
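A toy model of the behaviour discussed here, assuming default auth-cache TTL semantics (the class and names are invented for illustration; the flush mirrors what `doveadm auth cache flush` does):

```python
class AuthCache:
    """Toy model of Dovecot's auth cache: userdb lookups are stored with
    a TTL, so a quota change in SQL stays invisible until the entry
    expires or is explicitly flushed."""
    def __init__(self, ttl_seconds, lookup):
        self.ttl = ttl_seconds
        self.lookup = lookup        # the "real" userdb (e.g. SQL) lookup
        self.entries = {}           # user -> (value, expiry)

    def get(self, user, now):
        entry = self.entries.get(user)
        if entry and now < entry[1]:
            return entry[0]         # served (possibly stale) from cache
        value = self.lookup(user)
        self.entries[user] = (value, now + self.ttl)
        return value

    def flush(self, user):
        self.entries.pop(user, None)  # what `doveadm auth cache flush` does

quota = {"tom": "1G"}
cache = AuthCache(3600, lambda u: quota[u])
assert cache.get("tom", now=0) == "1G"
quota["tom"] = "2G"                       # quota changed in SQL...
assert cache.get("tom", now=10) == "1G"   # ...but the cache still says 1G
cache.flush("tom")
assert cache.get("tom", now=10) == "2G"   # flush makes it visible at once
```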



Changes to userdb not picked up

2017-02-06 Thread Tom Sommer
I have my quota limits stored in userdb and auth_cache enabled (default 
settings).


When I change the quota limit the old limit is still cached for the user 
for another hour. Is there any way to prevent this from happening?


Thanks

--
Tom


Auth cache does not take %real_rip into account

2017-01-31 Thread Tom Sommer
I run a Director setup with a webmail in front; the webmail is in 
login_trusted_networks and sends the IMAP-ID x-original-ip to log the 
client IP.


If I enable auth_debug on the director, I see that the cache key 
contains the client IP, and not the %real_rip.


This is causing problems because in my passdb SQL query, I use the 
%real_rip to determine if login is allowed.


Should %real_rip not be added to the auth cache key? Or should it be the 
cache key instead of the %rip?


Thanks
--
Tom
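A toy demonstration of why the choice of cache-key fields matters here (all names and IPs are illustrative):

```python
# Model: the passdb SQL query decides on the *real* client IP, but the
# cache entry is looked up by whatever fields are in the cache key.
cache = {}

def authenticate(user, rip, real_rip, allowed, key_fields):
    fields = {"user": user, "rip": rip, "real_rip": real_rip}
    key = tuple(fields[f] for f in key_fields)
    if key in cache:
        return cache[key]           # cache hit: SQL is never consulted
    result = real_rip in allowed    # the passdb decision uses %real_rip
    cache[key] = result
    return result

allowed = {"198.51.100.7"}
webmail = "10.0.0.9"  # trusted webmail's IP, seen as %rip on the director

# Keying on %rip: two different end users behind the same webmail share
# one cache entry, so the second client gets a stale "allowed" answer.
assert authenticate("tom", webmail, "198.51.100.7", allowed, ("user", "rip")) is True
assert authenticate("tom", webmail, "203.0.113.5", allowed, ("user", "rip")) is True   # wrong!

cache.clear()
# Keying on %real_rip gives the correct per-client answer.
assert authenticate("tom", webmail, "198.51.100.7", allowed, ("user", "real_rip")) is True
assert authenticate("tom", webmail, "203.0.113.5", allowed, ("user", "real_rip")) is False
```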


Re: Quota count does not work with lock_method=dotlock

2017-01-24 Thread Tom Sommer

On 2017-01-24 10:25, Aki Tuomi wrote:

On 24.01.2017 11:13, Tom Sommer wrote:

On 2017-01-18 15:27, mkli...@gmx.de wrote:


dovecot crashes when I switch the quota tracking from dict to count.


I have the same problem, but I use 'dict:file' as quota backend -
Maybe the error is due to quota_vsizes and not 'count'.

// Tom Sommer


Hi!

What version of dovecot are you both using?


2.2.27 (c0f36b0)


Re: Quota count does not work with lock_method=dotlock

2017-01-24 Thread Tom Sommer

On 2017-01-18 15:27, mkli...@gmx.de wrote:


dovecot crashes when I switch the quota tracking from dict to count.


I have the same problem, but I use 'dict:file' as quota backend - Maybe 
the error is due to quota_vsizes and not 'count'.


// Tom Sommer


Re: Poolmon: Problem with index-locking

2017-01-10 Thread Tom Sommer


On 2017-01-10 22:34, Timo Sirainen wrote:

On 10 Jan 2017, at 20.38, Tom Sommer <m...@tomsommer.dk> wrote:


I have Poolmon (https://github.com/brandond/poolmon) set up. When it 
does all the checks concurrently, obviously there are locking issues 
on each mailserver it tests:


"Warning: Locking transaction log file 
/indexes/dovecot.list.index.log took 60 seconds (syncing)"


It's just an empty mailbox.

Is there any way to do a login test, without locking the index files? 
Hence avoiding these warnings/errors?


Are they accessing the same mail account on all the backend servers?


Yes.


I guess that could be troublesome, although I don't know why it would
still take 60 seconds.


Well, if all backends try to log in at the same time, it will naturally 
cause NFS locking issues, as they are all fighting for a lock.

Plus, my NFS box is having a hard time atm. :)


Anyway, better would be if it used a backend-specific mail user, but I'm 
not sure if poolmon supports that now.


I suppose, it just feels like the 'wrong' test case. Would it be better 
if poolmon could send some kind of 'check' signal to doveadm on the 
backend to do a test? Or if you could perform a login which did not lock 
the indexes.


Anyway, my log is flooded with these warnings atm.


Poolmon: Problem with index-locking

2017-01-10 Thread Tom Sommer
I have Poolmon (https://github.com/brandond/poolmon) set up. When it 
does all the checks concurrently, obviously there are locking issues on 
each mailserver it tests:


"Warning: Locking transaction log file 
/indexes/dovecot.list.index.log took 60 seconds (syncing)"


It's just an empty mailbox.

Is there any way to do a login test, without locking the index files? 
Hence avoiding these warnings/errors?


Thanks.
--
Tom


Index-cache sizes

2016-11-21 Thread Tom Sommer
I had a customer with an INBOX cache of 400MB+ (5.5GB maildir) which 
gave me "Error: mmap_anon(474165248) failed: Cannot allocate memory" 
errors, then I deleted the cache files and ran 'doveadm index -u 
x...@example.com "*"', now the INBOX cache is 7MB


What is up with that? Is the cache now missing something, or does it make 
sense to wipe the cache files and reindex the cache every so often?


--
Tom


Re: Errors with count:User quota and NFS

2016-11-04 Thread Tom Sommer


On 2016-11-01 09:47, Tom Sommer wrote:

On 2016-10-31 22:04, Timo Sirainen wrote:

Oct 31 10:52:37 imap(x...@.xx): Warning: Locking transaction log 
file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 
seconds (syncing)
Oct 31 10:52:37 imap(x...@xxx.xx): Warning: Locking transaction log 
file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 
seconds (syncing)


This just means something is being slow. Not necessarily a problem.
Although it could also indicate a deadlock. Is this Maildir? Did you
say you were using lock_method=dotlock?


I removed dotlock some time ago (using director) and switched to:

lock_method = fcntl
mail_fsync = always

With "count" as quota backend I get a lot of these errors on the 
director:


imap-login: Error: proxy(x...@.xxx): Login for xxx.xxx.xxx.xxx:143
timed out in state=2 (after 30 secs, local=x:58478):
user=<x...@.dk>, method=CRAM-MD5, rip=, lip=x, TLS,
session=<pTl0UTlAV8VWNF5+>

on the server:

imap(x...@.xxx): Warning: Locking transaction log file
/mnt/nfs/.dk/x/indexes/.INBOX/dovecot.index.log took 32
seconds (appending)


Actually this is worse than I thought. I don't know if it's because of 
the lock, or it's a general bug in 'count', but whenever I switch quota 
backend to 'count', customers complain that they aren't receiving any 
mails.
Mails are stored correctly on the server, but it seems Dovecot doesn't 
show them, perhaps due to corrupt/hanging/locked indexes.


So either 'count' is very I/O sensitive by design, and so useless on 
NFS, or there is some bug in there that breaks indexing.


Using maildir and lmtp.

The only errors I see in the logs, are the ones pasted here.


2.2.26.0: Error: redis: Unexpected input (state=0): -ERR max number of clients reached

2016-11-01 Thread Tom Sommer

I use redis as quota backend (currently).

After upgrading to 2.2.26.0 I see a ton of "Error: redis: Unexpected 
input (state=0): -ERR max number of clients reached" errors.


It looks like either more Redis connections are being made, or Redis 
connections don't time out/are reused correctly anymore.


No such errors were seen in 2.2.24.

--
Tom
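A toy model of the suspected regression (the server class is invented; it only mimics redis's maxclients check):

```python
class ToyRedis:
    """Toy server enforcing a maxclients limit, to contrast a
    connection-per-lookup pattern (which leaks descriptors until the
    server answers '-ERR max number of clients reached') with reusing
    one connection."""
    def __init__(self, maxclients):
        self.maxclients = maxclients
        self.clients = 0

    def connect(self):
        if self.clients >= self.maxclients:
            raise RuntimeError("-ERR max number of clients reached")
        self.clients += 1

    def close(self):
        self.clients -= 1

# Leaky pattern: a new connection per quota lookup, never closed.
leaky = ToyRedis(maxclients=3)
err = None
try:
    for _ in range(5):
        leaky.connect()
except RuntimeError as e:
    err = str(e)
assert err == "-ERR max number of clients reached"

# Reusing a single connection stays under the limit indefinitely.
pooled = ToyRedis(maxclients=3)
pooled.connect()
for _ in range(1000):
    pass  # every lookup travels over the same connection
assert pooled.clients == 1
```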


Re: Errors with count:User quota and NFS

2016-11-01 Thread Tom Sommer

On 2016-10-31 22:04, Timo Sirainen wrote:

Oct 31 10:52:37 imap(x...@.xx): Warning: Locking transaction log 
file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 
seconds (syncing)
Oct 31 10:52:37 imap(x...@xxx.xx): Warning: Locking transaction log 
file /mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 
seconds (syncing)


This just means something is being slow. Not necessarily a problem.
Although it could also indicate a deadlock. Is this Maildir? Did you
say you were using lock_method=dotlock?


I removed dotlock some time ago (using director) and switched to:

lock_method = fcntl
mail_fsync = always

With "count" as quota backend I get a lot of these errors on the 
director:


imap-login: Error: proxy(x...@.xxx): Login for xxx.xxx.xxx.xxx:143 
timed out in state=2 (after 30 secs, local=x:58478): 
user=, method=CRAM-MD5, rip=, lip=x, TLS, 
session=


on the server:

imap(x...@.xxx): Warning: Locking transaction log file 
/mnt/nfs/.dk/x/indexes/.INBOX/dovecot.index.log took 32 seconds 
(appending)


Re: Errors with count:User quota and NFS

2016-10-31 Thread Tom Sommer

On 2016-10-31 11:01, Tom Sommer wrote:

I upgraded to 2.2.26.0 and enabled count as quota backend, expecting
the recent fixes to allow me to use the backend, however I get the
following errors:


It just occurred to me that the reason for the locking/errors may be 
that it is big mailboxes being recalculated?


---
Tom
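A toy contrast between the two backends, consistent with that guess (sizes and names are made up): `count` has to walk every message while the index is locked, whereas `dict` reads one precomputed value:

```python
# A "big mailbox": 10,000 messages with made-up sizes.
mail_sizes = list(range(1, 10001))

def quota_count(sizes):
    # quota=count: usage is derived by visiting every message's size,
    # so a recalculation is a long walk done under the index lock --
    # painful on NFS for large mailboxes.
    return sum(sizes)

precomputed = {"tom": sum(mail_sizes)}

def quota_dict(user):
    # quota=dict: a single key lookup against a precomputed total.
    return precomputed[user]

assert quota_count(mail_sizes) == quota_dict("tom") == 50005000
```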


Errors with count:User quota and NFS

2016-10-31 Thread Tom Sommer
I upgraded to 2.2.26.0 and enabled count as quota backend, expecting the 
recent fixes to allow me to use the backend, however I get the following 
errors:


Oct 31 10:52:13 imap(x...@.xx): Error: Transaction log file 
/mnt/nfs/.xx/xxx/indexes/dovecot.list.index.log: marked corrupted
Oct 31 10:52:15 imap(x...@.xx): Error: Transaction log file 
/mnt/nfs/.xx/xxx/indexes/dovecot.list.index.log: marked corrupted
Oct 31 10:52:37 imap(x...@.xx): Warning: Locking transaction log file 
/mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds 
(syncing)
Oct 31 10:52:37 imap(x...@xxx.xx): Warning: Locking transaction log file 
/mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds 
(syncing)
Oct 31 10:52:43 imap(x...@xxx.xx): Error: Transaction log file 
/mnt/nfs/xxx.xx/xx/indexes/dovecot.list.index.log: marked corrupted
Oct 31 10:52:52 imap(x...@xxx.xx): Warning: Locking transaction log file 
/mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 31 seconds 
(syncing)
Oct 31 10:53:04 imap(xxx@xxx.x): Warning: Locking transaction log file 
/mnt/nfs/xxx.xx/xxx/indexes/dovecot.list.index.log took 60 seconds 
(syncing)
Oct 31 10:53:06 imap(x...@xxx.xx): Warning: Locking transaction log file 
/mnt/nfs/xxx.xxx/xxx/indexes/dovecot.list.index.log took 31 seconds 
(syncing)


(all different accounts)

If I disable count as backend, there are no errors.

I'm running my mail-storage on NFS, so I suspect the errors are due to 
locking? So is there no way to run count as quota backend with NFS?


Thanks.
--
Tom


Re: Change dovecot hostname

2016-08-22 Thread Tom Sommer

Removing the headers entirely was discussed:

http://dovecot.markmail.org/search/?q=received#query:received+page:1+mid:t4utsjcionjcfwce+state:results

Don't know if it was forgotten for 2.3, but hope not :)

---
Tom

On 2016-08-22 15:14, Scott W. Sander wrote:
Here are some example headers from an email sent from an internal 
Exchange

account to an account on Dovecot (u...@domain.test):

---

Received: from mail.domain.test
by appserver4.domain.com (Dovecot) with LMTP id 
z7RGLzH4uldlPAAAxdv4Dw

for ; Mon, 22 Aug 2016 09:03:45 -0400
Received: from mail.domain.com (exchangefe1.domain.com [10.1.0.225])
(using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
(No client certificate requested)
by mail.domain.test (Postfix) with ESMTPS id BEB1B200C4
for ; Mon, 22 Aug 2016 09:03:45 -0400 (EDT)
Received: from exchangebe2.domain.com
 ([fe80::31cb:366e:5ce0:a40c]) by exchangefe1.domain.com ([::1])
 with mapi id 14.03.0294.000; Mon, 22 Aug 2016 09:03:46 -0400

---

I want the part that says "by appserver4.domain.com (Dovecot)" to say 
"by
mail.domain.test (Dovecot)".  I don't want it to say the FQDN of the 
actual

host server that is running Dovecot.

The server currently referenced as "mail.domain.test" in the headers is
postfix running on the same machine.

Thanks in advance!

On Fri, Aug 19, 2016 at 7:11 PM Joseph Tam  wrote:


"Scott W. Sander" writes:

> I have noticed that the name of my private server running dovecot appears
> in email headers rather than the public-friendly name of my server.

Which headers are you taking about?

If you're talking about Received: headers, that's usually inserted by
your MTA, not dovecot.

Joseph Tam 



Header's corrupted flag is set

2016-08-11 Thread Tom Sommer

I have a ton of these errors in my logs:

Aug 11 14:16:04 imap(x...@x.): Error: Corrupted index 
file./dovecot.list.index: Header's corrupted flag is set


Does Dovecot not automatically fix these? Or what is the correct action 
to take?


--
Tom

