Are mailbox_list_index and maildir_very_dirty_syncs in conflict?
Hi, I'm running Dovecot 2.2.19 with Maildir as storage and LDA for delivery. I noticed that with both mailbox_list_index=yes and maildir_very_dirty_syncs=yes set, the IMAP STATUS command doesn't "see" new messages in sub-folders (like Spam).

For example, a new message has arrived in the Spam folder (I can see it in Maildir/new/) but IMAP STATUS doesn't see it:

  . STATUS Spam (MESSAGES UNSEEN)
  * STATUS Spam (MESSAGES 139 UNSEEN 0)

(in this case the mtime of the "dovecot.list.index" files is not updated)

If I comment out either mailbox_list_index=yes or maildir_very_dirty_syncs=yes, I can see the new message:

  . STATUS Spam (MESSAGES UNSEEN)
  * STATUS Spam (MESSAGES 140 UNSEEN 1)

(in this case the mtime of the "dovecot.list.index" files is updated after the STATUS command)

The SELECT command, by contrast, always works fine.

One important note: my Dovecot LDA configuration (on the MX servers) doesn't update index files:

  protocol lda {
    mail_location = maildir:~/Maildir:INDEX=MEMORY
    mail_plugins = quota acl expire fts fts_solr zlib sieve
  }

This is because I need to filter incoming email via Sieve, but since I cannot use LMTP (and Director) on the MX hosts (I do have Director for POP/IMAP access), the only way to avoid corrupting dovecot.index files is not to update them on mail delivery. But according to http://wiki2.dovecot.org/MailLocation/Maildir this shouldn't be a problem (Optimizations: "maildir_very_dirty_syncs=yes ... It's still safe to deliver new mails to new/ ...") since the MX delivers new emails into new/.

Is this a Dovecot limitation, or a problem with my configuration?
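As a quick cross-check of the symptom (a sketch added for illustration, with made-up paths, not part of the original report): the count that STATUS should return for a Maildir folder can be read straight off the filesystem, so a stale list index is easy to spot by comparing the two numbers.

```shell
#!/bin/sh
# Sketch: count messages in a Maildir folder directly on disk.
# The folder path below is a made-up example.
MAILDIR="${TMPDIR:-/tmp}/demo-maildir/.Spam"
mkdir -p "$MAILDIR/new" "$MAILDIR/cur"
# Simulate a message freshly delivered into new/ (as the MX's LDA does)
touch "$MAILDIR/new/1449300000.M1P1.mx1"
# On-disk message count = files in new/ plus cur/
disk_count=$(find "$MAILDIR/new" "$MAILDIR/cur" -type f | wc -l)
echo "on-disk messages: $disk_count"
```

With a healthy index, `doveadm mailbox status -u <user> messages Spam` should report the same number; if IMAP STATUS reports fewer, the folder's entry in dovecot.list.index is stale, which matches the behaviour described above.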
Thanks. My configuration:

  # 2.2.19
  # Pigeonhole version 0.4.9
  auth_cache_negative_ttl = 5 mins
  auth_cache_size = 10 M
  auth_cache_ttl = 20 mins
  auth_master_user_separator = *
  auth_worker_max_count = 50
  deliver_log_format = msgid=%m, from=%f, subject="%s": %$
  dict {
    acl = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
    expire = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
    sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  }
  disable_plaintext_auth = no
  first_valid_gid = 89
  first_valid_uid = 89
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep tb-lsub-flags
  imap_idle_notify_interval = 29 mins
  imap_logout_format = in=%i out=%o session=<%{session}>
  last_valid_gid = 89
  last_valid_uid = 89
  lda_mailbox_autocreate = yes
  lda_mailbox_autosubscribe = yes
  listen = 10.96.3.156
  login_trusted_networks = 10.0.0.0/24
  mail_fsync = always
  mail_location = maildir:~/Maildir
  mail_plugins = quota acl expire fts fts_solr zlib
  mailbox_list_index = yes
  maildir_very_dirty_syncs = yes
  managesieve_notify_capability = mailto
  managesieve_sieve_capability = fileinto reject envelope encoded-character subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate vnd.dovecot.duplicate
  mmap_disable = yes
  namespace {
    list = children
    location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u
    prefix = shared/%%n/
    separator = /
    subscriptions = no
    type = shared
  }
  namespace inbox {
    inbox = yes
    location =
    mailbox Drafts {
      auto = subscribe
      special_use = \Drafts
    }
    mailbox Sent {
      auto = subscribe
      special_use = \Sent
    }
    mailbox "Sent Messages" {
      special_use = \Sent
    }
    mailbox Spam {
      auto = subscribe
      special_use = \Junk
    }
    mailbox Trash {
      auto = subscribe
      special_use = \Trash
    }
    prefix =
    separator = /
  }
  passdb {
    args = /etc/dovecot/dovecot-deny-sql.conf.ext
    deny = yes
    driver = sql
  }
  passdb {
    args = /etc/dovecot/extra/master-users
    driver = passwd-file
    master = yes
    pass = yes
  }
  passdb {
    args = /etc/dovecot/dovecot-sql.conf.ext
    driver = sql
  }
  plugin {
    acl = vfile
    acl_shared_dict = proxy::acl
    antispam_backend = mailtrain
    antispam_mail_notspam = --ham
    antispam_mail_sendmail = /usr/bin/sa-learn
    antispam_mail_spam = --spam
    antispam_spam = Spam
    antispam_trash = Trash
    expire = Trash
    expire2 = Spam
    expire_dict = proxy::expire
    fts = solr
    fts_solr = url=http://10.0.0.1:8983/solr/
    quota = maildir:UserQuota
    quota2 = dict:Quota Usage::noenforcing:proxy::sqlquota
    quota_grace = 10M
    quota_rule2 = Trash:storage=+100M
    quota_warning = storage=95%% quota-warning 95 %u
    quota_warning2 = storage=80%% quota-warning 80 %u
    sieve = ~/.dovecot.sieve
    sieve_before = /etc/dovecot/sieve/before.sieve
    sieve_dir = ~/sieve
    sieve_extensions = +vnd.dovecot.duplicate -vacation
    zlib_save = gz
    zlib_save_level = 6
  }
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_fast_size_lookups = yes
  pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s, bytes=%i/%o, session=<%{session}>
  protocols = imap pop3 sieve
  sendmail_path = /var/qmail/bin/sendmail
  service auth {
    client_limit = 6524
    unix_listener auth-userdb {
      group = vchkpw
      mode = 0660
      user = vpopmail
    }
  }
  service dict {
    process_limit = 500
    unix_listener dict {
      group = vchkpw
      mode = 0660
      user = vpopmail
    }
  }
  service
Dovecot cluster using GlusterFS
Hello,

I have recently set up a mailserver solution using a 2-node master-master setup (mainly based on MySQL M-M replication and GlusterFS with a 2-replica volume) on Ubuntu 14.04 (Dovecot 2.2.9). Unfortunately, even with shared-storage-aware settings:

  mail_nfs_index = yes
  mail_nfs_storage = yes
  mail_fsync = always
  mmap_disable = yes

..I hit strange issues pretty soon, especially when a user was manipulating the same mailbox from multiple devices at the same time. Most of the issues were corrupted indexes, which were solved easily by just putting the indexes on the local storage of each node:

  mail_location = maildir:/srv/mail/%d/%u:INDEX=/var/lib/dovecot/index/%d/%u

But I still hit issues like this one:

  dovecot: lmtp(6276, u...@example.com): Error: Broken file /srv/mail/example.com/u...@example.com/dovecot-uidlist line 8529: UIDs not ordered (8527 >= 8527)

I am not sure how serious that is, or whether it is possible to solve or work around it.

Anyway, because of the above and the high possibility of GlusterFS split-brains, I have decided to set up Dovecot Director according to the docs [1], but I have a couple of questions:

- Is custom monitoring still required? Poolmon [2] is 4 years old, so I would assume there has been some progress since then?
- It's not possible to have the same hosts act as backends and directors in Dovecot <2.2.17. I can backport a newer Dovecot for Ubuntu Trusty, so this is not an issue, but..
- The documentation states that this still doesn't work for LMTP [3]? That is probably important for my setup, because both Postfix servers use dovecot-lmtp for mail delivery, so there can still be some issues (though probably less frequent?) when both servers deliver new mail for one user at once. So do I really have to split directors from backends?

Does anyone have experience with a clustered Dovecot setup? Why is Dovecot behaving so badly when it claims to be shared-storage friendly? Are these issues specific to older Dovecot? Or is there something wrong with my architecture design?
Thanks for any help,
Filip

[1] http://wiki2.dovecot.org/Director
[2] https://github.com/brandond/poolmon/
[3] "LMTP however doesn't currently support mixing recipients to both being proxied and store locally."

BTW, if someone is interested in SaltStack, here are the Salt formulas for the Dovecot + Postfix + GlusterFS + Roundcube + Mailman setup we are using:

https://github.com/tcpcloud/salt-formula-dovecot
https://github.com/tcpcloud/salt-formula-postfix
https://github.com/tcpcloud/salt-formula-roundcube
https://github.com/tcpcloud/salt-formula-glusterfs
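For reference, the Director setup from the wiki [1] boils down to a configuration along these lines. This is a trimmed sketch with made-up placeholder addresses (192.0.2.x); the full listener set should be taken from the wiki page itself, not from this sketch:

```
# Director proxy sketch (illustrative addresses only; see [1] for details)
director_servers = 192.0.2.10 192.0.2.11        # the director ring
director_mail_servers = 192.0.2.20 192.0.2.21   # the Dovecot backends

service director {
  unix_listener login/director {
    mode = 0666
  }
  inet_listener {
    port = 9090   # directors talk to each other on this port
  }
}
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}
```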
Re: v2.2.20 release candidate released
On 12/05/2015 04:32 AM, Gerhard Wiesinger wrote:
> like in nginx

And OCSP stapling would be nice too :-)
Re: v2.2.20 release candidate released
On 03.12.2015 14:51, Timo Sirainen wrote:
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.20.rc1.tar.gz
> http://dovecot.org/releases/2.2/rc/dovecot-2.2.20.rc1.tar.gz.sig
>
> v2.2.20 will probably be released tomorrow or maybe during the weekend.
>
> + ssl_options: Added support for no_ticket

Hello Timo,

great to see that insecure session tickets (which violate PFS) can be disabled. Is it possible to configure a secure session caching mechanism, e.g. like in nginx: https://bjornjohansen.no/optimizing-https-nginx

Thnx.

Ciao,
Gerhard
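For archive readers: per the changelog line quoted above, turning off session tickets in v2.2.20 appears to be a single setting (a sketch based only on that changelog entry):

```
# dovecot.conf: disable TLS session tickets (no_ticket support added in v2.2.20)
ssl_options = no_ticket
```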
Re: Dovecot cluster using GlusterFS
Hi Filip,

On 12/05/2015 10:42 AM, Filip Pytloun wrote:
> I have recently setup mailserver solution using 2-node master-master
> setup (mainly based on MySQL M-M replication and GlusterFS with 2
> replica volume) on Ubuntu 14.04 (Dovecot 2.2.9).

That's not a good idea, for several reasons - see below.

> Anyway because of the above and high possibility of GlusterFS
> split-brains, I have decided to setup Dovecot Director according to the
> docs [1] but I have a couple of questions:
> - is custom monitoring still required? Poolmon [2] is 4 year old so I
>   would suppose there's some progress since that?

When using the Dovecot director, poolmon is strongly recommended - see the official Dovecot wiki. The tool may seem old, but the task has been the same for years ;-)

> - it's not possible to have same backends and directors in Dovecot
>   <2.2.17. I can backport newer Dovecot for Ubuntu Trusty, so this is
>   not an issue, but..

You have to run 2 instances of Dovecot when running the director and the backend on the same server. Otherwise, you need 2 separate systems.

> - documentation states that it still doesn't work for LMTP [3]? Which is
>   probably important for my setup, because both Postfix servers are
>   using dovecot-lmtp for mail delivery so there can be still some issues
>   (but probably less frequent?) when both servers will deliver new mails
>   for one user at once. So do I really have to split directors from
>   backends?

At the moment, I see no requirement for routing LMTP through the directors. When using Postfix to deliver e-mails to the backend, do this directly with a corresponding MX record.

> Why is Dovecot behaving so bad when it pretends to be shared storage
> friendly? Are these issues only specific for older Dovecot? Or is there
> something wrong in my architecture design?

IMHO, this is not a problem of Dovecot. In general, using cluster filesystems in such a setup is not a good idea. You've already mentioned all the problems of such a setup. In particular, the performance of GlusterFS is absolutely not suitable for a (bigger) mailserver cluster.

Asking any search engine for "dovecot clustering" shows a lot of results for good cluster designs. In particular, a shared mailbox storage should be avoided. Here, Dovecot's internal replication mechanism on the backends seems to be the best solution. The rest of the setup (e.g., clustered directors or SMTP servers) is trivial.

Since MUAs can only use A records, and the behavior with round-robin A records is catastrophic, a service IP address is recommended. Here, keepalived is your friend.

HTH.

Best regards,
Gordon

--
Technischer Leiter & stellv. Direktor
Universitätsrechenzentrum (URZ)
E.-M.-Arndt-Universität Greifswald
Felix-Hausdorff-Str. 12
17489 Greifswald
Germany
Tel. +49 3834 86 1456
Fax. +49 3834 86 1401
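Gordon's keepalived suggestion can be sketched as a minimal VRRP configuration; the interface name, router ID, and service address below are placeholders, not values from this thread:

```
# /etc/keepalived/keepalived.conf (illustrative values)
vrrp_instance MAIL_VIP {
    state MASTER              # BACKUP on the standby node
    interface eth0            # placeholder interface name
    virtual_router_id 51
    priority 100              # lower priority on the BACKUP node
    advert_int 1
    virtual_ipaddress {
        192.0.2.100/24        # the service IP the MUAs connect to
    }
}
```

Both nodes run the same configuration apart from state and priority, and the MUA-facing DNS A record points at the virtual address.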
LIST MANAGEMENT BROKEN
Timo, for two days I have been trying to unsubscribe from this list using the mailman email interface. The list server does not send me a confirmation request. Please fix your server and remove me.
Re: Dovecot cluster using GlusterFS
On 05.12.2015 10:42, Filip Pytloun wrote:
> Hello,
>
> I have recently setup mailserver solution using 2-node master-master
> setup (mainly based on MySQL M-M replication and GlusterFS with 2
> replica volume) on Ubuntu 14.04 (Dovecot 2.2.9). Unfortunately even
> with shared-storage-aware setting:
>
> mail_nfs_index = yes
> mail_nfs_storage = yes
> mail_fsync = always
> mmap_disable = yes

These settings alone don't solve the problems of shared storage.

> ..I have hit strange issues pretty soon especially when user was
> manipulating same mailbox from multiple devices at the same time. Most
> issues was about corrupted indexes which was solved easily by just
> putting them on local storage of each node:
>
> mail_location = maildir:/srv/mail/%d/%u:INDEX=/var/lib/dovecot/index/%d/%u
>
> But I still hit issues like this one:
>
> dovecot: lmtp(6276, u...@example.com): Error: Broken file
> /srv/mail/example.com/u...@example.com/dovecot-uidlist line 8529:
> UIDs not ordered (8527 >= 8527)
>
> Which I am not sure how serious it is or if it's possible to solve or
> workaround?

You need Director for POP/IMAP and also for LMTP; that solves all the "Broken file" and "corrupted indexes" problems.

> Anyway because of the above and high possibility of GlusterFS
> split-brains, I have decided to setup Dovecot Director according to the
> docs [1] but I have a couple of questions:
> - is custom monitoring still required? Poolmon [2] is 4 year old so I
>   would suppose there's some progress since that?

For me, poolmon works fine.

> - it's not possible to have same backends and directors in Dovecot
>   <2.2.17. I can backport newer Dovecot for Ubuntu Trusty, so this is
>   not an issue, but..

Yes, it is possible (also with <2.2.17): create two instances, e.g. dovecot and director, with two config directories /etc/dovecot/ and /etc/director/, and bind them to different IPs.

> - documentation states that it still doesn't work for LMTP [3]? Which is
>   probably important for my setup, because both Postfix servers are
>   using dovecot-lmtp for mail delivery so there can be still some issues
>   (but probably less frequent?) when both servers will deliver new mails
>   for one user at once. So do I really have to split directors from
>   backends?

I'm running Director and backend on the same server for POP/IMAP, and in another configuration the Director for LMTP is on the same server as well (but with 2.2.19).

> Anyone has experience with clustered Dovecot setup? Why is Dovecot
> behaving so bad when it pretends to be shared storage friendly? Are
> these issues only specific for older Dovecot? Or is there something
> wrong in my architecture design?

You need Director. Dovecot has no problems with shared storage; big installations always use shared storage (like NFS).

--
Alessio Cecchi
Postmaster AT http://www.qboxmail.it
http://www.linkedin.com/in/alessice
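The two-instance layout described in the replies above might look roughly like this; base_dir values, paths, and IPs are illustrative placeholders, not taken from anyone's actual setup:

```
# /etc/director/dovecot.conf -- the director instance (sketch)
base_dir = /var/run/director/      # must differ from the backend's base_dir
listen = 192.0.2.10                # director's own IP
director_servers = 192.0.2.10 192.0.2.11
director_mail_servers = 192.0.2.20 192.0.2.21

# /etc/dovecot/dovecot.conf -- the backend instance (sketch)
base_dir = /var/run/dovecot/
listen = 192.0.2.20                # backend's own IP

# each instance is then started with its own config:
#   dovecot -c /etc/director/dovecot.conf
#   dovecot -c /etc/dovecot/dovecot.conf
```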