make check failing in CentOS 6

2017-02-27 Thread Peter Ajamian
Dovecot builds just fine, but fails the tests in src/lib-index.

Note that reverting this commit fixes the issue:
https://github.com/dovecot/core/commit/dfa4b048ec9a174a42d6668e94501db2fb70793a
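(The failing asserts below compare day_stamp against the libc global "timezone". One side note, which is my speculation rather than anything established in this thread: POSIX only guarantees that "timezone" is set once tzset() has run, so code that reads the global before anything has triggered tzset() can see 0, which would skew exactly this kind of comparison. A standalone illustration, not Dovecot code:)

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
          /* POSIX: the global `timezone` (seconds west of UTC) is set by
             tzset(); localtime() calls tzset() implicitly, but reading
             the global before either has run may yield 0. */
          printf("before tzset(): timezone = %ld\n", (long)timezone);
          tzset();
          printf("after tzset():  timezone = %ld\n", (long)timezone);
          return 0;
  }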

$ make check
for bin in test-mail-index-map test-mail-index-modseq
test-mail-index-sync-ext test-mail-index-transaction-finish
test-mail-index-transaction-update test-mail-transaction-log-append
test-mail-transaction-log-view; do \
  if !  ./$bin; then exit 1; fi; \
done
mail index map lookup seq range .. : ok
0 / 1 tests failed
mail_transaction_log_file_get_modseq_next_offset() ... : ok
0 / 1 tests failed
mail index sync ext atomic inc ... : ok
0 / 1 tests failed
mail index transaction finish flag updates n_so_far=0  : ok
mail index transaction finish flag updates n_so_far=1  : ok
mail index transaction finish flag updates n_so_far=2  : ok
mail index transaction finish check conflicts n_so_far=0 . : ok
mail index transaction finish check conflicts n_so_far=1 . : ok
mail index transaction finish check conflicts n_so_far=2 . : ok
mail index transaction finish modseq updates n_so_far=0 .. : ok
mail index transaction finish modseq updates n_so_far=1 .. : ok
mail index transaction finish modseq updates n_so_far=2 .. : ok
mail index transaction finish expunges n_so_far=0  : ok
mail index transaction finish expunges n_so_far=1  : ok
mail index transaction finish expunges n_so_far=2  : ok
0 / 12 tests failed
mail index append  : ok
mail index append with uids .. : ok
mail index flag update fast paths  : ok
mail index flag update simple merges . : ok
mail index flag update complex merges  : ok
mail index flag update random  : ok
mail index flag update appends ... : ok
mail index cancel flag updates ... : ok
mail index transaction get flag update pos ... : ok
mail index modseq update . : ok
mail index expunge ... : ok
test-mail-index-transaction-update.c:649: Assert(#1) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#1) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#3) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#3) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#4) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:649: Assert(#5) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#5) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#6) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#6) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#7) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#7) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#8) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#8) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#9) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#9) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#10) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:652: Assert(#10) failed: memcmp(new_hdr.day_first_uid, tests[i].new_day_first_uid, sizeof(uint32_t) * 8) == 0
test-mail-index-transaction-update.c:649: Assert(#11) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
test-mail-index-transaction-update.c:649: Assert(#12) failed: new_hdr.day_stamp == tests[i].new_day_stamp + timezone
mail index 

Re: Replacement for antispam plugin

2017-02-27 Thread Jeff Kletsky

Glad I poked around on the list today!

Thanks to all for the suggestions about integration with dspam.

I'll definitely have to look into this, as I rely on moving messages
to a specific folder with various IMAP clients to retrain dspam on
false positives and negatives.
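(For context: retraining dspam from a misfiled message is normally a matter of re-feeding the message with corrected classification flags. The invocations below are my assumption of the standard dspam CLI, not something from this thread:)

  # message was dragged into the Junk folder: it was a false negative
  dspam --user "$USER" --class=spam --source=error < message.eml
  # message was dragged out of the Junk folder: it was a false positive
  dspam --user "$USER" --class=innocent --source=error < message.eml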

A quick pair of questions:

* Does Dovecot support the IMAP "MOVE" command at this time?

* If so, what is the syntax for "COPY or MOVE" for the _causes variables?


I did see messages from 2011 discussing it, but nothing since.


While the script looks like it could be modified for use with dspam
(with the great suggestions from others on the list), it has the same
problem as "antispam": bulk moves are serialized, tying up the client
until they complete.  I'll probably have to break down and look into
using FreeBSD's auditd to trigger the actions and then de-queue the
successfully processed messages.

Sieve doesn't look like it can handle asynchronous processing, but I'd
certainly be interested if I'm missing something there.  One less thing
to configure and maintain!



Jeff



On 2/12/17 5:52 AM, Aki Tuomi wrote:

On February 10, 2017 at 10:06 AM Aki Tuomi  wrote:


Hi!
Since the antispam plugin is deprecated and we would really prefer
that people not use it, we wrote instructions on how to replace it
with IMAPSieve. Comments and suggestions are most welcome.

https://wiki.dovecot.org/HowTo/AntispamWithSieve
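(For readers skimming the archive, that page boils down to an IMAPSieve configuration of roughly the shape below; mailbox names and script paths are illustrative, and the wiki remains the authoritative version. As far as I can tell, a MOVE into a watched mailbox also fires the COPY cause; there is no separate MOVE cause:)

  plugin {
    sieve_plugins = sieve_imapsieve sieve_extprograms

    # mails moved/copied into the Junk mailbox: report as spam
    imapsieve_mailbox1_name = Junk
    imapsieve_mailbox1_causes = COPY
    imapsieve_mailbox1_before = file:/usr/lib/dovecot/sieve/report-spam.sieve

    # mails moved out of Junk into any other mailbox: report as ham
    imapsieve_mailbox2_name = *
    imapsieve_mailbox2_from = Junk
    imapsieve_mailbox2_causes = COPY
    imapsieve_mailbox2_before = file:/usr/lib/dovecot/sieve/report-ham.sieve

    sieve_pipe_bin_dir = /usr/lib/dovecot/sieve
    sieve_global_extensions = +vnd.dovecot.pipe
  }

  protocol imap {
    mail_plugins = $mail_plugins imap_sieve
  }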

---
Aki Tuomi
Dovecot oy

Hi everyone,

thank you all for your feedback, questions and comments. We have updated the
documentation based on them, including information on how to exclude the
Trash folder in the ham script.
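(An illustration of the Trash exclusion just mentioned: the guard is a few lines at the top of the ham script. This sketch follows the spirit of the wiki page, with the script name assumed:)

  require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

  # which mailbox was the message moved into?
  if environment :matches "imap.mailbox" "*" {
    set "mailbox" "${*}";
  }

  # moving a message to Trash is deletion, not a ham report: do nothing
  if string "${mailbox}" "Trash" {
    stop;
  }

  pipe :copy "report-ham.sh";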

Aki



Fwd: Some mails do not get replicated anymore after memory-exhaust

2017-02-27 Thread Christoph Kluge
Hey guys,

Overall I have working Dovecot replication between two servers running in
the Amazon cloud. Sadly, I got some messages indicating that my server had
run out of memory. After investigating a little further, I realized that
some mails didn't get replicated, though I'm not sure whether this was
related to the memory exhaustion. I was expecting the full sync to catch
them up, but sadly it doesn't.

I'm attaching:
* /etc/dovecot/dovecot.conf from both servers
* one sample of my memory-exhaustion exception
* a maildir directory listing of one mailbox on both servers
* the commands + output of a manual attempt at full replication (commands of that general shape are sketched below)
* grep output for the missing mail inside the Maildir on both servers
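(For reference, a manual full-replication attempt with doveadm generally takes this shape; the user mask and replica address below are placeholders, not the real values from the attachments:)

  doveadm replicator status '*'                        # per-user queue state
  doveadm replicator replicate -f 'user@example.com'   # request a full sync for one user
  doveadm sync -u user@example.com tcp:replica.example.com:12345   # direct dsync against the replica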

Here is the configuration from both servers. It is identical except for the
mail_replica setting. Please note that one server runs Debian 8.7 and the
other Debian 7.11.

==== SERVER A ====
> # dovecot -n
> # 2.2.13: /etc/dovecot/dovecot.conf
> # OS: Linux 3.2.0-4-amd64 x86_64 Debian 8.7
==== SERVER B ====
> # dovecot -n
> # 2.2.13: /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.32-34-pve i686 Debian 7.11
> auth_mechanisms = plain login
> disable_plaintext_auth = no
> doveadm_password = 
> doveadm_port = 12345
> listen = *,[::]
> log_timestamp = "%Y-%m-%d %H:%M:%S "
> mail_max_userip_connections = 100
> mail_plugins = notify replication quota
> mail_privileged_group = vmail
> passdb {
>   args = /etc/dovecot/dovecot-sql.conf
>   driver = sql
> }
> plugin {
>   mail_replica = tcp:*..de
>   quota = dict:user::file:/var/vmail/%d/%n/.quotausage
>   replication_full_sync_interval = 1 hours
>   sieve = /var/vmail/%d/%n/.sieve
>   sieve_max_redirects = 25
> }
> protocols = imap
> replication_max_conns = 2
> service aggregator {
>   fifo_listener replication-notify-fifo {
> mode = 0666
> user = vmail
>   }
>   unix_listener replication-notify {
> mode = 0666
> user = vmail
>   }
> }
> service auth {
>   unix_listener /var/spool/postfix/private/auth {
> group = postfix
> mode = 0660
> user = postfix
>   }
>   unix_listener auth-userdb {
> group = vmail
> mode = 0600
> user = vmail
>   }
>   user = root
> }
> service config {
>   unix_listener config {
> user = vmail
>   }
> }
> service doveadm {
>   inet_listener {
> port = 12345
>   }
>   user = vmail
> }
> service imap-login {
>   client_limit = 1000
>   process_limit = 512
> }
> service lmtp {
>   unix_listener /var/spool/postfix/private/dovecot-lmtp {
> group = postfix
> mode = 0600
> user = postfix
>   }
> }
> service replicator {
>   process_min_avail = 1
>   unix_listener replicator-doveadm {
> mode = 0666
>   }
> }
> ssl_cert = 
> ssl_key = 
> ssl_protocols = !SSLv2 !SSLv3
> userdb {
>   driver = prefetch
> }
> userdb {
>   args = /etc/dovecot/dovecot-sql.conf
>   driver = sql
> }
> protocol imap {
>   mail_plugins = notify replication quota imap_quota
> }
> protocol pop3 {
>   mail_plugins = quota
>   pop3_uidl_format = %08Xu%08Xv
> }
> protocol lda {
>   mail_plugins = notify replication quota sieve
>   postmaster_address = webmaster@localhost
> }
> protocol lmtp {
>   mail_plugins = notify replication quota sieve
>   postmaster_address = webmaster@localhost
> }
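(A note on the ssl_cert / ssl_key lines above: the values were mangled by the archive, almost certainly because Dovecot's read-value-from-file syntax starts with "<" and was eaten as HTML. The usual form, with paths assumed, is:)

  ssl_cert = </etc/ssl/certs/dovecot.pem
  ssl_key = </etc/ssl/private/dovecot.key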


This is the exception which I got several times:

> Feb 26 16:16:39 mx dovecot: replicator: Panic: data stack: Out of memory when allocating 268435496 bytes
> Feb 26 16:16:39 mx dovecot: replicator: Error: Raw backtrace:
>   /usr/lib/dovecot/libdovecot.so.0(+0x6b6fe) [0x7f7ca2b0a6fe] ->
>   /usr/lib/dovecot/libdovecot.so.0(+0x6b7ec) [0x7f7ca2b0a7ec] ->
>   /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f7ca2ac18fb] ->
>   /usr/lib/dovecot/libdovecot.so.0(+0x6977e) [0x7f7ca2b0877e] ->
>   /usr/lib/dovecot/libdovecot.so.0(+0x699db) [0x7f7ca2b089db] ->
>   /usr/lib/dovecot/libdovecot.so.0(+0x82198) [0x7f7ca2b21198] ->
>   /usr/lib/dovecot/libdovecot.so.0(+0x6776d) [0x7f7ca2b0676d] ->
>   /usr/lib/dovecot/libdovecot.so.0(buffer_write+0x6c) [0x7f7ca2b069dc] ->
>   dovecot/replicator(replicator_queue_push+0x14e) [0x7f7ca2fa17ae] ->
>   dovecot/replicator(+0x4f9e) [0x7f7ca2fa0f9e] ->
>   dovecot/replicator(+0x4618) [0x7f7ca2fa0618] ->
>   dovecot/replicator(+0x4805) [0x7f7ca2fa0805] ->
>   /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x3f) [0x7f7ca2b1bd0f] ->
>   /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xf9) [0x7f7ca2b1cd09] ->
>   /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9) [0x7f7ca2b1bd79] ->
>   /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f7ca2b1bdf8] ->
>   /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f7ca2ac6dc3] ->
>   dovecot/replicator(main+0x195) [0x7f7ca2f9f8b5] ->
>   /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f7ca2715b45] ->
>   dovecot/replicator(+0x395d) [0x7f7ca2f9f95d]
> Feb 26 16:16:39 mx dovecot: imap(***.com): Warning: replication(***.com): Sync failure:
> Feb 26 16:16:39 mx dovecot: replicator: Fatal: master: service(replicator): child 24012 killed with signal 6 (core dumps disabled)
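(One observation, not something confirmed in the thread: 268435496 bytes is just over 256 MB, which matches Dovecot's stock default_vsz_limit of 256M. The replicator therefore appears to be hitting its per-service address-space cap rather than exhausting the whole machine. A setting worth experimenting with, as a guess rather than a confirmed fix:)

  service replicator {
    # Raise the address-space cap for the replicator service only.
    # 1G is an illustrative value; the default comes from
    # default_vsz_limit (256M).
    vsz_limit = 1G
  }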


This is the 

Archive key for the Dovecot Automatic Debian Package Archive (Xi) updated

2017-02-27 Thread Stephan Bosch
Hi,

I've updated the archive key for the Xi archive. So, when updating, you
will initially get a key error.

To fix this, either upgrade the debian-dovecot-auto-keyring package
(preferred), or update your key manually from the archive.key file located
in the repository root.
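(For the manual route, something along these lines should work on Debian systems of that era; the repository URL is an assumption about the Xi archive location, so adjust it to match your sources.list:)

  # fetch the new archive key from the repository root and install it
  wget -qO - http://xi.dovecot.fi/debian/archive.key | sudo apt-key add -
  sudo apt-get update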

Regards,


Stephan.


indexer-worker assert in v2.2.28 (fts-lucene)

2017-02-27 Thread Daniel J. Luke
I upgraded to 2.2.28 and started seeing this logged:

indexer-worker(dluke): Panic: file mailbox-list.c: line 1158 (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, path, strlen(root_dir)) == 0)
indexer-worker(dluke): Error: Raw backtrace:
2   libdovecot.0.dylib          0x00010ec63d24 default_fatal_finish + 36
-> 3   libdovecot.0.dylib          0x00010ec64b5b i_internal_fatal_handler + 43
-> 4   libdovecot.0.dylib          0x00010ec64039 i_panic + 169
-> 5   libdovecot-storage.0.dylib  0x00010eac0950 mailbox_list_try_mkdir_root + 1248
-> 6   libdovecot-storage.0.dylib  0x00010eac0a39 mailbox_list_mkdir_root + 25
-> 7   lib21_fts_lucene_plugin.so  0x000110adc949 fts_backend_lucene_update_set_build_key + 73
-> 8   lib20_fts_plugin.so         0x00010ee736cf fts_backend_update_set_build_key + 79
-> 9   lib20_fts_plugin.so         0x00010ee74317 fts_build_mail + 599
-> 10  lib20_fts_plugin.so         0x00010ee7981a fts_mail_precache + 794
-> 11  libdovecot-storage.0.dylib  0x00010eaa71e9 mail_precache + 25
-> 12  indexer-worker              0x00010ea9e503 master_connection_input + 1523
-> 13  libdovecot.0.dylib          0x00010ec78b89 io_loop_call_io + 89
-> 14  libdovecot.0.dylib          0x00010ec7a96d io_loop_handler_run_internal + 269
-> 15  libdovecot.0.dylib          0x00010ec7907f io_loop_handler_run + 303
-> 16  libdovecot.0.dylib          0x00010ec78e58 io_loop_run + 88
-> 17  libdovecot.0.dylib          0x00010ec03458 master_service_run + 24
-> 18  indexer-worker              0x00010ea9ddb4 main + 340
-> 19  libdyld.dylib               0x7fffba7df255 start + 1
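(The assertion itself only states that the directory being created must live under the mailbox list's root. Restated as standalone C, where only the quoted condition comes from Dovecot and the rest is scaffolding with assumed paths:)

  #include <assert.h>
  #include <string.h>

  /* Restatement of the failing check in mailbox_list_try_mkdir_root():
     `path` must have `root_dir` as a prefix. */
  static void check_path_under_root(const char *root_dir, const char *path)
  {
          assert(strncmp(root_dir, path, strlen(root_dir)) == 0);
  }

  int main(void)
  {
          /* passes: the index directory is under the root */
          check_path_under_root("/var/mail/user", "/var/mail/user/lucene-indexes");
          /* a call like check_path_under_root("/var/mail/user", "/var/indexes/user")
             would abort, which is apparently what the fts-lucene path hits here */
          return 0;
  }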

Anyone else? 
-- 
Daniel J. Luke


Re: Director+NFS Experiences

2017-02-27 Thread Tom Sommer

On 2017-02-27 10:40, Sami Ketola wrote:

On 24 Feb 2017, at 21.28, Mark Moseley  wrote:
Attached. No claims are made on the quality of my code :)




With recent Dovecots you probably should not use set_host_weight(server, '0')
to mark a backend down; instead, use the director commands HOST-DOWN and
HOST-UP in combination with HOST-FLUSH.


This is already the case in the latest version of Poolmon.
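(For reference, the doveadm counterparts of those director protocol commands look like this on recent 2.2 releases; this is my reading of doveadm-director(1), so verify against your version:)

  doveadm director down 10.2.3.4    # mark the backend down
  doveadm director flush 10.2.3.4   # reassign its users to other backends
  doveadm director up 10.2.3.4      # bring it back once healthy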


Re: Director+NFS Experiences

2017-02-27 Thread Sami Ketola

> On 24 Feb 2017, at 21.28, Mark Moseley  wrote:
> Attached. No claims are made on the quality of my code :)
> 


With recent Dovecots you probably should not use set_host_weight(server, '0')
to mark a backend down; instead, use the director commands HOST-DOWN and
HOST-UP in combination with HOST-FLUSH.

Sami