Re: [Dovecot-news] v2.2.29.rc1 released

2017-04-06 Thread Daniel J. Luke
On Apr 6, 2017, at 1:33 PM, Timo Sirainen  wrote:
> Planning to release v2.2.29 on Monday. Please find and report any bugs before 
> that.

I'm still seeing the assert that started showing up for me with 2.2.28 
(https://www.dovecot.org/pipermail/dovecot/2017-February/107250.html)

Below I generate it using doveadm with dovecot 2.2.29rc1 (output slightly 
cleaned up so the backtrace is easier to read)

% sudo doveadm index -A \*
doveadm(dluke): Panic: file mailbox-list.c: line 1159 (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, path, strlen(root_dir)) == 0)
doveadm(dluke): Error: Raw backtrace:
2   libdovecot.0.dylib          0x00010cfed284 default_fatal_finish + 36
-> 3   libdovecot.0.dylib          0x00010cfed05d default_fatal_handler + 61
-> 4   libdovecot.0.dylib          0x00010cfed5a9 i_panic + 169
-> 5   libdovecot-storage.0.dylib  0x00010ce49677 mailbox_list_try_mkdir_root + 1207
-> 6   libdovecot-storage.0.dylib  0x00010ce49759 mailbox_list_mkdir_root + 25
-> 7   lib21_fts_lucene_plugin.so  0x00010ee6ab39 fts_backend_lucene_update_set_build_key + 73
-> 8   lib20_fts_plugin.so         0x00010d2017cf fts_backend_update_set_build_key + 79
-> 9   lib20_fts_plugin.so         0x00010d2027a7 fts_build_mail + 1479
-> 10  lib20_fts_plugin.so         0x00010d20795a fts_mail_precache + 794
-> 11  libdovecot-storage.0.dylib  0x00010ce2ff59 mail_precache + 25
-> 12  doveadm                     0x00010cd319df cmd_index_run + 1007
-> 13  doveadm                     0x00010cd2c0e5 doveadm_mail_next_user + 421
-> 14  doveadm                     0x00010cd2d6f5 doveadm_mail_cmd_exec + 645
-> 15  doveadm                     0x00010cd2d32d doveadm_cmd_ver2_to_mail_cmd_wrapper + 1629
-> 16  doveadm                     0x00010cd3c696 doveadm_cmd_run_ver2 + 1046
-> 17  doveadm                     0x00010cd3c257 doveadm_cmd_try_run_ver2 + 55
-> 18  doveadm                     0x00010cd3ef79 main + 1001
-> 19  libdyld.dylib               0x7fff93655235 start + 1
Abort

-- 
Daniel J. Luke


sieve does not seem to be working

2017-04-06 Thread Robert Moskowitz

my local.conf has:

#90-sieve.conf
plugin {
  sieve_before = /home/sieve/globalfilter.sieve
}

and cat /home/sieve/globalfilter.sieve

require ["fileinto","mailbox"];
if anyof
  (
header :contains "X-Spam-Flag" "YES",
header :contains "subject" "***SPAM***"
  )
{
  fileinto :create "Spam";
}

There IS a globalfilter.svbin

when I tried:

sendmail -i test...@test.htt-consult.com < sample-spam-GTUBE-junk.txt

amavis is flagging it as ***Spam***

but it stays in inbox.  So I tried:

sieve-test -e -l /home/vmail/test.htt-consult.com/testit3/ 
/home/sieve/globalfilter.sieve 
/home/vmail/test.htt-consult.com/testit3/cur/1491512409.M371278P6513.z9m9z.test.htt-consult.com\,S\=1823\,W\=1868\:2\,

info: msgid=: stored mail into mailbox 'Spam'.
sieve-test(root): Info: final result: success

And it DID get copied to Spam, not moved.  I am now seeing it in both inbox 
and Spam.  Of course, ownership on the message in Spam was wrong 
(root:root instead of vmail:mail), but I fixed that.


So two questions, probably linked:

Why did sieve not work?  Is the subject test case-sensitive?

If it is case-sensitive, why did the sieve-test work?
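For what it's worth, Sieve's :contains test uses the case-insensitive 
i;ascii-casemap comparator by default, so "***SPAM***" should also match a 
"***Spam***" subject; a case-sensitive match would need an explicit comparator. 
A hedged sketch of the case-sensitive variant (not what the posted script does):

```
require ["fileinto", "mailbox"];
# i;octet compares raw octets, i.e. case-sensitively; the posted script's
# default comparator (i;ascii-casemap) ignores ASCII case
if header :contains :comparator "i;octet" "subject" "***SPAM***" {
  fileinto :create "Spam";
}
```

So case sensitivity is unlikely to explain the difference between delivery and 
sieve-test; a stale or unreadable globalfilter.svbin next to the source script 
is a more likely suspect.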

thanks


Solved? - Re: Spam instead of Junk folder

2017-04-06 Thread Robert Moskowitz
I fixed a value in Postfixadmin and it looks kind of like the folders 
are being created properly.


When I log directly into dovecot I get:

c list "" *
* LIST (\HasNoChildren \Sent) "." Sent
* LIST (\HasNoChildren \Trash) "." Trash
* LIST (\HasNoChildren \Drafts) "." Drafts
* LIST (\HasNoChildren) "." Spam
* LIST (\HasNoChildren) "." INBOX
c OK List completed.

But why doesn't Spam have a \Something attribute like the mailboxes above it?

thanks

On 04/06/2017 03:18 PM, Robert Moskowitz wrote:
Traditionally I have used 'Spam' as the folder name for all those 
emails that get tagged as, well, Spam.


But it seems that the standard is now 'Junk' as from 15-mailboxes.conf

# Space separated list of IMAP SPECIAL-USE attributes as specified by
# RFC 6154: \All \Archive \Drafts \Flagged \Junk \Sent \Trash
#special_use =


If I have in local.conf

# 15-mailboxes.conf

namespace inbox {
  mailbox Spam {
special_use = \Junk
  }
}

This would auto-create Spam, but:

Would Junk (and all the others specified in 15-mailboxes.conf) still 
get created?


What actually controls which folders get created?
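For what it's worth, creation is controlled by the mailbox { auto } setting: 
auto = create makes the folder at first access, auto = subscribe additionally 
subscribes to it, and entries without an auto setting only get their 
SPECIAL-USE attribute attached if the folder already exists. A hedged sketch 
building on the snippet above:

```
# local.conf -- illustrative values; auto = no/create/subscribe
namespace inbox {
  mailbox Spam {
    auto = subscribe      # create the folder and subscribe the user to it
    special_use = \Junk
  }
}
```

With this, Spam is auto-created; the mailboxes defined in 
conf.d/15-mailboxes.conf are still defined, but only those with their own 
auto setting get created.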



Maybe it is an sql config error?

2017-04-06 Thread Robert Moskowitz

I am looking at these messages in maillog:

Apr  6 15:46:58 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed to database (postfix): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 25 seconds before retry
Apr  6 15:46:58 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed to database (postfix): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 1 seconds before retry
Apr  6 15:46:58 z9m9z dovecot: dict: Error: dict sql lookup failed: Not connected to database
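A detail worth noting about these log lines: the (13) is the OS errno, and 
errno 13 is EACCES (permission denied), which points at socket file 
permissions or SELinux rather than at MySQL being slow. A quick way to decode 
it (an illustration assuming python3 is installed; not a Dovecot tool):

```shell
# Decode errno 13 from the "mysql.sock' (13)" part of the error
python3 -c 'import errno, os; print(errno.errorcode[13], "-", os.strerror(13))'
# prints: EACCES - Permission denied
```

On an SELinux-enabled system, `ls -lZ /var/lib/mysql/mysql.sock` and the audit 
log would be the next places to look.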


and wondering if my config is wrong.  Here is what I have:

in local.conf:

#dovecot.conf
protocols = imap pop3 lmtp sieve
dict {
  sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}

#10-auth.conf
!include conf.d/auth-sql.conf.ext

#auth-sql.conf.ext
userdb {
  driver = prefetch
}

#90-quota.conf
plugin {
  quota = dict:user::proxy::sqlquota
  trash = /etc/dovecot/dovecot-trash.conf.ext
}

dovecot-sql.conf.ext:

driver = mysql
connect = host=localhost dbname=postfix user=postfix password=mailpassword
default_pass_scheme = MD5-CRYPT

# following should all be on one line.
password_query = SELECT username as user, password, concat('/home/vmail/', maildir) as userdb_home, concat('maildir:/home/vmail/', maildir) as userdb_mail, 101 as userdb_uid, 12 as userdb_gid FROM mailbox WHERE username = '%u' AND active = '1'

# following should all be on one line
user_query = SELECT concat('/home/vmail/', maildir) as home, concat('maildir:/home/vmail/', maildir) as mail, 101 AS uid, 12 AS gid, CONCAT('*:messages=3:bytes=', quota) as quota_rule FROM mailbox WHERE username = '%u' AND active = '1'


dovecot-dict-sql.conf.ext:

connect = host=localhost dbname=postfix user=postfix password=mailserv
map {
  pattern = priv/quota/storage
  table = quota2
  username_field = username
  value_field = bytes
}
map {
  pattern = priv/quota/messages
  table = quota2
  username_field = username
  value_field = messages
}

Users ARE getting authenticated:

# openssl s_client -connect z9m9z.test.htt-consult.com:993
CONNECTED(0003)

 Cert stuff cut

---
SSL handshake has read 1676 bytes and written 405 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384

 Cert stuff cut

---
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE 
IDLE AUTH=PLAIN] Dovecot ready.

a login fa...@test.htt-consult.com faxitpaaswd
a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE 
IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS 
THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN 
NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH 
ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS SPECIAL-USE BINARY 
MOVE QUOTA] Logged in

b list "" *
* LIST (\HasNoChildren \Sent) "." Sent
* LIST (\HasNoChildren \Trash) "." Trash
* LIST (\HasNoChildren \Drafts) "." Drafts
* LIST (\HasNoChildren) "." Spam
* LIST (\HasNoChildren) "." INBOX
b OK List completed.

==

So perhaps it is with the quota sql on sending/recv mail?

thanks


Spam instead of Junk folder

2017-04-06 Thread Robert Moskowitz
Traditionally I have used 'Spam' as the folder name for all those emails 
that get tagged as, well, Spam.


But it seems that the standard is now 'Junk' as from 15-mailboxes.conf

# Space separated list of IMAP SPECIAL-USE attributes as specified by
# RFC 6154: \All \Archive \Drafts \Flagged \Junk \Sent \Trash
#special_use =


If I have in local.conf

# 15-mailboxes.conf

namespace inbox {
  mailbox Spam {
special_use = \Junk
  }
}

This would auto-create Spam, but:

Would Junk (and all the others specified in 15-mailboxes.conf) still get 
created?


What actually controls which folders get created?


Re: IMAP hibernate and scalability in general

2017-04-06 Thread Timo Sirainen
On 6 Apr 2017, at 21.14, Mark Moseley  wrote:
> 
>> 
>> imap-hibernate processes are similar to imap-login processes in that they
>> should be able to handle thousands or even tens of thousands of connections
>> per process.
>> 
> 
> TL;DR: In a director/proxy setup, what's a good client_limit for
> imap-login/pop3-login?

You should have the same number of imap-login processes as the number of CPU 
cores, so they can use all the available CPU without doing unnecessary context 
switches. The client_limit is then large enough to handle all the concurrent 
connections you need, but not so large that it would bring down the whole 
system if it actually happens.
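Timo's sizing advice, expressed as a config fragment (the numbers are 
illustrative assumptions for a 4-core proxy, not values from this thread):

```
# conf.d/10-master.conf -- hedged sketch of "high-performance" login settings
service imap-login {
  service_count = 0        # reuse each process for many connections
  process_min_avail = 4    # roughly one process per CPU core
  process_limit = 4
  client_limit = 10000     # sized to expected concurrent connections
}
```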

> Would the same apply for imap-login when it's being used in proxy mode? I'm
> moving us to a director setup (cf. my other email about director rings
> getting wedged from a couple days ago) and, again, for the sake of starting
> conservatively, I've got imap-login set to a client limit of 20, since I
> figure that proxying is a lot more work than just doing IMAP logins. I'm
> doing auth to mysql at both stages (at the proxy level and at the backend
> level).

Proxying isn't doing any disk IO or any other blocking operations. There's no 
benefit to having more processes. The only theoretical advantage would be if 
some client could trigger a lot of CPU work and cause delays to handling other 
clients, but I don't think that's possible (unless somehow via OpenSSL but I'd 
guess that would be a bug in it then).

> Should I be able to handle a much higher client_limit for imap-login and
> pop3-login than 20?

Yeah.


Re: IMAP hibernate and scalability in general

2017-04-06 Thread Mark Moseley
On Thu, Apr 6, 2017 at 3:10 AM, Timo Sirainen  wrote:

> On 6 Apr 2017, at 9.56, Christian Balzer  wrote:
> >
> >> For no particular reason besides wanting to start conservatively, we've
> got
> >> client_limit set to 50 on the hibernate procs (with 1100 total
> hibernated
> >> connections on the box I'm looking at). At only a little over a meg
> each,
> >> I'm fine with those extra processes.
> >>
> > Yeah, but 50 would be a tad too conservative for our purposes here.
> > I'll keep an eye on it and see how it goes, first checkpoint would be at
> > 1k hibernated sessions. ^_^
>
> imap-hibernate processes are similar to imap-login processes in that they
> should be able to handle thousands or even tens of thousands of connections
> per process.
>

TL;DR: In a director/proxy setup, what's a good client_limit for
imap-login/pop3-login?

Would the same apply for imap-login when it's being used in proxy mode? I'm
moving us to a director setup (cf. my other email about director rings
getting wedged from a couple days ago) and, again, for the sake of starting
conservatively, I've got imap-login set to a client limit of 20, since I
figure that proxying is a lot more work than just doing IMAP logins. I'm
doing auth to mysql at both stages (at the proxy level and at the backend
level).

On a sample director box, I've got 1 imap connections, varying from
50mbit/sec to the backends up to 200mbit/sec. About a third of the
connections are TLS, if that makes a diff. That's pretty normal from what
I've seen. The director servers are usually 90-95% idle.

Should I be able to handle a much higher client_limit for imap-login and
pop3-login than 20?


Re: Dovecot impatient with mysql?

2017-04-06 Thread Timo Sirainen
On 6 Apr 2017, at 20.37, Robert Moskowitz  wrote:
> 
> Oh, that time is an exponential backoff on mysql not responding.
> 
> So where is the time dovecot waits before backing off configured?

Looks like these were missing from the example dovecot-sql.conf.ext:

# connect_timeout- Connect timeout in seconds (default: 5)
# read_timeout   - Read timeout in seconds (default: 30)
# write_timeout  - Write timeout in seconds (default: 30)

So e.g.:

connect = ... connect_timeout=30
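Spelled out against the connect line posted in the dict/quota thread 
(credentials as posted there; the timeout values here are illustrative):

```
# dovecot-sql.conf.ext -- hedged example
connect = host=localhost dbname=postfix user=postfix password=mailpassword connect_timeout=30 read_timeout=60 write_timeout=60
```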


Re: Dovecot impatient with mysql?

2017-04-06 Thread Robert Moskowitz

Oh, that time is an exponential backoff on mysql not responding.

So where is the time dovecot waits before backing off configured?

On 04/06/2017 01:04 PM, Robert Moskowitz wrote:



On 04/06/2017 12:50 PM, George Kontostanos wrote:
On Thu, Apr 6, 2017 at 7:10 PM, Robert Moskowitz 
 wrote:

It seems dovecot is impatient with connecting with mysql, as I see in
maillog entries like:

Apr  6 11:48:30 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed
to database (postfix): Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (13) - waiting for 5 seconds before retry
Apr  6 11:48:35 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed
to database (postfix): Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (13) - waiting for 25 seconds before retry

I suspect it does connect eventually.  This is a test system with only 1GB
of memory and free reports:

              total        used        free      shared  buff/cache   available
Mem:        1025484      696344       24556       21528      304584      251552
Swap:        524284       92168      432116


The production box has 2GB, so if the problem is mysql swapping out, that
will be 'fixed'; if it is processor, well, this is an ARMv7 dual core, as is
the production box.  I am considering buying the new quad core.

Is there anything I can do to get dovecot more patient with mysql, or just
ignore these messages?

thank you

I really don't understand how you reached the conclusion that
dovecot is impatient.


Well, it waits a varying amount of time before reporting the 
connection failed.  Not a fixed amount of time.


Why mysql takes so long to connect via a socket is a separate question, but I 
suspect it is the low memory and dual-core limitations.


Maybe the question is really why mysql is taking so long to 
respond to the connection.




[Dovecot-news] v2.2.29.rc1 released

2017-04-06 Thread Timo Sirainen
http://dovecot.org/releases/2.2/rc/dovecot-2.2.29.rc1.tar.gz
http://dovecot.org/releases/2.2/rc/dovecot-2.2.29.rc1.tar.gz.sig

Planning to release v2.2.29 on Monday. Please find and report any bugs before 
that.

 * When Dovecot encounters an internal error, it logs the real error and
   usually logs another line saying what function failed. Previously the
   second log line's error message was a rather uninformative "Internal
   error occurred. Refer to server log for more information." Now the
   real error message is duplicated in this second log line.
 * lmtp: If a delivery has multiple recipients, run autoexpunging only
   for the last recipient. This avoids a problem where a long
   autoexpunge run causes LMTP client to timeout between the DATA
   replies, resulting in duplicate mail deliveries.
 * config: Don't stop the process due to idling. Otherwise the
   configuration is reloaded when the process restarts.
 * mail_log plugin: Differentiate autoexpunges from regular expunges
 * imapc: Use LOGOUT to cleanly disconnect from server.
 * lib-http: Internal status codes (>9000) are no longer visible in logs
 * director: Log vhost count changes and HOST-UP/DOWN

 + quota: Add plugin { quota_max_mail_size } setting to limit the
   maximum individual mail size that can be saved.
 + imapc: Add imapc_features=delay-login. If set, connecting to the
   remote IMAP server isn't done until it's necessary.
 + imapc: Add imapc_connection_retry_count and
   imapc_connection_retry_interval settings.
 + imap, pop3, indexer-worker: Add (deinit) to process title before
   autoexpunging runs.
 + Added %{encrypt} and %{decrypt} variables
 + imap/pop3 proxy: Log proxy state in errors as human-readable string.
 + imap/pop3-login: All forward_* extra fields returned by passdb are
   sent to the next hop when proxying using ID/XCLIENT commands. On the
   receiving side these fields are imported and sent to auth process
   where they're accessible via %{passdb:forward_*}. This is done only
   if the sending IP address matches login_trusted_networks.
 + imap-login: If imap_id_retain=yes, send the IMAP ID string to
   auth process. %{client_id} expands to it in auth process. The ID
   string is also sent to the next hop when proxying.
 - fts-tika: Fixed crash when parsing attachment without
   Content-Disposition header. Broken by 2.2.28.
 - trash plugin was broken in 2.2.28
 - auth: When passdb/userdb lookups were done via auth-workers, too much
   data was added to auth cache. This could have resulted in wrong
   replies when using multiple passdbs/userdbs.
 - auth: passdb { skip & mechanisms } were ignored for the first passdb
 - oauth2: Various fixes, including fixes to crashes
 - dsync: Large Sieve scripts (or other large metadata) weren't always
   synced.
 - Index rebuild (e.g. doveadm force-resync) set all mails as \Recent
 - imap-hibernate: %{userdb:*} wasn't expanded in mail_log_prefix
 - doveadm: Exit codes weren't preserved when proxying commands via
   doveadm-server. Almost all errors used exit code 75 (tempfail).
 - ACLs weren't applied to not-yet-existing autocreated mailboxes.
 - Fixed a potential crash when parsing a broken message header.
 - cassandra: Fallback consistency settings weren't working correctly.
 - doveadm director status : "Initial config" was always empty
 - imapc: Various reconnection fixes.

___
Dovecot-news mailing list
Dovecot-news@dovecot.org
https://dovecot.org/mailman/listinfo/dovecot-news


v2.2.29.rc1 released

2017-04-06 Thread Timo Sirainen
http://dovecot.org/releases/2.2/rc/dovecot-2.2.29.rc1.tar.gz
http://dovecot.org/releases/2.2/rc/dovecot-2.2.29.rc1.tar.gz.sig

Planning to release v2.2.29 on Monday. Please find and report any bugs before 
that.

 * When Dovecot encounters an internal error, it logs the real error and
   usually logs another line saying what function failed. Previously the
   second log line's error message was a rather uninformative "Internal
   error occurred. Refer to server log for more information." Now the
   real error message is duplicated in this second log line.
 * lmtp: If a delivery has multiple recipients, run autoexpunging only
   for the last recipient. This avoids a problem where a long
   autoexpunge run causes LMTP client to timeout between the DATA
   replies, resulting in duplicate mail deliveries.
 * config: Don't stop the process due to idling. Otherwise the
   configuration is reloaded when the process restarts.
 * mail_log plugin: Differentiate autoexpunges from regular expunges
 * imapc: Use LOGOUT to cleanly disconnect from server.
 * lib-http: Internal status codes (>9000) are no longer visible in logs
 * director: Log vhost count changes and HOST-UP/DOWN

 + quota: Add plugin { quota_max_mail_size } setting to limit the
   maximum individual mail size that can be saved.
 + imapc: Add imapc_features=delay-login. If set, connecting to the
   remote IMAP server isn't done until it's necessary.
 + imapc: Add imapc_connection_retry_count and
   imapc_connection_retry_interval settings.
 + imap, pop3, indexer-worker: Add (deinit) to process title before
   autoexpunging runs.
 + Added %{encrypt} and %{decrypt} variables
 + imap/pop3 proxy: Log proxy state in errors as human-readable string.
 + imap/pop3-login: All forward_* extra fields returned by passdb are
   sent to the next hop when proxying using ID/XCLIENT commands. On the
   receiving side these fields are imported and sent to auth process
   where they're accessible via %{passdb:forward_*}. This is done only
   if the sending IP address matches login_trusted_networks.
 + imap-login: If imap_id_retain=yes, send the IMAP ID string to
   auth process. %{client_id} expands to it in auth process. The ID
   string is also sent to the next hop when proxying.
 - fts-tika: Fixed crash when parsing attachment without
   Content-Disposition header. Broken by 2.2.28.
 - trash plugin was broken in 2.2.28
 - auth: When passdb/userdb lookups were done via auth-workers, too much
   data was added to auth cache. This could have resulted in wrong
   replies when using multiple passdbs/userdbs.
 - auth: passdb { skip & mechanisms } were ignored for the first passdb
 - oauth2: Various fixes, including fixes to crashes
 - dsync: Large Sieve scripts (or other large metadata) weren't always
   synced.
 - Index rebuild (e.g. doveadm force-resync) set all mails as \Recent
 - imap-hibernate: %{userdb:*} wasn't expanded in mail_log_prefix
 - doveadm: Exit codes weren't preserved when proxying commands via
   doveadm-server. Almost all errors used exit code 75 (tempfail).
 - ACLs weren't applied to not-yet-existing autocreated mailboxes.
 - Fixed a potential crash when parsing a broken message header.
 - cassandra: Fallback consistency settings weren't working correctly.
 - doveadm director status : "Initial config" was always empty
 - imapc: Various reconnection fixes.


Re: Dovecot impatient with mysql?

2017-04-06 Thread Robert Moskowitz



On 04/06/2017 12:50 PM, George Kontostanos wrote:

On Thu, Apr 6, 2017 at 7:10 PM, Robert Moskowitz  wrote:

It seems dovecot is impatient with connecting with mysql, as I see in
maillog entries like:

Apr  6 11:48:30 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed
to database (postfix): Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (13) - waiting for 5 seconds before retry
Apr  6 11:48:35 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed
to database (postfix): Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (13) - waiting for 25 seconds before retry

I suspect it does connect eventually.  This is a test system with only 1GB
of memory and free reports:

              total        used        free      shared  buff/cache   available
Mem:        1025484      696344       24556       21528      304584      251552
Swap:        524284       92168      432116


The production box has 2GB, so if the problem is mysql swapping out, that
will be 'fixed'; if it is processor, well, this is an ARMv7 dual core, as is
the production box.  I am considering buying the new quad core.

Is there anything I can do to get dovecot more patient with mysql, or just
ignore these messages?

thank you

I really don't understand how you reached the conclusion that
dovecot is impatient.


Well, it waits a varying amount of time before reporting the connection 
failed.  Not a fixed amount of time.


Why mysql takes so long to connect via a socket is a separate question, but I 
suspect it is the low memory and dual-core limitations.


Maybe the question is really why mysql is taking so long to 
respond to the connection.


Re: Dovecot impatient with mysql?

2017-04-06 Thread George Kontostanos
On Thu, Apr 6, 2017 at 7:10 PM, Robert Moskowitz  wrote:
> It seems dovecot is impatient with connecting with mysql, as I see in
> maillog entries like:
>
> Apr  6 11:48:30 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed
> to database (postfix): Can't connect to local MySQL server through socket
> '/var/lib/mysql/mysql.sock' (13) - waiting for 5 seconds before retry
> Apr  6 11:48:35 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed
> to database (postfix): Can't connect to local MySQL server through socket
> '/var/lib/mysql/mysql.sock' (13) - waiting for 25 seconds before retry
>
> I suspect it does connect eventually.  This is a test system with only 1GB
> of memory and free reports:
>
>               total        used        free      shared  buff/cache   available
> Mem:        1025484      696344       24556       21528      304584      251552
> Swap:        524284       92168      432116
>
>
> The production box has 2GB, so if the problem is mysql swapping out, that
> will be 'fixed'; if it is processor, well, this is an ARMv7 dual core, as is
> the production box.  I am considering buying the new quad core.
>
> Is there anything I can do to get dovecot more patient with mysql, or just
> ignore these messages?
>
> thank you

I really don't understand how you reached the conclusion that
dovecot is impatient.

-- 
George Kontostanos
---


Dovecot impatient with mysql?

2017-04-06 Thread Robert Moskowitz
It seems dovecot is impatient when connecting to mysql, as I see 
maillog entries like:


Apr  6 11:48:30 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed to database (postfix): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 5 seconds before retry
Apr  6 11:48:35 z9m9z dovecot: dict: Error: mysql(localhost): Connect failed to database (postfix): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 25 seconds before retry


I suspect it does connect eventually.  This is a test system with only 
1GB of memory and free reports:


              total        used        free      shared  buff/cache   available
Mem:        1025484      696344       24556       21528      304584      251552
Swap:        524284       92168      432116


The production box has 2GB, so if the problem is mysql swapping out, 
that will be 'fixed'; if it is processor, well, this is an ARMv7 dual 
core, as is the production box.  I am considering buying the new quad core.


Is there anything I can do to get dovecot more patient with mysql, or 
just ignore these messages?


thank you


Re: [Bug] FTS double escaping

2017-04-06 Thread azurit

Citát Aki Tuomi :


On 06.04.2017 14:58, azu...@pobox.sk wrote:

Hi,

i'm trying to resolve a few problems with indexing 'From' headers using
FTS/Solr. I was tcpdumping the communication between Dovecot and
Jetty/Solr and noticed that 'From' headers which also include the
sender's name are double escaped. This is what Dovecot was sending to
Solr:

Name Surname &amp;lt;t...@example.com&amp;gt;

As you can see, characters < and > were escaped to &lt; and &gt;, which
were, again, escaped to &amp;lt; and &amp;gt;. This causes problems
when trying to index the whole e-mail address, as Solr sees it as
't...@example.com'.

I spent hours trying to figure out why I was able to search in all parts
of e-mail addresses but searching for a full and exact e-mail address
was successful ONLY for messages which don't include the sender's name
in the 'From' header. Finally, after I found this bug, this fixed all
search problems:




I hope that, at least, this bug, reported by me, will be fixed. Thank
you.

azur


Hi!

Which dovecot version was this?

Aki




Sorry, forgot to mention it, 2.2.27, Debian Jessie (backports), 64bit.


Re: [Bug] FTS double escaping

2017-04-06 Thread Aki Tuomi


On 06.04.2017 14:58, azu...@pobox.sk wrote:
> Hi,
>
> i'm trying to resolve a few problems with indexing 'From' headers using
> FTS/Solr. I was tcpdumping the communication between Dovecot and
> Jetty/Solr and noticed that 'From' headers which also include the
> sender's name are double escaped. This is what Dovecot was sending to
> Solr:
>
> Name Surname &amp;lt;t...@example.com&amp;gt;
>
> As you can see, characters < and > were escaped to &lt; and &gt;, which
> were, again, escaped to &amp;lt; and &amp;gt;. This causes problems
> when trying to index the whole e-mail address, as Solr sees it as
> 't...@example.com'.
>
> I spent hours trying to figure out why I was able to search in all parts
> of e-mail addresses but searching for a full and exact e-mail address
> was successful ONLY for messages which don't include the sender's name
> in the 'From' header. Finally, after I found this bug, this fixed all
> search problems:
>
>  replacement=""/>
>  replacement=""/>
>
> I hope that, at least, this bug, reported by me, will be fixed. Thank
> you.
>
> azur

Hi!

Which dovecot version was this?

Aki


[Bug] FTS double escaping

2017-04-06 Thread azurit

Hi,

i'm trying to resolve a few problems with indexing 'From' headers using 
FTS/Solr. I was tcpdumping the communication between Dovecot and 
Jetty/Solr and noticed that 'From' headers which also include the 
sender's name are double escaped. This is what Dovecot was sending to 
Solr:


Name Surname &amp;lt;t...@example.com&amp;gt;


As you can see, characters < and > were escaped to &lt; and &gt;, which 
were, again, escaped to &amp;lt; and &amp;gt;. This causes problems 
when trying to index the whole e-mail address, as Solr sees it as 
't...@example.com'.


I spent hours trying to figure out why I was able to search in all parts 
of e-mail addresses but searching for a full and exact e-mail address 
was successful ONLY for messages which don't include the sender's name 
in the 'From' header. Finally, after I found this bug, this fixed all 
search problems:


replacement=""/>
replacement=""/>
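The double escaping is easy to reproduce outside Dovecot; here sed stands in 
for the XML-escaping code (an illustration of the failure mode, not the actual 
Dovecot/Solr code path):

```shell
# One escape pass turns < and > into &lt; and &gt;; a second pass then
# escapes the '&' of those entities, yielding the doubled form on the wire
printf '%s\n' '<user@example.com>' \
  | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g' \
  | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g'
# prints: &amp;lt;user@example.com&amp;gt;
```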


I hope that, at least, this bug, reported by me, will be fixed. Thank you.

azur


Re: abrt reported "imap killed by SIGBUS"

2017-04-06 Thread Aki Tuomi
Hi!

Responses in middle.

Aki

On 06.04.2017 06:05, Hongying Liu wrote:
> Hi sbr-services,
>
> Could you give me some idea?
>
> ### abrt reported the error as below.
> [root@cupop4 log]# abrt-cli list --since 1488267244 id
> ad716dbfd3a68bbe0f055e32ebfe562f4f75df43
> reason: imap killed by SIGBUS
> time:   Sun Mar 19  2017 10:58:27 AM JST
> cmdline:dovecot/imap
> package:dovecot-2.2.10-7.el7
> uid:80180 (acaa2325)
> count:  2
> Directory:  /var/spool/abrt/ccpp-2017-03-19-10:58:27-4904
> Run 'abrt-cli report /var/spool/abrt/ccpp-2017-03-19-10:58:27-4904'
>
>
> ### /var/log/messages
> Mar 19 10:58:27 cupop4 abrt-hook-ccpp: Process 4904 (imap) of user 80180
> killed by SIGBUS - dumping core
>
> And, there are lots of ldap error in /var/log/messages. Dovecot uses the
> ldap as userdb and passwddb.
> Is the ldap error related to imap crash?
>
> Mar 19 10:58:21 cupop4 nslcd[1534]: [a378de] 
> ldap_result() failed: Invalid DN syntax: Invalid DN

This looks like an invalid DN syntax.

>
> ### /var/log/dovecot
>
> Mar 19 10:58:27 cupop4 dovecot: imap(acaa2325): Fatal: master:
> service(imap): child 4904 killed with signal 7 (core dumped)
>
Can you do p cache, p cache->hdr, p cache->hdr->file_seq and p reset_id

> ### gdb
>
> Core was generated by `dovecot/imap'.
> Program terminated with signal 7, Bus error.

Re: IMAP hibernate and scalability in general

2017-04-06 Thread Christian Balzer
On Thu, 6 Apr 2017 13:10:03 +0300 Timo Sirainen wrote:

> On 6 Apr 2017, at 9.56, Christian Balzer  wrote:
> >   
> >> For no particular reason besides wanting to start conservatively, we've got
> >> client_limit set to 50 on the hibernate procs (with 1100 total hibernated
> >> connections on the box I'm looking at). At only a little over a meg each,
> >> I'm fine with those extra processes.
> >>   
> > Yeah, but 50 would be a tad too conservative for our purposes here.
> > I'll keep an eye on it and see how it goes, first checkpoint would be at
> > 1k hibernated sessions. ^_^  
> 
> imap-hibernate processes are similar to imap-login processes in that they 
> should be able to handle thousands or even tens of thousands of connections 
> per process.
> 
I assume the config processes are in the same category; they are happy
with 16k clients while using 169MB each, without any issues. ^.^

Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: [BUG] client state / Message count mismatch with imap-hibernate and mixed POP3/IMAP access

2017-04-06 Thread Christian Balzer

Hello Aki, Timo,

According to git this fix should be in 2.2.27, which I'm running, so I
guess either this isn't it or something else is missing.
See:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=859700

Regards,

Christian

On Thu, 6 Apr 2017 15:37:33 +0900 Christian Balzer wrote:

> Hello,
> 
> On Thu, 6 Apr 2017 09:24:23 +0300 Aki Tuomi wrote:
> 
> > On 06.04.2017 07:02, Christian Balzer wrote:  
> > >
> > > Hello,
> > >
> > > this is on Debian Jessie box, dovecot 2.2.27 from backports,
> > > imap-hibernate obviously enabled. 
> > >
> > > I've been seeing a few of these since starting this cluster (see previous
> > > mail), they all follow the same pattern, a user who accesses their mailbox
> > > with both POP3 and IMAP deletes mails with POP3 and the IMAP
> > > (imap-hibernate really) is getting confused and upset about this:
> > >
> > > ---
> > >
> > > Apr  6 09:55:49 mbx11 dovecot: imap-login: Login: 
> > > user=, method=PLAIN, rip=xxx.xxx.x.46, 
> > > lip=xxx.xxx.x.113, mpid=951561, secured, session=<2jBV+HRM1Pbc9w8u>
> > > Apr  6 10:01:06 mbx11 dovecot: pop3-login: Login: 
> > > user=, method=PLAIN, rip=xxx.xxx.x.46, 
> > > lip=xxx.xxx.x.41, mpid=35447, secured, session=
> > > Apr  6 10:01:07 mbx11 dovecot: pop3(redac...@gol.com): Disconnected: 
> > > Logged out top=0/0, retr=0/0, del=1/1, size=20674 
> > > session=
> > > Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Error: 
> > > imap-master: Failed to import client state: Message count mismatch after 
> > > handling expunges (0 != 1)
> > > Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Client state 
> > > initialization failed in=0 out=0 head=<0> del=<0> exp=<0> trash=<0> 
> > > session=<2jBV+HRM1Pbc9w8u>
> > > Apr  6 10:01:15 mbx11 dovecot: imap-login: Login: 
> > > user=, method=PLAIN, rip=xxx.xxx.x.46, 
> > > lip=xxx.xxx.x.113, mpid=993303, secured, session=<6QC6C3VMF/jc9w8u>
> > > Apr  6 10:07:42 mbx11 dovecot: imap-hibernate(redac...@gol.com): 
> > > Connection closed in=85 out=1066 head=<0> del=<0> exp=<0> trash=<0> 
> > > session=<6QC6C3VMF/jc9w8u>
> > >
> > > ---
> > >
> > > Didn't see these errors before introducing imap-hibernate, but then again
> > > this _could_ be something introduced between 2.2.13 (previous generation
> > > of servers) and .27, but I highly doubt it.
> > >
> > > My reading of the log is that the original IMAP session
> > > (<2jBV+HRM1Pbc9w8u>) fell over and terminated, resulting in the client
> > > starting up a new session.
> > > If so, and with no false data/state transmitted to the client, it would
> > > be not ideal, but not a critical problem either. 
> > >
> > > Would be delighted if Aki or Timo could comment on this.
> > >
> > >
> > > If you need any further data, let me know.
> > >
> > > Christian
> > 
> > Hi!
> > 
> > You could try updating to 2.2.28, if possible. I believe this bug is
> > fixed in 2.2.28, with
> > https://github.com/dovecot/core/commit/1fd44e0634ac312d0960f39f9518b71e08248b65
> >   
> Ah yes, that looks like the culprit indeed.
> 
> I shall poke the powers that be over at Debian bugs to expedite the 2.2.28
> release and backport.
> 
> Christian


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: IMAP hibernate and scalability in general

2017-04-06 Thread Timo Sirainen
On 6 Apr 2017, at 9.56, Christian Balzer  wrote:
> 
>> For no particular reason besides wanting to start conservatively, we've got
>> client_limit set to 50 on the hibernate procs (with 1100 total hibernated
>> connections on the box I'm looking at). At only a little over a meg each,
>> I'm fine with those extra processes.
>> 
> Yeah, but 50 would be a tad too conservative for our purposes here.
> I'll keep an eye on it and see how it goes, first checkpoint would be at
> 1k hibernated sessions. ^_^

imap-hibernate processes are similar to imap-login processes in that they 
should be able to handle thousands or even tens of thousands of connections per 
process.
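Timo's point above maps to the service's client_limit setting; a hedged sketch (the value 10240 is an illustrative assumption, not a recommendation) of letting one imap-hibernate process carry thousands of connections:

```
# Sketch only: raise the per-process connection cap for imap-hibernate,
# so fewer hibernate processes are needed. Size the limit to your fleet.
service imap-hibernate {
  client_limit = 10240
}
```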


abrt reported "imap killed by SIGBUS"

2017-04-06 Thread Hongying Liu
Hi sbr-services,

Could you give me some ideas?

### abrt reported the error as below.
[root@cupop4 log]# abrt-cli list --since 1488267244 id
ad716dbfd3a68bbe0f055e32ebfe562f4f75df43
reason: imap killed by SIGBUS
time:   Sun Mar 19  2017 10:58:27 AM JST
cmdline:dovecot/imap
package:dovecot-2.2.10-7.el7
uid:80180 (acaa2325)
count:  2
Directory:  /var/spool/abrt/ccpp-2017-03-19-10:58:27-4904
Run 'abrt-cli report /var/spool/abrt/ccpp-2017-03-19-10:58:27-4904'


### /var/log/messages
Mar 19 10:58:27 cupop4 abrt-hook-ccpp: Process 4904 (imap) of user 80180
killed by SIGBUS - dumping core

Also, there are lots of LDAP errors in /var/log/messages. Dovecot uses
LDAP as its userdb and passdb.
Are the LDAP errors related to the imap crash?

Mar 19 10:58:21 cupop4 nslcd[1534]: [a378de] 
ldap_result() failed: Invalid DN syntax: Invalid DN

### /var/log/dovecot

Mar 19 10:58:27 cupop4 dovecot: imap(acaa2325): Fatal: master:
service(imap): child 4904 killed with signal 7 (core dumped)


### gdb

Core was generated by `dovecot/imap'.
Program terminated with signal 7, Bus error.
#0  0x7fc4a69f293e in mail_cache_lookup_offset (cache=0x7fc4a811a320,
view=, seq=, offset_r=offset_r@entry
=0x7ffe23a98900)
at mail-cache-lookup.c:95
95 while (cache->hdr->file_seq != reset_id) {
(gdb) bt
#0  0x7fc4a69f293e in mail_cache_lookup_offset (cache=0x7fc4a811a320,
view=, seq=, offset_r=offset_r@entry
=0x7ffe23a98900)
at mail-cache-lookup.c:95
#1  0x7fc4a69f2afa in mail_cache_lookup_iter_init
(view=view@entry=0x7fc4a81332b0,
seq=, ctx_r=ctx_r@entry=0x7ffe23a988e0)
at mail-cache-lookup.c:154
#2  0x7fc4a69f2f8f in mail_cache_seq (seq=,
view=0x7fc4a81332b0) at mail-cache-lookup.c:322
#3  mail_cache_field_exists (view=view@entry=0x7fc4a81332b0, seq=seq@entry=760,
field=field@entry=4) at mail-cache-lookup.c:352
#4  0x7fc4a69f3151 in mail_cache_lookup_field (view=0x7fc4a81332b0,
dest_buf=dest_buf@entry=0x7ffe23a98a00, seq=760, field_idx=field_idx@entry
=4)
at mail-cache-lookup.c:413
#5  0x7fc4a69d9439 in index_mail_cache_lookup_field
(mail=mail@entry=0x7fc4a8128cd0,
buf=buf@entry=0x7ffe23a98a00, field_idx=field_idx@entry=4)
at index-mail.c:68
#6  0x7fc4a69d94a7 in index_mail_get_fixed_field (mail=0x7fc4a8128cd0,
field=, data=, data_size=8) at
index-mail.c:130
#7  0x7fc4a69d9fed in index_mail_get_cached_virtual_size
(mail=mail@entry=0x7fc4a8128cd0, size_r=size_r@entry=0x7ffe23a98b20) at
index-mail.c:403
#8  0x7fc4a6987c1b in maildir_mail_get_virtual_size
(_mail=0x7fc4a8128cd0, size_r=0x7ffe23a98b20) at maildir-mail.c:388
#9  0x7fc4a69d9719 in index_mail_update_access_parts
(mail=mail@entry=0x7fc4a8128cd0)
at index-mail.c:1490
#10 0x7fc4a69dbb93 in index_mail_set_seq (_mail=0x7fc4a8128cd0,
seq=760, saving=) at index-mail.c:1521
#11 0x7fc4a69e28cb in search_more_with_mail (mail=0x7fc4a8128cd0,
ctx=0x7fc4a8132730) at index-search.c:1507
#12 search_more_with_prefetching (mail_r=, ctx=) at index-search.c:1579
#13 search_more (ctx=ctx@entry=0x7fc4a8132730,
mail_r=mail_r@entry=0x7ffe23a98c20)
at index-search.c:1650
#14 0x7fc4a69e2ff4 in index_storage_search_next_nonblock
(_ctx=0x7fc4a8132730, mail_r=0x7fc4a81120d8, tryagain_r=0x7ffe23a98c87) at
index-search.c:1674
#15 0x7fc4a69bc4ef in mailbox_search_next_nonblock
(ctx=ctx@entry=0x7fc4a8132730,
mail_r=mail_r@entry=0x7fc4a81120d8,
tryagain_r=tryagain_r@entry=0x7ffe23a98c87) at mail-storage.c:1787
#16 0x7fc4a69bc55d in mailbox_search_next (ctx=0x7fc4a8132730,
mail_r=mail_r@entry=0x7fc4a81120d8) at mail-storage.c:1773
#17 0x7fc4a6e97874 in imap_fetch_more_int (ctx=ctx@entry=0x7fc4a8112078,
cancel=false) at imap-fetch.c:479
#18 0x7fc4a6e98852 in imap_fetch_more (ctx=0x7fc4a8112078,
cmd=cmd@entry=0x7fc4a8111ed0)
at imap-fetch.c:556
#19 0x7fc4a6e8c17d in cmd_fetch (cmd=0x7fc4a8111ed0) at cmd-fetch.c:284
#20 0x7fc4a6e9601c in command_exec (cmd=cmd@entry=0x7fc4a8111ed0) at
imap-commands.c:158
#21 0x7fc4a6e94f1f in client_command_input (cmd=cmd@entry=0x7fc4a8111ed0)
at imap-client.c:780
#22 0x7fc4a6e95005 in client_command_input (cmd=0x7fc4a8111ed0) at
imap-client.c:841
#23 0x7fc4a6e952fd in client_handle_next_command
(remove_io_r=, client=0x7fc4a8111530) at
imap-client.c:879
#24 client_handle_input (client=client@entry=0x7fc4a8111530) at
imap-client.c:891
#25 0x7fc4a6e956c5 in client_input (client=0x7fc4a8111530) at
imap-client.c:933
#26 0x7fc4a66e1a87 in io_loop_call_io (io=0x7fc4a811edd0) at
ioloop.c:388
#27 0x7fc4a66e290f in io_loop_handler_run
(ioloop=ioloop@entry=0x7fc4a80fc750)
at ioloop-epoll.c:220
#28 0x7fc4a66e15d8 in io_loop_run (ioloop=0x7fc4a80fc750) at
ioloop.c:412
#29 0x7fc4a668e9e3 in master_service_run (service=0x7fc4a80fc5e0,
callback=callback@entry=0x7fc4a6e9ed70 ) at
master-service.c:571
#30 0x7fc4a6e89324 in main (argc=1, argv=0x7fc4a80fc390) at main.c:400

Thank you.

Best regards,
Hongying Liu


Re: Problem with Dovecot and BlackBerry

2017-04-06 Thread Michael Slusarz
> On April 4, 2017 at 5:07 AM Luca Bertoncello  wrote:
> 
> Hi all,
> 
> I've got some strange behaviour with a BlackBerry Classic phone (BBOS
> 10.3.2.2876) in combination with Dovecot 2.2.13 while trying to fetch
> mails.
> 
> Before burying myself in debugging sessions, I'd like to understand
> whether the following behaviour is a client- or a server-specific
> error:
> 
> CIGA8 UID FETCH 10009:10035 (UID FLAGS) (CHANGEDSINCE NOMODSEQ)
> CIGA8 BAD Error in IMAP command UID FETCH: Invalid CHANGEDSINCE modseq.

That's a broken client.

https://tools.ietf.org/html/rfc7162#section-3.1.4.1

"CHANGEDSINCE <mod-sequence>:  The CHANGEDSINCE FETCH modifier allows
   the client to further subset the list of messages described by the
   sequence set.  The information described by message data items is
   only returned for messages that have a mod-sequence bigger than
   <mod-sequence>."

michael


Re: Using filter in an imapsieve script?

2017-04-06 Thread Tobi
To debug further I wrote a little shell wrapper for my gpgit script;
that wrapper is now called from the imapsieve script.
The wrapper writes the exit code of gpgit and the mail content returned
by gpgit into a logfile. I can see that gpgit returns 0 and that the
returned mail content is encrypted.
But the mail that appears in the Sent mailbox is still NOT encrypted.
How can that be?
It seems that, for whatever reason, the mail is stored as it was passed
to the filter, not as it was returned by the filter.
Any idea what's going on?

Thanks for any ideas on how to solve this issue

tobi

Am 06.04.2017 um 08:31 schrieb Tobi:
> Hi Stephan
>
> yes the imap_sieve plugin is added to the mail_plugins for imap.
> Thanks for the hint with mail_debug. After enabling it I can see that
> the program seems to be called, so filter should not be the problem.
> But the result is that the message appears unencrypted in my sent folder
>
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> imapsieve: mailbox Sent: APPEND event
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Pigeonhole version 0.4.16 (fed8554) initializing
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: include: sieve_global is not set; it is currently not possible to
> include `:global' scripts.
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Sieve imapsieve plugin for Pigeonhole version 0.4.16 (fed8554) loaded
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Sieve Extprograms plugin for Pigeonhole version 0.4.16 (fed8554)
> loaded
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> imapsieve: Static mailbox rule [1]: mailbox=`Spam' from=`*'
> causes=(COPY) =>
> before=`file:/home/vmail/brain-force.ch/tobster/dovecot-mail-filter.sieve'
> after=(none)
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> imapsieve: Static mailbox rule [2]: mailbox=`Sent' from=`*' causes=(COPY
> APPEND) =>
> before=`file:/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve'
> after=(none)
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> imapsieve: Matched static mailbox rule [2]
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: file storage: Using active Sieve script path:
> /home/vmail/brain-force.ch/tobster/.dovecot.sieve
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: file storage: Using script storage path:
> /home/vmail/brain-force.ch/tobster/sieve
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: file storage: Relative path to sieve storage in active link: sieve/
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: file storage: Using Sieve script path:
> /home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: file script: Opened script `dovecot-crypt-sent' from
> `/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve'
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Opening script 1 of 1 from
> `/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve'
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Loading script
> /home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Script binary
> /home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.svbin successfully
> loaded
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: binary save: not saving binary
> /home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.svbin, because it
> is already stored
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: Executing script from
> `/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.svbin'
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> sieve: action filter: running program: gpgit
>> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
> Mailbox Sent: Opened mail UID=3800 because: mail stream
>
> From my understanding the logs looks fine. Just here
>
>> Debug: sieve: action filter: running program: gpgit
> I wonder if the parameter given to gpgit should be logged as well?
> Calling gpgit without the userparameter would explain why the message
> appears unencrypted in sent mailbox.
>
>
> Best regards
>
>
> tobi
>
> Am 06.04.2017 um 00:58 schrieb Stephan Bosch:
>> Op 4/5/2017 om 11:48 AM schreef Tobi:
>>> Hello list
>>>
>>> I currently have an issue with an imapsieve script on my dovecot server
>>>
>>> CentOS Linux release 7.3.1611 (Core)
>>> Dovecot 2.2.26.0 (23d1de6)
>>> Pigeonhole 2.2.26.0
>>>
>>> The goal is to "fire" an imapsieve script upon mailclient saves message
>>> to sent folder
>>> I setup the following in 90-plugin.conf:
>>>
>>> plugin {

Re: IMAP hibernate and scalability in general

2017-04-06 Thread Christian Balzer

Hello,

On Wed, 5 Apr 2017 23:45:33 -0700 Mark Moseley wrote:

> We've been using hibernate for about half a year with no ill effects. There
> were various logged errors in earlier versions of dovecot, but even with
> those, we never heard a reported customer-side error (almost always when
> transitioning from hibernate back to regular imap; in the case of those
> errors, presumably the mail client just reconnected silently).
> 
This is my impression as well (silent reconnect w/o anything really bad
happening) with regard to the bug I just reported/saw. 

> For no particular reason besides wanting to start conservatively, we've got
> client_limit set to 50 on the hibernate procs (with 1100 total hibernated
> connections on the box I'm looking at). At only a little over a meg each,
> I'm fine with those extra processes.
> 
Yeah, but 50 would be a tad too conservative for our purposes here.
I'll keep an eye on it and see how it goes, first checkpoint would be at
1k hibernated sessions. ^_^

Christian

> On Wed, Apr 5, 2017 at 11:17 PM, Aki Tuomi  wrote:
> 
> >
> >
> > On 06.04.2017 06:15, Christian Balzer wrote:  
> > > Hello,
> > >
> > > as some may remember, we're running a very dense IMAP cluster here, in
> > > excess of 50k IMAP sessions per node (current record holder is 68k,
> > > design is for 200k+).
> > >
> > > The first issue we ran into was that the dovecot master process (which is
> > > single thread and thus a bottleneck) was approaching 100% CPU usage (aka
> > > using a full core) when trying to spawn off new IMAP processes.
> > >
> > > This was rectified by giving IMAP a service count of 200 to create a pool
> > > of "idling" processes eventually, reducing the strain for the master
> > > process dramatically. That of course required generous cranking up
> > > ulimits, FDs in particular.
> > >
> > > The next issue of course is (as mentioned before) the memory usage of
> > > all those IMAP processes, and the fact that quite a few things outside
> > > of dovecot (ps, etc.) tend to get quite sedate when dealing with tens
> > > of thousands of processes.
> > >
> > > We just started to deploy a new mailbox cluster pair with 2.2.27 and
> > > having IMAP hibernate configured.
> > > Getting this work is a PITA though with regards to ownership and access
> > > rights to the various sockets, this part could definitely do with some
> > > better (I know, difficult) defaults or at least more documentation (none
> > > besides the source and this ML).
> > >
> > > Initial results are very promising, depending on what your clients are
> > > doing (are they well behaved, are your users constantly looking at other
> > > folders, etc) the vast majority of IDLE processes will be in hibernated
> > > at any given time and thus not only using a fraction of the RAM otherwise
> > > needed but also freeing up process space.
> > >
> > > Real life example:
> > > 240 users, 86 imap processes (80% of those not IDLE) and:
> > > dovecot   119157  0.0  0.0  10452  3236 ?        S    Apr01   0:21
> > > dovecot/imap-hibernate [237 connections]
> > > That's 237 hibernated connections and thus less processes than otherwise.
> > >
> > >
> > > I assume that given the silence on the ML what we are going to be the
> > > first hibernate users where the term "large scale" does apply.
> > > Despite that I have some questions, clarifications/confirmations:
> > >
> > > Our current default_client_limit is 16k, which can be seen by having 5
> > > config processes on our 65k+ session node. ^_-
> > >
> > > This would also apply to imap-hibernate, one wonders if that's fine
> > > (config certainly has no issues) or if something smaller would be
> > > appropriate here?
> > >
> > > Since we have idling IMAP processes around most of the time, the strain
> > > of re-spawning proper processes from imap-hibernate should be just as
> > > reduced as for the dovecot master, correct?
> > >
> > >
> > > I'll keep reporting our experiences here, that is if something blows up
> > > spectacularly. ^o^
> > >
> > > Christian  
> >
> > Hi!
> >
> > We have customers using it in larger deployments. A good idea is to have
> > as many of your clients hibernating as possible, as the hibernation
> > process is much smaller than the actual IMAP process.
> >
> > You should probably also look at reusing the processes, as this will
> > probably help your performance,
> > https://wiki.dovecot.org/PerformanceTuning and
> > https://wiki.dovecot.org/LoginProcess are probably a good starting
> > point, although I suspect you've read these already.
> >
> > If you are running a dense server, cranking up various limits is rather
> > expected.
> >
> > Aki
> >  
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: IMAP hibernate and scalability in general

2017-04-06 Thread Mark Moseley
We've been using hibernate for about half a year with no ill effects. There
were various logged errors in earlier versions of dovecot, but even with
those, we never heard a reported customer-side error (almost always when
transitioning from hibernate back to regular imap; in the case of those
errors, presumably the mail client just reconnected silently).

For no particular reason besides wanting to start conservatively, we've got
client_limit set to 50 on the hibernate procs (with 1100 total hibernated
connections on the box I'm looking at). At only a little over a meg each,
I'm fine with those extra processes.

On Wed, Apr 5, 2017 at 11:17 PM, Aki Tuomi  wrote:

>
>
> On 06.04.2017 06:15, Christian Balzer wrote:
> > Hello,
> >
> > as some may remember, we're running a very dense IMAP cluster here, in
> > excess of 50k IMAP sessions per node (current record holder is 68k,
> > design is for 200k+).
> >
> > The first issue we ran into was that the dovecot master process (which is
> > single thread and thus a bottleneck) was approaching 100% CPU usage (aka
> > using a full core) when trying to spawn off new IMAP processes.
> >
> > This was rectified by giving IMAP a service count of 200 to create a pool
> > of "idling" processes eventually, reducing the strain for the master
> > process dramatically. That of course required generous cranking up
> > ulimits, FDs in particular.
> >
> > The next issue of course is (as mentioned before) the memory usage of
> > all those IMAP processes, and the fact that quite a few things outside
> > of dovecot (ps, etc.) tend to get quite sedate when dealing with tens
> > of thousands of processes.
> >
> > We just started to deploy a new mailbox cluster pair with 2.2.27 and
> > having IMAP hibernate configured.
> > Getting this work is a PITA though with regards to ownership and access
> > rights to the various sockets, this part could definitely do with some
> > better (I know, difficult) defaults or at least more documentation (none
> > besides the source and this ML).
> >
> > Initial results are very promising, depending on what your clients are
> > doing (are they well behaved, are your users constantly looking at other
> > folders, etc) the vast majority of IDLE processes will be in hibernated
> > at any given time and thus not only using a fraction of the RAM otherwise
> > needed but also freeing up process space.
> >
> > Real life example:
> > 240 users, 86 imap processes (80% of those not IDLE) and:
> > dovecot   119157  0.0  0.0  10452  3236 ?        S    Apr01   0:21
> > dovecot/imap-hibernate [237 connections]
> > That's 237 hibernated connections and thus less processes than otherwise.
> >
> >
> > I assume that given the silence on the ML what we are going to be the
> > first hibernate users where the term "large scale" does apply.
> > Despite that I have some questions, clarifications/confirmations:
> >
> > Our current default_client_limit is 16k, which can be seen by having 5
> > config processes on our 65k+ session node. ^_-
> >
> > This would also apply to imap-hibernate, one wonders if that's fine
> > (config certainly has no issues) or if something smaller would be
> > appropriate here?
> >
> > Since we have idling IMAP processes around most of the time, the strain
> > of re-spawning proper processes from imap-hibernate should be just as
> > reduced as for the dovecot master, correct?
> >
> >
> > I'll keep reporting our experiences here, that is if something blows up
> > spectacularly. ^o^
> >
> > Christian
>
> Hi!
>
> We have customers using it in larger deployments. A good idea is to have
> as many of your clients hibernating as possible, as the hibernation
> process is much smaller than the actual IMAP process.
>
> You should probably also look at reusing the processes, as this will
> probably help your performance,
> https://wiki.dovecot.org/PerformanceTuning and
> https://wiki.dovecot.org/LoginProcess are probably a good starting
> point, although I suspect you've read these already.
>
> If you are running a dense server, cranking up various limits is rather
> expected.
>
> Aki
>
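Since the socket ownership and permissions mentioned in this thread are thinly documented, here is a hedged sketch of the pieces that usually need aligning. The setting and socket names come from Dovecot's 2.2.x example configs; the mode/user/group values are assumptions for a typical virtual-mail setup, not canonical defaults:

```
# Sketch only. imap hands idle sessions to imap-hibernate over the
# imap-hibernate socket, and imap-hibernate hands them back via imap's
# imap-master socket; both sockets must be accessible to the uid the
# mail processes run as.
protocol imap {
  imap_hibernate_timeout = 30s   # 0 (the default) disables hibernation
}
service imap-hibernate {
  unix_listener imap-hibernate {
    mode = 0660
    group = $default_internal_group
  }
}
service imap {
  unix_listener imap-master {
    user = $default_internal_user
  }
}
```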


Re: [BUG] client state / Message count mismatch with imap-hibernate and mixed POP3/IMAP access

2017-04-06 Thread Christian Balzer

Hello,

On Thu, 6 Apr 2017 09:24:23 +0300 Aki Tuomi wrote:

> On 06.04.2017 07:02, Christian Balzer wrote:
> >
> > Hello,
> >
> > this is on Debian Jessie box, dovecot 2.2.27 from backports,
> > imap-hibernate obviously enabled. 
> >
> > I've been seeing a few of these since starting this cluster (see previous
> > mail), they all follow the same pattern, a user who accesses their mailbox
> > with both POP3 and IMAP deletes mails with POP3 and the IMAP
> > (imap-hibernate really) is getting confused and upset about this:
> >
> > ---
> >
> > Apr  6 09:55:49 mbx11 dovecot: imap-login: Login: user=, 
> > method=PLAIN, rip=xxx.xxx.x.46, lip=xxx.xxx.x.113, mpid=951561, secured, 
> > session=<2jBV+HRM1Pbc9w8u>
> > Apr  6 10:01:06 mbx11 dovecot: pop3-login: Login: user=, 
> > method=PLAIN, rip=xxx.xxx.x.46, lip=xxx.xxx.x.41, mpid=35447, secured, 
> > session=
> > Apr  6 10:01:07 mbx11 dovecot: pop3(redac...@gol.com): Disconnected: Logged 
> > out top=0/0, retr=0/0, del=1/1, size=20674 session=
> > Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Error: imap-master: 
> > Failed to import client state: Message count mismatch after handling 
> > expunges (0 != 1)
> > Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Client state 
> > initialization failed in=0 out=0 head=<0> del=<0> exp=<0> trash=<0> 
> > session=<2jBV+HRM1Pbc9w8u>
> > Apr  6 10:01:15 mbx11 dovecot: imap-login: Login: user=, 
> > method=PLAIN, rip=xxx.xxx.x.46, lip=xxx.xxx.x.113, mpid=993303, secured, 
> > session=<6QC6C3VMF/jc9w8u>
> > Apr  6 10:07:42 mbx11 dovecot: imap-hibernate(redac...@gol.com): Connection 
> > closed in=85 out=1066 head=<0> del=<0> exp=<0> trash=<0> 
> > session=<6QC6C3VMF/jc9w8u>
> >
> > ---
> >
> > Didn't see these errors before introducing imap-hibernate, but then again
> > this _could_ be something introduced between 2.2.13 (previous generation
> > of servers) and .27, but I highly doubt it.
> >
> > My reading of the log is that the original IMAP session
> > (<2jBV+HRM1Pbc9w8u>) fell over and terminated, resulting in the client
> > starting up a new session.
> > If so, and with no false data/state transmitted to the client, it would
> > be not ideal, but not a critical problem either. 
> >
> > Would be delighted if Aki or Timo could comment on this.
> >
> >
> > If you need any further data, let me know.
> >
> > Christian  
> 
> Hi!
> 
> You could try updating to 2.2.28, if possible. I believe this bug is
> fixed in 2.2.28, with
> https://github.com/dovecot/core/commit/1fd44e0634ac312d0960f39f9518b71e08248b65
> 
Ah yes, that looks like the culprit indeed.

I shall poke the powers that be over at Debian bugs to expedite the 2.2.28
release and backport.

Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: Using filter in an imapsieve script?

2017-04-06 Thread Tobi
Hi Stephan

Yes, the imap_sieve plugin is added to mail_plugins for imap.
Thanks for the hint about mail_debug. After enabling it I can see that
the program does seem to be called, so the filter should not be the
problem. But the result is that the message appears unencrypted in my
Sent folder:

> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
imapsieve: mailbox Sent: APPEND event
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Pigeonhole version 0.4.16 (fed8554) initializing
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: include: sieve_global is not set; it is currently not possible to
include `:global' scripts.
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Sieve imapsieve plugin for Pigeonhole version 0.4.16 (fed8554) loaded
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Sieve Extprograms plugin for Pigeonhole version 0.4.16 (fed8554)
loaded
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
imapsieve: Static mailbox rule [1]: mailbox=`Spam' from=`*'
causes=(COPY) =>
before=`file:/home/vmail/brain-force.ch/tobster/dovecot-mail-filter.sieve'
after=(none)
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
imapsieve: Static mailbox rule [2]: mailbox=`Sent' from=`*' causes=(COPY
APPEND) =>
before=`file:/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve'
after=(none)
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
imapsieve: Matched static mailbox rule [2]
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: file storage: Using active Sieve script path:
/home/vmail/brain-force.ch/tobster/.dovecot.sieve
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: file storage: Using script storage path:
/home/vmail/brain-force.ch/tobster/sieve
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: file storage: Relative path to sieve storage in active link: sieve/
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: file storage: Using Sieve script path:
/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: file script: Opened script `dovecot-crypt-sent' from
`/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve'
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Opening script 1 of 1 from
`/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve'
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Loading script
/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.sieve
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Script binary
/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.svbin successfully
loaded
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: binary save: not saving binary
/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.svbin, because it
is already stored
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: Executing script from
`/home/vmail/brain-force.ch/tobster/dovecot-crypt-sent.svbin'
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
sieve: action filter: running program: gpgit
> Apr  6 08:20:26 mbox2 dovecot: imap(tobs...@brain-force.ch): Debug:
Mailbox Sent: Opened mail UID=3800 because: mail stream

From my understanding, the logs look fine. Just here:

> Debug: sieve: action filter: running program: gpgit

I wonder if the parameters given to gpgit should be logged as well?
Calling gpgit without the user parameter would explain why the message
appears unencrypted in the Sent mailbox.
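For comparison, a minimal sketch of what such a script could look like (this is an assumption about the setup, not the actual `dovecot-crypt-sent.sieve`; the extprograms `filter` command passes any extra strings to the program as command-line arguments, and imapsieve provides the `imap.user` environment item):

```
require ["vnd.dovecot.filter", "environment", "variables", "imapsieve"];

# Hypothetical: capture the logged-in user and hand it to gpgit,
# so gpgit can pick the matching public key for encryption.
if environment :matches "imap.user" "*" {
  set "user" "${1}";
}
filter "gpgit" "${user}";
```

If the real script calls `filter "gpgit"` with no argument, that would match the symptom of the message staying unencrypted; a small wrapper script that logs its argv before exec'ing gpgit would confirm what is actually being passed.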


Best regards


tobi

Am 06.04.2017 um 00:58 schrieb Stephan Bosch:
> Op 4/5/2017 om 11:48 AM schreef Tobi:
>> Hello list
>>
>> I currently have an issue with an imapsieve script on my dovecot server
>>
>> CentOS Linux release 7.3.1611 (Core)
>> Dovecot 2.2.26.0 (23d1de6)
>> Pigeonhole 2.2.26.0
>>
>> The goal is to fire an imapsieve script when the mail client saves a
>> message to the Sent folder.
>> I set up the following in 90-plugin.conf:
>>
>> plugin {
>> sieve_plugins = sieve_imapsieve sieve_extprograms
>> sieve_extensions = +vnd.dovecot.filter +vnd.dovecot.pipe
>> +vnd.dovecot.execute
>> sieve_filter_bin_dir = /etc/dovecot/sieve-filters
>> sieve_pipe_bin_dir = /etc/dovecot/sieve-filters
>> sieve_execute_bin_dir = /etc/dovecot/sieve-filters
>> sieve_filter_exec_timeout = 1
>> sieve_pipe_exec_timeout = 1
>> sieve_execute_exec_timeout = 1
>> imapsieve_mailbox1_name = Sent
>> imapsieve_mailbox1_causes = COPY APPEND
>> imapsieve_mailbox1_before =
>> file:/home/vmail/domain/user/dovecot-crypt-sent.sieve
>> }
>>
>> and the content of the sieve script is:
>>
>> require ["environment", "vnd.dovecot.filter", "variables", "imapsieve",
>> 

Re: [BUG] client state / Message count mismatch with imap-hibernate and mixed POP3/IMAP access

2017-04-06 Thread Aki Tuomi


On 06.04.2017 07:02, Christian Balzer wrote:
>
> Hello,
>
> this is on a Debian Jessie box, dovecot 2.2.27 from backports,
> imap-hibernate obviously enabled. 
>
> I've been seeing a few of these since starting this cluster (see previous
> mail); they all follow the same pattern: a user who accesses their mailbox
> with both POP3 and IMAP deletes mails with POP3, and the IMAP side
> (imap-hibernate really) gets confused and upset about this:
>
> ---
>
> Apr  6 09:55:49 mbx11 dovecot: imap-login: Login: user=, 
> method=PLAIN, rip=xxx.xxx.x.46, lip=xxx.xxx.x.113, mpid=951561, secured, 
> session=<2jBV+HRM1Pbc9w8u>
> Apr  6 10:01:06 mbx11 dovecot: pop3-login: Login: user=, 
> method=PLAIN, rip=xxx.xxx.x.46, lip=xxx.xxx.x.41, mpid=35447, secured, 
> session=
> Apr  6 10:01:07 mbx11 dovecot: pop3(redac...@gol.com): Disconnected: Logged 
> out top=0/0, retr=0/0, del=1/1, size=20674 session=
> Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Error: imap-master: 
> Failed to import client state: Message count mismatch after handling expunges 
> (0 != 1)
> Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Client state 
> initialization failed in=0 out=0 head=<0> del=<0> exp=<0> trash=<0> 
> session=<2jBV+HRM1Pbc9w8u>
> Apr  6 10:01:15 mbx11 dovecot: imap-login: Login: user=, 
> method=PLAIN, rip=xxx.xxx.x.46, lip=xxx.xxx.x.113, mpid=993303, secured, 
> session=<6QC6C3VMF/jc9w8u>
> Apr  6 10:07:42 mbx11 dovecot: imap-hibernate(redac...@gol.com): Connection 
> closed in=85 out=1066 head=<0> del=<0> exp=<0> trash=<0> 
> session=<6QC6C3VMF/jc9w8u>
>
> ---
>
> I didn't see these errors before introducing imap-hibernate, but then again
> this _could_ be something introduced between 2.2.13 (previous generation
> of servers) and .27, though I highly doubt it.
>
> My reading of the log is that the original IMAP session
> (<2jBV+HRM1Pbc9w8u>) fell over and terminated, resulting in the client
> starting up a new session.
> If so, and with no false data/state transmitted to the client, it would
> not be ideal, but not a critical problem either. 
>
> Would be delighted if Aki or Timo could comment on this.
>
>
> If you need any further data, let me know.
>
> Christian

Hi!

You could try updating to 2.2.28, if possible. I believe this bug was
fixed there, with
https://github.com/dovecot/core/commit/1fd44e0634ac312d0960f39f9518b71e08248b65

Aki


Re: IMAP hibernate and scalability in general

2017-04-06 Thread Aki Tuomi


On 06.04.2017 06:15, Christian Balzer wrote:
> Hello,
>
> as some may remember, we're running a very dense IMAP cluster here, in
> excess of 50k IMAP sessions per node (current record holder is 68k, design
> is for 200k+).
>
> The first issue we ran into was that the dovecot master process (which is
> single-threaded and thus a bottleneck) was approaching 100% CPU usage (aka
> using a full core) when trying to spawn off new IMAP processes.
>
> This was rectified by giving IMAP a service count of 200 to eventually
> create a pool of "idling" processes, reducing the strain on the master
> process dramatically. That of course required generously cranking up
> ulimits, FDs in particular. 
>
> The next issue of course is (as mentioned before) the memory usage of all
> those IMAP processes, and the fact that quite a few things outside of
> dovecot (ps, etc.) tend to get quite sedate when dealing with tens of
> thousands of processes. 
>
> We just started to deploy a new mailbox cluster pair with 2.2.27 and
> having IMAP hibernate configured.
> Getting this to work is a PITA though, with regards to ownership and access
> rights to the various sockets; this part could definitely do with some
> better (I know, difficult) defaults, or at least more documentation (none
> besides the source and this ML).
>
> Initial results are very promising: depending on what your clients are
> doing (are they well behaved, are your users constantly looking at other
> folders, etc.), the vast majority of IDLE processes will be hibernated
> at any given time, thus not only using a fraction of the RAM otherwise
> needed but also freeing up process space.
>
> Real life example:
> 240 users, 86 imap processes (80% of those not IDLE) and:
> dovecot   119157  0.0  0.0  10452  3236 ?SApr01   0:21 
> dovecot/imap-hibernate [237 connections]
> That's 237 hibernated connections, and thus fewer processes than otherwise.
>
>
> I assume, given the silence on the ML, that we are going to be the
> first hibernate users to whom the term "large scale" applies. 
> Despite that I have some questions and points needing clarification/confirmation:
>
> Our current default_client_limit is 16k, which can be seen by having 5
> config processes on our 65k+ session node. ^_-
>
> This would also apply to imap-hibernate; one wonders if that's fine
> (config certainly has no issues) or if something smaller would be
> appropriate here?
>
> Since we have idling IMAP processes around most of the time, the strain of
> re-spawning proper processes from imap-hibernate should be just as reduced
> as for the dovecot master, correct?
>
>
> I'll keep reporting our experiences here, that is if something blows up
> spectacularly. ^o^
>
> Christian

Hi!

We have customers using it in larger deployments. A good idea is to have
as many of your clients hibernating as possible, as the hibernation
process is much smaller than an actual IMAP process.

You should probably also look at reusing the processes, as this will
likely help your performance.
https://wiki.dovecot.org/PerformanceTuning and
https://wiki.dovecot.org/LoginProcess are probably a good starting
point, although I suspect you've read these already.

If you are running a dense server, cranking up various limits is rather
expected.

Aki


Re: Problem with Dovecot and BlackBerry

2017-04-06 Thread Aki Tuomi


On 06.04.2017 07:50, li...@lazygranch.com wrote:
> On Tue, 04 Apr 2017 11:07:26 +
> Luca Bertoncello  wrote:
>
>> Hi all,
>>
>> I've got some strange behaviour with a BlackBerry Classic phone (BBOS
>> 10.3.2.2876) in combination with Dovecot 2.2.13 while trying to
>> fetch mails.
>>
>> Before burying myself in debugging sessions, I am trying to get an
>> understanding of whether the following is a client- or a server-side
>> error:
>>
>> CIGA8 UID FETCH 10009:10035 (UID FLAGS) (CHANGEDSINCE NOMODSEQ)
>> CIGA8 BAD Error in IMAP command UID FETCH: Invalid CHANGEDSINCE
>> modseq.
>>
>> Following the full conversation.
>> 
>> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID
>> ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5
>> AUTH=CRAM-MD5] Dovecot on xxx ready.
>> CIGA1 CAPABILITY
>> * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
>> IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5
>> CIGA1 OK Pre-login capabilities listed, post-login capabilities have
>> more. CIGA2 ID ("os" "BlackBerry 10" "os-version" "10.3.2.2876"
>> "vendor" "rim" "device" "Classic" "name" "bbimap")
>> * ID ("name" "Dovecot")
>> CIGA2 OK ID completed.
>> CIGA3 LOGIN xxx xxx
>> * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
>> IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS
>> THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT
>> CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE
>> QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS
>> SPECIAL-USE BINARY MOVE CIGA3 OK Logged in
>> CIGA4 CAPABILITY
>> * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
>> IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS
>> THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT
>> CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE
>> QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS
>> SPECIAL-USE BINARY MOVE CIGA4 OK Capability completed.
>> CIGA5 LIST "" ""
>> * LIST (\Noselect) "." ""
>> CIGA5 OK List completed.
>> CIGA6 LIST "" "*"
>> * LIST (\HasNoChildren) "." folder_a
>> * LIST (\HasNoChildren) "." folder_b
>> * LIST (\HasNoChildren) "." folder_c
>> * LIST (\HasNoChildren) "." sent-mail
>> * LIST (\HasNoChildren) "." folder_d
>> * LIST (\HasNoChildren) "." folder_e
>> * LIST (\HasNoChildren \Trash) "." Trash
>> * LIST (\HasNoChildren \Drafts) "." Drafts
>> * LIST (\HasNoChildren) "." folder_f
>> * LIST (\HasNoChildren) "." folder_g
>> * LIST (\HasNoChildren) "." INBOX
>> CIGA6 OK List completed.
>> CIGA7 SELECT INBOX (CONDSTORE)
>> * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
>> * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)]  
>> Flags permitted.
>> * 7 EXISTS
>> * 0 RECENT
>> * OK [UIDVALIDITY 1391686038] UIDs valid
>> * OK [UIDNEXT 10036] Predicted next UID
>> * OK [HIGHESTMODSEQ 1608] Highest
>> CIGA7 OK [READ-WRITE] Select completed (0.000 secs).
>> CIGA8 UID FETCH 10009:10035 (UID FLAGS) (CHANGEDSINCE NOMODSEQ)
>> CIGA8 BAD Error in IMAP command UID FETCH: Invalid CHANGEDSINCE
>> modseq. CIGA9 LOGOUT
>> * BYE Logging out
>> CIGA9 OK Logout completed.
>> 
>>
>> Thanks in advance!
>>
>> Luca Bertoncello
>> (lucab...@lucabert.de)
> Is that the dovecot.log file?
>
> Here is what  I get:
>
> # dovecot --version
> 2.2.28 (bed8434)
>
> bbos 10.3.3.2205
>
>
> Sanitized log file below. I'd appreciate the moderator removing my post
> if I let something slip.
>
> Apr 06 04:01:02 imap-login: Info: Login: user=, 
> method=PLAIN, rip=myip, lip=myserver, mpid=77887, TLS, session=
> Apr 06 04:01:02 imap(m...@mydomain.com): Debug: Added userdb setting: 
> plugin/=yes
> Apr 06 04:01:02 imap(m...@mydomain.com): Debug: Effective uid=1003, gid=1003, 
> home=/var/mail/vhosts/mydomain.com/me
> Apr 06 04:01:02 imap(m...@mydomain.com): Debug: Namespace inbox: 
> type=private, prefix=, sep=, inbox=yes, hidden=no, list=yes, 
> subscriptions=yes location=maildir:~
> Apr 06 04:01:02 imap(m...@mydomain.com): Debug: maildir++: 
> root=/var/mail/vhosts/mydomain.com/me, index=, indexpvt=, control=, 
> inbox=/var/mail/vhosts/mydomain.com/me, alt=
> Apr 06 04:01:04 auth: Debug: client in: AUTH1   PLAIN   service=imap  
>   secured session=differentcharslip=myserver rip=myip   
> lport=143
> rport=47037 local_name=www.mydomain.com   resp=lotsofchars= (previous 
> base64 data may contain sensitive data)

It would seem to be a bug in your BlackBerry email client; it should not
do this:

CIGA8 UID FETCH 10009:10035 (UID FLAGS) (CHANGEDSINCE NOMODSEQ)

The CHANGEDSINCE modifier expects a numeric mod-sequence value, not the
token NOMODSEQ.
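Per RFC 7162 (CONDSTORE), the client is supposed to supply the last mod-sequence it saw, typically the HIGHESTMODSEQ from an earlier session (1608 in the SELECT response of the transcript). A valid exchange would look roughly like this (the server's untagged responses are illustrative):

```
C: CIGA8 UID FETCH 10009:10035 (UID FLAGS) (CHANGEDSINCE 1608)
S: * 7 FETCH (UID 10035 FLAGS (\Seen) MODSEQ (1610))
S: CIGA8 OK Fetch completed.
```

NOMODSEQ, by contrast, is a response code the *server* may send at SELECT time (`* OK [NOMODSEQ]`) when a mailbox does not support mod-sequences; it is never valid as a client-supplied CHANGEDSINCE argument, so the client appears to be echoing a server token back instead of tracking a real mod-sequence.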

Aki