Re: Host ... is being updated before previous update had finished

2017-04-24 Thread Timo Sirainen
It's usually very troublesome to try to reproduce and debug these. It looks 
like the directors got into some kind of broken state that wouldn't fix 
itself: some host update kept going around and around in the director ring 
without being detected as a duplicate/obsolete update. For now I think the 
only workaround is to do what you did: stop all directors (or maybe a single 
one could have been left running) and start them up again.
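
In practice the recovery described above boils down to something like the
following (a sketch only; the doveadm commands are the standard 2.2-era
director tools, the systemd unit name is an assumption for your platform):

---
# Check each director's view of the ring first.
doveadm director ring status
doveadm director status

# If the ring is stuck (everything "handshaking", host updates looping),
# stop dovecot on all director nodes, then bring them back up.
systemctl stop dovecot     # run on every director node
systemctl start dovecot    # then start them again, one node at a time

# Verify the ring has settled.
doveadm director ring status
---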

> On 21 Apr 2017, at 18.50, Mark Moseley  wrote:
> 
> Timo/Aki/Dovecot guys, any hints here? Is this a bug? Design issue?
> 
> On Fri, Apr 7, 2017 at 10:10 AM Mark Moseley  wrote:
> 
>> On Mon, Apr 3, 2017 at 6:04 PM, Mark Moseley 
>> wrote:
>> 
>>> We just had a bunch of backend boxes go down due to a DDoS in our
>>> director cluster. When the DDoS died down, our director ring was a mess.
>>> 
>>> Each box had thousands (and hundreds per second, which is a bit much) of
>>> log lines like the following:
>>> 
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>>> 10.1.17.15 is being updated before previous update had finished (up ->
>>> down) - setting to state=down vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>>> 10.1.17.15 is being updated before previous update had finished (down ->
>>> up) - setting to state=up vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>>> 10.1.17.15 is being updated before previous update had finished (up ->
>>> down) - setting to state=down vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>>> 10.1.17.15 is being updated before previous update had finished (down ->
>>> up) - setting to state=up vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>>> 10.1.17.15 is being updated before previous update had finished (up ->
>>> down) - setting to state=down vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>>> 10.1.17.15 is being updated before previous update had finished (down ->
>>> up) - setting to state=up vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>>> 10.1.17.15 is being updated before previous update had finished (up ->
>>> down) - setting to state=down vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>>> 10.1.17.15 is being updated before previous update had finished (down ->
>>> up) - setting to state=up vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>>> 10.1.17.15 is being updated before previous update had finished (up ->
>>> down) - setting to state=down vhosts=100
>>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>>> 10.1.17.15 is being updated before previous update had finished (down ->
>>> up) - setting to state=up vhosts=100
>>> 
>>> This was on every director box and the status of all of the directors in
>>> 'doveadm director ring status' was 'handshaking'.
>>> 
>>> Here's a sample packet between directors:
>>> 
>>> 19:51:23.552280 IP 10.1.20.10.56670 > 10.1.20.1.9090: Flags [P.], seq
>>> 4147:5128, ack 0, win 0, options [nop,nop,TS val 1373505883 ecr
>>> 1721203906], length 981
>>> 
>>> Q.  [f.|.HOST   10.1.20.10  90901006732 10.1.17.15
>>> 100 D1491260800
>>> HOST10.1.20.10  90901006733 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006734 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006735 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006736 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006737 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006738 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006739 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006740 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006741 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006742 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006743 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006744 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006745 10.1.17.15  100
>>> U1491260800
>>> HOST10.1.20.10  90901006746 10.1.17.15  100
>>> D1491260800
>>> HOST10.1.20.10  90901006747 10.1.17.15  100
>>> U1491260800
>>> SYNC10.1.20.10  90901011840 7   1491263483  3377546382
>>> 
>>> I'm guessing that D1491260800 is the user hash (with D for down), and the
>>> U version is for 'up'.
>>> 
>>> I'm happy to provide the full tcpdump (and/or doveconf -a), though the
>>> tcpdump is basically all identical to the one I pasted (same hash, same host).
>>> 
>>> 
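
For anyone wanting to take a similar capture of the director ring traffic, a
command along these lines produces the kind of ASCII dump pasted above (the
interface name is an assumption; 9090 is the ring port used in this setup):

---
# -A prints the payload as ASCII, -nn skips name/port resolution.
tcpdump -i eth0 -nn -A 'tcp port 9090'
---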

Re: System load spike on dovecot reload

2017-04-24 Thread dove...@mtfbwy.cz

Hi,

Enabling 'login_proxy_max_disconnect_delay' on the IMAP proxy did the trick. 
I should have mentioned that I use proxy servers, sorry.
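
For reference, a minimal sketch of that setting on the proxy side (the
30-second value is just an example; pick whatever spreads the reconnects
acceptably for your load):

---
# dovecot.conf on the IMAP/POP3 proxy hosts
# Spread forced client disconnects over up to 30 seconds instead of
# dropping everyone at the same instant.
login_proxy_max_disconnect_delay = 30 secs
---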


Thanks,

Dave


On 22.4.2017 at 06:25, Christian Balzer wrote:

Hello,

On Fri, 21 Apr 2017 10:43:47 +0200 d...@evilcigi.eu wrote:


Hi everyone,

I'm running dovecot with quite a lot of users and lots of active imap
connections (around 20,000). I'm using different user IDs for users, so I
need to have imap {service_count=1} - i.e. I have a lot of imap
processes running.


We peaked out at 65k imap processes before upgrading to a version where
imap-hibernate more or less works, but we're using a common ID.
---
dovecot   119157  0.1  0.0  59364 52216 ?SApr01  48:25 
dovecot/imap-hibernate [15137 connections]
---

The service_count parameter in this context is not doing what you think it
does. I have it at 200 these days; that allows imap (or pop3) processes to be
recycled (they are labeled "idling" while waiting for a new client), rather
than having one imap process serve multiple clients.
---
mail  591307  0.0  0.0  29876  4712 ?SApr20   0:00 dovecot/imap 
[idling]
mail  735323  0.0  0.0  27396  4196 ?S13:20   0:00 dovecot/pop3 
[idling]
---

The advantage (for me at least) is that the dovecot master process doesn't
have to spin up a new mail process for every login.

Since this process is quite single-threaded, it becomes a bottleneck
eventually.
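
A minimal sketch of the recycling setup described above (200 is the value
mentioned here, not a tuned recommendation):

---
service imap {
  # One imap process serves up to 200 logins in sequence before exiting;
  # between clients it shows up in ps as "dovecot/imap [idling]".
  service_count = 200
}
service pop3 {
  service_count = 200
}
---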
   

Everything works fine until I reload the dovecot configuration. When that
happens, every client is forced to re-login at the same time, and that
causes a huge system load spike (a 5-minute load of 2000-3000).


Unless you're making a change that affects the dovecot master process,
restarting everything isn't needed and you should set
"shutdown_clients = no".
You could still kick users with "doveadm kick" at a leisurely pace, but
security problems with the mail processes are rare.
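
A sketch of what that can look like in practice (the parsing of the
"doveadm who" output and the one-second pause are assumptions; adjust to
taste):

---
# dovecot.conf: don't kill mail processes when the master is restarted.
shutdown_clients = no

# Later, kick logged-in users at a leisurely pace instead of all at once.
doveadm who | awk 'NR > 1 {print $1}' | sort -u | while read user; do
  doveadm kick "$user"
  sleep 1
done
---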


I was thinking it would be great if dovecot didn't kick all the
users at the same time during a reload, but somehow gradually, over a
specified interval. I'm aware of the shutdown_clients directive that
could help, but I don't like it -

I've very much gotten to like it, once things got huge and busy.


I do want the clients to get disconnected
on dovecot shutdown, and I also want them to re-login within a reasonably
short time after a reload.

Is something like that possible with dovecot, or would it make sense to
implement it in future versions?


Run a dovecot proxy (if you have a single box with all these users on it,
Mr. Murphy would like a word with you) and set
"login_proxy_max_disconnect_delay" to something that suits you.

Christian


Re: IMAP hibernate and scalability in general

2017-04-24 Thread Timo Sirainen
On 24 Apr 2017, at 4.04, Christian Balzer  wrote:
> 
> 
> Hello,
> 
> Just to follow up on this, we've hit over 16k (default client limit here)
> hibernated sessions:
> ---
> dovecot   119157  0.1  0.0  63404 56140 ?SApr01  62:05 
> dovecot/imap-hibernate [11291 connections]
> dovecot   877825  0.2  0.0  28512 21224 ?SApr23   1:34 
> dovecot/imap-hibernate [5420 connections]
> ---
> 
> No issues other than the minor bug I reported, CPU usage is slight (at
> most 2% of a CPU core), memory savings are immense, so I'm a happy camper.
> 
> Just out of curiosity, how does dovecot decide to split and spread out
> sessions between hibernate processes? 
> It's clearly something more involved than "fill up one and then fill up
> the next" or we would see 16k on the old one and a few on the new one.


New processes aren't created until client_limit is reached in all the existing 
processes. When there are multiple processes they're all listening for new 
connections, and whichever happens to be fastest gets it. Related to this, I'm 
thinking about implementing SO_REUSEPORT (https://lwn.net/Articles/542629/) 
soon, which would change the behavior a bit. Although its main purpose would 
be as a workaround to allow Dovecot restarts to work even though some of the 
old processes are still keeping the listener port open.


Re: Feature Request - Director Balance

2017-04-24 Thread Webert de Souza Lima
Shrinking director_user_expire might be a workaround but not so much a
solution, since the user can also end up mapped to the same server again.
Director flush is both manual and aggressive, so not a good solution either.

The possibility to move users between backends without killing existing
connections is a good solution, yes! It can be scripted. =]

What I suggested was more automated, but that can be left for a future
version. If there were a command that could be issued manually, like "doveadm
director rebalance", that would be great.

Thanks for your feedback.

Att,

Webert de Souza Lima
MAV Tecnologia.


On Fri, Apr 21, 2017 at 4:52 AM, Timo Sirainen  wrote:

> On 20 Apr 2017, at 17.35, Webert de Souza Lima 
> wrote:
> >
> > Hi,
> >
> > often I run into the situation where a dovecot server goes down for
> > maintenance, and all users get concentrated in the remaining dovecot
> server
> > (considering I have 2 dovecot servers only).
> >
> > When that dovecot server comes back online, director server will send new
> > users to it, but the dovecot server that was up all the time will still
> > have tons of clients mapped to it.
> >
> > I suggest the director servers to always try to balance load between
> > servers, in the way:
> >
> > - if a server has several more connections than other, mark it to
> > re-balance
> > - when a user connected to this loaded server disconnects, map it to
> > another server (that is per definition not the same server) immediately.
> >
> > that way it would gracefully re-balance, not killing existing
> connections,
> > just waiting for them to finish.
>
> You could effectively do this by shrinking the director_user_expire time.
> But if it's too low, it causes director to be a bit more inefficient when
> assigning users to backends. Also if backends are doing any background work
> (e.g. full text search indexing) director might move the user away too
> early. But setting it to e.g. 5 minutes would likely help a lot.
>
> There's of course also the doveadm director flush, which can be used to
> move users between backends, but that requires killing the connections for
> now. I've some future plans to make it possible to move connections between
> backends without disconnecting the IMAP client.
>
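
For reference, the two knobs mentioned above, roughly as they would be used
(5 minutes is the example value from the reply; the backend address is just a
placeholder):

---
# dovecot.conf on the directors: let idle user->backend mappings expire
# sooner, so users can land on a recovered backend again.
director_user_expire = 5 min

# Or move users off a backend immediately (this kills their connections):
doveadm director flush 10.20.30.40
---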


Re: V1 Dovecot POP3 index migrate to V3

2017-04-24 Thread Christian Balzer

Hello,

On Mon, 24 Apr 2017 14:48:01 +0300 Umut Arus wrote:

> Hi,
> 
> I have a very old Dovecot version, so I'm trying to migrate to a new disk
> and the current version of dovecot. But the POP3 accounts have "version 1"
> .INBOX/dovecot-uidlist files, so clients download the messages again.
> 
> How can I avoid clients re-downloading the mailbox after the copy to the new
> disk? Is there any script or something to migrate the index from v1 to v3?
> 
> thanks for your suggestions.
> 
Have you actually looked at the configuration file in question
(20-pop3.conf)?
Setting "pop3_uidl_format = %v.%u" should do the trick, and if you're using
maildir (we do) you can additionally set "pop3_save_uidl = yes" and change to
a different UIDL format in the future.
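
A sketch of the corresponding 20-pop3.conf bits (for maildir; check what UIDL
format the old server actually used before changing anything):

---
# conf.d/20-pop3.conf
protocol pop3 {
  # Match the UIDL format the old v1 uidlist handed out, so clients don't
  # see new UIDLs and re-download everything.
  pop3_uidl_format = %v.%u

  # Maildir only: remember the UIDLs already sent to clients, so the
  # format can be changed later without another mass re-download.
  pop3_save_uidl = yes
}
---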


Christian
-- 
Christian BalzerNetwork/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


V1 Dovecot POP3 index migrate to V3

2017-04-24 Thread Umut Arus
Hi,

I have a very old Dovecot version, so I'm trying to migrate to a new disk and
the current version of dovecot. But the POP3 accounts have "version 1"
.INBOX/dovecot-uidlist files, so clients download the messages again.

How can I avoid clients re-downloading the mailbox after the copy to the new
disk? Is there any script or something to migrate the index from v1 to v3?

thanks for your suggestions.


Re: Dovecot 2.3 ?

2017-04-24 Thread Reuben Farrelly

Whoops.   I meant from -git.

Reuben


On 24/04/2017 7:54 PM, Aki Tuomi wrote:
> On 24.04.2017 12:30, Ralf Hildebrandt wrote:
>> * Reuben Farrelly :
>>> Hi,
>>>
>>> Is anyone here running dovecot-2.3 from hg?
>> I'm using the daily builds on a low traffic machine. It's proxying
>> traffic to an Exchange IMAP server.
>>
> Please do not run it from hg, as we no longer provide hg repository.
>
> Aki



Re: Dovecot 2.3 ?

2017-04-24 Thread Ralf Hildebrandt
* Aki Tuomi :

> > I'm using the daily builds on a low traffic machine. It's proxying
> > traffic to an Exchange IMAP server.
> >
> 
> Please do not run it from hg, as we no longer provide hg repository.

What I meant to say: I use the daily builds. Fair enough :)

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Re: Dovecot 2.3 ?

2017-04-24 Thread Aki Tuomi


On 24.04.2017 12:30, Ralf Hildebrandt wrote:
> * Reuben Farrelly :
>> Hi,
>>
>> Is anyone here running dovecot-2.3 from hg?
> I'm using the daily builds on a low traffic machine. It's proxying
> traffic to an Exchange IMAP server.
>

Please do not run it from hg, as we no longer provide hg repository.

Aki
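
For anyone who was tracking the hg tree, a rough sketch of switching to git
instead (the GitHub URL is the one referenced elsewhere in this digest; the
build assumes the usual autotools toolchain is installed):

---
git clone https://github.com/dovecot/core.git dovecot-core
cd dovecot-core
./autogen.sh             # generate ./configure from the git checkout
./configure && make
sudo make install
---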


Re: Public Folder Problems

2017-04-24 Thread Kevin
No volunteers? Maybe no one understands the cpanel dovecot setup. I 
certainly don't!


atb

On 10/04/2017 14:08 +0700, Kevin wrote:

Hi,

I've been trying to get a Public Folder namespace working, so far 
without any luck. I've tried in my home folder, outside my home 
folder, with and without acl. I can see the namespace (PUBLIC) in 
Roundcube, but that's as far as I get. I've put a .FirstFolder under 
the location, but it never appears. I've added dirs cur, new, and tmp 
but no difference. No hints from the maillog.


Could some kind soul please put me out of my misery!

# 2.2.28 (bed8434): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-642.15.1.el6.x86_64 x86_64 CentOS release 6.9 (Final)
auth_cache_size = 8 k
auth_mechanisms = plain login
auth_username_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$-=?^_{}~./@+%"
dict {
  expire = sqlite:/usr/local/cpanel/etc/dovecot/dovecot-dict-expire.conf.ext
}
first_valid_uid = 201
lda_mailbox_autocreate = yes
lmtp_save_to_detail_mailbox = yes
mail_access_groups = dovecot
mail_location = maildir:~/mail
mail_plugins = quota quota_clone
mail_prefetch_count = 20
mailbox_list_index = yes
maildir_very_dirty_syncs = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Archives {
auto = subscribe
special_use = \Archive
  }
  mailbox "Deleted Messages" {
auto = no
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox "Sent Messages" {
auto = no
special_use = \Sent
  }
  mailbox Spam {
auto = create
special_use = \Junk
  }
  mailbox Trash {
auto = no
special_use = \Trash
  }
  mailbox not-Spam {
auto = create
  }
  prefix = INBOX.
  separator = .
  type = private
}
namespace one {
  list = yes
  location = maildir:/var/spool/pubmail
  prefix = PUBLIC.
  separator = .
  type = public
}
passdb {
  args = /usr/local/cpanel/bin/dovecot-wrap
  driver = checkpassword
}
plugin {
  acl = vfile
  expire = Trash
  expire2 = Deleted Messages
  expire3 = INBOX.Deleted Messages
  expire4 = INBOX.Trash
  expire_cache = yes
  expire_dict = proxy::expire
  quota_exceeded_message = Mailbox is full / Blocks limit exceeded / Inode limit exceeded
  zlib_save = gz
}
protocols = lmtp imap
service auth {
  unix_listener auth-client {
mode = 0666
  }
}
service config {
  vsz_limit = 2 G
}
service dict {
  unix_listener dict {
group = dovecot
mode = 0660
  }
}
service imap-login {
  client_limit = 500
  inet_listener imap {
address = *,::
  }
  inet_listener imaps {
address = *,::
  }
  process_limit = 50
  process_min_avail = 2
  service_count = 0
  vsz_limit = 128 M
}
service imap {
  process_limit = 512
  vsz_limit = 512 M
}
service lmtp {
  client_limit = 1
  process_limit = 500
  unix_listener lmtp {
group = mail
mode = 0660
user = mailnull
  }
  vsz_limit = 512 M
}
service managesieve-login {
  client_limit = 500
  process_limit = 50
  process_min_avail = 2
  service_count = 0
  vsz_limit = 128 M
}
service managesieve {
  process_limit = 512
  vsz_limit = 512 M
}
service pop3-login {
  client_limit = 500
  inet_listener pop3 {
address = *,::
  }
  inet_listener pop3s {
address = *,::
  }
  process_limit = 50
  process_min_avail = 2
  service_count = 0
  vsz_limit = 128 M
}
service pop3 {
  process_limit = 512
  vsz_limit = 512 M
}
service quota-status {
  executable = quota-status -p postfix
  unix_listener quota-status {
mode = 0666
  }
}
ssl_cert = 
ssl_cipher_list = ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl_key =  # hidden, use -P to show it
ssl_protocols = !SSLv2 !SSLv3
userdb {
  driver = prefetch
}
userdb {
  args = /usr/local/cpanel/etc/dovecot/cpauthd-dict.conf
  driver = dict
}
userdb {
  args = /usr/local/cpanel/bin/dovecot-wrap
  driver = checkpassword
}
protocol imap {
  imap_capability = +NAMESPACE
  imap_idle_notify_interval = 24 mins
  imap_logout_format = in=%i, out=%o, bytes=%i/%o
  mail_max_userip_connections = 20
  mail_plugins = acl quota imap_quota expire imap_zlib quota_clone virtual

  namespace sent {
hidden = yes
list = no
location = virtual:/usr/local/cpanel/etc/dovecot/virtual/sent:INDEX=~/mail/virtual/%u
prefix = sent
separator = .
  }
  namespace spam {
hidden = yes
list = 

Re: Dovecot 2.3 ?

2017-04-24 Thread Ralf Hildebrandt
* Reuben Farrelly :
> Hi,
> 
> Is anyone here running dovecot-2.3 from hg?

I'm using the daily builds on a low traffic machine. It's proxying
traffic to an Exchange IMAP server.

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Dovecot 2.3 ?

2017-04-24 Thread Reuben Farrelly

Hi,

Is anyone here running dovecot-2.3 from hg?

I'm just interested in finding out how people are getting on with it, how 
stable it is, and whether there are any interesting features that make it 
worth spending time looking at.


I'm in a position where I can test with semi-live data (backed up and 
moderately important but not mission critical) but before I take the 
plunge I'd like to find out what the general consensus is.


I see pretty much all bugfixes from 2.2 are going into 2.3 as well...

Is there any timeframe on a beta release?

Thanks,
Reuben


Re: Issue with POP3s TLS/SSL on port 995 on Outlook 2016

2017-04-24 Thread Benny Pedersen

Aki Tuomi skrev den 2017-04-24 10:42:

>>> We consider 2.2.28 stable release.
>>
>> Timo reported one bug in 2.2.28, so not so stable
>
> Do you only consider bug-free software as stable?

dovecot still works for me, version 2.2.29

i know nothing more


Re: Issue with POP3s TLS/SSL on port 995 on Outlook 2016

2017-04-24 Thread Aki Tuomi


On 24.04.2017 11:05, Benny Pedersen wrote:
> Aki Tuomi skrev den 2017-04-24 09:16:
>
>> We consider 2.2.28 stable release.
>
> Timo reported one bug in 2.2.28, so not so stable

Do you only consider bug-free software as stable?

Aki


Re: Issue with POP3s TLS/SSL on port 995 on Outlook 2016

2017-04-24 Thread Benny Pedersen

Aki Tuomi skrev den 2017-04-24 09:16:

> We consider 2.2.28 stable release.

Timo reported one bug in 2.2.28, so not so stable


Re: Issue with POP3s TLS/SSL on port 995 on Outlook 2016

2017-04-24 Thread Aki Tuomi


On 21.04.2017 12:29, Bhushan Bhosale wrote:
> Dear Team,
>
> I'm facing an issue with POP3s TLS/SSL on port 995, but only with Outlook 
> 2016. It's working fine with dovecot v2.2.28 in a test environment.
> Is dovecot v2.2.28 a stable release? I can upgrade from v2.1.17 to v2.2.28 
> in production if it is a stable version.
> Kindly confirm and provide the proper solution.
>
> Thanks and Regards,
>
> Bhushan
> Previous mail: I had faced an issue with email downloading in the email 
> client using POP3s SSL port 995 in dovecot v2.1.17 with Outlook 2016 in the 
> production environment. 
> During troubleshooting in my test environment, I upgraded to dovecot 
> v2.2.28 and changed the parameters "ssl_dh_parameters_length = 2048" and 
> "verbose_ssl = yes"; the issue seems to be resolved in dovecot v2.2.28.
> What can I do to resolve this issue in dovecot v2.1.17? Kindly help.
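
For reference, the two settings mentioned, as they would appear in the SSL
config (2048 is the value from the report; verbose_ssl is only useful while
debugging and can be turned off afterwards):

---
# conf.d/10-ssl.conf (Dovecot 2.2.x)
ssl_dh_parameters_length = 2048
verbose_ssl = yes
---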

We consider 2.2.28 stable release.

Aki


Re: [BUG] client state / Message count mismatch with imap-hibernate and mixed POP3/IMAP access

2017-04-24 Thread Aki Tuomi


On 24.04.2017 04:05, Christian Balzer wrote:
> Hello,
>
> On Thu, 6 Apr 2017 20:04:38 +0900 Christian Balzer wrote:
>
>> Hello Aki, Timo,
>>
>> according to git this fix should be in 2.2.27, which I'm running, so I
>> guess this isn't it or something else is missing.
>> See:
>> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=859700
>>
> Any update on this?
>
> Christian
>
>> Regards,
>>
>> Christian
>>
>> On Thu, 6 Apr 2017 15:37:33 +0900 Christian Balzer wrote:
>>
>>> Hello,
>>>
>>> On Thu, 6 Apr 2017 09:24:23 +0300 Aki Tuomi wrote:
>>>   
 On 06.04.2017 07:02, Christian Balzer wrote:
> Hello,
>
> this is on a Debian Jessie box, dovecot 2.2.27 from backports,
> imap-hibernate obviously enabled.
>
> I've been seeing a few of these since starting this cluster (see previous
> mail). They all follow the same pattern: a user who accesses their mailbox
> with both POP3 and IMAP deletes mails via POP3, and the IMAP side
> (imap-hibernate really) gets confused and upset about this:
>
> ---
>
> Apr  6 09:55:49 mbx11 dovecot: imap-login: Login: 
> user=, method=PLAIN, rip=xxx.xxx.x.46, 
> lip=xxx.xxx.x.113, mpid=951561, secured, session=<2jBV+HRM1Pbc9w8u>
> Apr  6 10:01:06 mbx11 dovecot: pop3-login: Login: 
> user=, method=PLAIN, rip=xxx.xxx.x.46, 
> lip=xxx.xxx.x.41, mpid=35447, secured, session=
> Apr  6 10:01:07 mbx11 dovecot: pop3(redac...@gol.com): Disconnected: 
> Logged out top=0/0, retr=0/0, del=1/1, size=20674 
> session=
> Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Error: 
> imap-master: Failed to import client state: Message count mismatch after 
> handling expunges (0 != 1)
> Apr  6 10:01:07 mbx11 dovecot: imap(redac...@gol.com): Client state 
> initialization failed in=0 out=0 head=<0> del=<0> exp=<0> trash=<0> 
> session=<2jBV+HRM1Pbc9w8u>
> Apr  6 10:01:15 mbx11 dovecot: imap-login: Login: 
> user=, method=PLAIN, rip=xxx.xxx.x.46, 
> lip=xxx.xxx.x.113, mpid=993303, secured, session=<6QC6C3VMF/jc9w8u>
> Apr  6 10:07:42 mbx11 dovecot: imap-hibernate(redac...@gol.com): 
> Connection closed in=85 out=1066 head=<0> del=<0> exp=<0> trash=<0> 
> session=<6QC6C3VMF/jc9w8u>
>
> ---
>
> Didn't see these errors before introducing imap-hibernate, but then again
> this _could_ be something introduced between 2.2.13 (previous generation
> of servers) and .27, but I highly doubt it.
>
> My reading of the log is that the original IMAP session
> (<2jBV+HRM1Pbc9w8u>) fell over and terminated, resulting in the client
> starting up a new session.
> If so, and with no false data/state transmitted to the client, it would not
> be ideal, but not a critical problem either.
>
> Would be delighted if Aki or Timo could comment on this.
>
>
> If you need any further data, let me know.
>
> Christian  
 Hi!

 You could try updating to 2.2.28, if possible. I believe this bug is
 fixed in 2.2.28, with
 https://github.com/dovecot/core/commit/1fd44e0634ac312d0960f39f9518b71e08248b65
 
>>> Ah yes, that looks like the culprit indeed.
>>>
>>> I shall poke the powers that be over at Debian bugs to expedite the 2.2.28
>>> release and backport.
>>>
>>> Christian  
>>
>

It seems we still have some of these unsquashed; we'll look into this
further.
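
A quick way to double-check which releases contain the fix referenced
earlier in the thread (assuming a local clone of the GitHub repository linked
above):

---
git clone https://github.com/dovecot/core.git && cd core
# List the tags (releases) that already contain the commit.
git tag --contains 1fd44e0634ac312d0960f39f9518b71e08248b65
---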

Aki