Re: Director+NFS Experiences

2017-02-24 Thread Zhang Huangbin

> On Feb 25, 2017, at 3:28 AM, Mark Moseley  wrote:
> 
> Attached. No claims are made on the quality of my code :)

Thank you for sharing. :)

Some suggestions:

- Replace the custom log() function with the standard logging module, e.g.
  logging.debug(...).
- Add managesieve support.
- Add LMTP support.
- How about storing the command line options in a config file, and dropping
  the 'optparse' module?
- Add email notification support for when a server goes up or down.
- Lots of PEP8 style issues. :)
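The logging and config-file suggestions could look roughly like this (a minimal sketch; the section and option names are illustrative, not taken from the attached script):

```python
import configparser
import logging

# Standard logging instead of a hand-rolled log() function
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("poolmon")

def load_config(path):
    """Read options from an INI file instead of optparse flags."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return {
        "interval": cfg.getint("poolmon", "interval", fallback=60),
        "services": cfg.get("poolmon", "services",
                            fallback="imap pop3 lmtp sieve").split(),
    }
```

Defaults via fallback= keep the script runnable even with an empty config file.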

Would you like to publish this code on GitHub/Bitbucket/…?


Zhang Huangbin, founder of iRedMail project: http://www.iredmail.org/
Time zone: GMT+8 (China/Beijing).
Available on Telegram: https://t.me/iredmail


Re: sieve_imapsieve centos 7

2017-02-24 Thread Christian Kivalo

On 2017-02-24 23:16, Matthew Broadhead wrote:

I am using CentOS 7 (centos-release-7-3.1611.el7.centos.x86_64) with
Dovecot dovecot-2.2.10-7.el7.x86_64. I am trying to set up AntiSpam
with IMAPSieve, but the package seems to be lacking sieve_imapsieve. Is
there anything I can do? I am not really interested in compiling from
source, because I like to receive security updates automatically.

The imapsieve plugin for Pigeonhole was introduced in version 0.4.14, for
Dovecot 2.2.24. With your current packages, I'd say there is nothing you
can do except find some sort of extra-packages repository (I'm not familiar
with CentOS...)


http://dovecot.markmail.org/message/mggbfw6vxhs2upa7?q=imapsieve=2


2017-02-24 21:57:00 auth: Error: net_connect_unix(anvil-auth-penalty)
failed: Permission denied
2017-02-24 21:57:00 master: Warning: Killed with signal 15 (by pid=1
uid=0 code=kill)
2017-02-24 21:57:00 managesieve: Fatal: Plugin 'sieve_imapsieve' not
found from directory /usr/lib64/dovecot/sieve
2017-02-24 21:57:00 config: Error: managesieve-login: dump-capability
process returned 89
2017-02-24 21:57:05 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:05 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:08 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:08 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:10 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:10 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot


--
 Christian Kivalo


sieve_imapsieve centos 7

2017-02-24 Thread Matthew Broadhead
I am using CentOS 7 (centos-release-7-3.1611.el7.centos.x86_64) with
Dovecot dovecot-2.2.10-7.el7.x86_64. I am trying to set up AntiSpam
with IMAPSieve, but the package seems to be lacking sieve_imapsieve. Is
there anything I can do? I am not really interested in compiling from
source, because I like to receive security updates automatically.


2017-02-24 21:57:00 auth: Error: net_connect_unix(anvil-auth-penalty)
failed: Permission denied
2017-02-24 21:57:00 master: Warning: Killed with signal 15 (by pid=1
uid=0 code=kill)
2017-02-24 21:57:00 managesieve: Fatal: Plugin 'sieve_imapsieve' not
found from directory /usr/lib64/dovecot/sieve
2017-02-24 21:57:00 config: Error: managesieve-login: dump-capability
process returned 89
2017-02-24 21:57:05 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:05 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:08 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:08 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:10 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot
2017-02-24 21:57:10 imap: Fatal: Plugin 'imap_sieve' not found from
directory /usr/lib64/dovecot


Re: Quota usage value shows 140% of actual disk usage

2017-02-24 Thread Karsten Heiken
Am 24.02.2017 um 16:00 schrieb Steffen Kaiser:
> 
> Quota does not count physical usage, but the amount of bytes allocated by 
> the messages. Maildir may hardlink messages; hence, they count multiple times 
> for the quota, but only once for du.

And in your case, Dovecot even compresses the mails: according to your
doveconf output, you are using mail_plugins = [...] zlib.

Dovecot's quota is calculated using the uncompressed size, whereas du shows you 
the space actually allocated.





Re: Director+NFS Experiences

2017-02-24 Thread Mark Moseley
On Fri, Feb 24, 2017 at 11:41 AM, Francisco Wagner C. Freire <
wgrcu...@gmail.com> wrote:

> In our experience, a ring with more than 4 servers is bad; we had sync
> problems everywhere. Using 4 or fewer works perfectly.
>
> Em 24 de fev de 2017 4:30 PM, "Mark Moseley" 
> escreveu:
>
>> >
>> > On Thu, Feb 23, 2017 at 3:15 PM, Timo Sirainen  wrote:
>> >
>> >> On 24 Feb 2017, at 0.08, Mark Moseley  wrote:
>> >> >
>> >> > As someone who is about to begin the process of moving from maildir
>> to
>> >> > mdbox on NFS (and therefore just about to start the
>> 'director-ization'
>> >> of
>> >> > everything) for ~6.5m mailboxes, I'm curious if anyone can share any
>> >> > experiences with it. The list is surprisingly quiet about this
>> subject,
>> >> and
>> >> > articles on google are mainly just about setting director up. I've
>> yet
>> >> to
>> >> > stumble across an article about someone's experiences with it.
>> >> >
>> >> > * How big of a director cluster do you use? I'm going to have
>> millions
>> >> of
>> >> > mailboxes behind 10 directors.
>> >>
>> >> I wouldn't use more than 10.
>> >>
>> >>
>> > Cool
>>
>
Interesting. That's good feedback. One of the things I wondered about is
whether it'd be better to deploy a 10-node ring or split it into 2x 5-node
rings. Sounds like splitting it up might not be a bad idea. How often would
you see those sync problems (and were they the same errors as I posted or
something else)? And were you running poolmon from every node when you were
seeing sync errors?


Re: Director+NFS Experiences

2017-02-24 Thread Timo Sirainen
On 24 Feb 2017, at 21.29, Mark Moseley  wrote:
> 
> Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
> 10.1.17.3 is being updated before previous update had finished (up -> down)
> - setting to state=down vhosts=0
> Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
> 10.1.17.3 is being updated before previous update had finished (down -> up)
> - setting to state=up vhosts=0
> 
> any idea what would cause that? Is my guess that multiple directors tried
> to update the status simultaneously correct?

Most likely, yes. I'm not sure whether it might also happen if the same server
issues conflicting commands rapidly.


Re: Director+NFS Experiences

2017-02-24 Thread Francisco Wagner C. Freire
In our experience, a ring with more than 4 servers is bad; we had sync
problems everywhere. Using 4 or fewer works perfectly.

Em 24 de fev de 2017 4:30 PM, "Mark Moseley" 
escreveu:

> >
> > On Thu, Feb 23, 2017 at 3:15 PM, Timo Sirainen  wrote:
> >
> >> On 24 Feb 2017, at 0.08, Mark Moseley  wrote:
> >> >
> >> > As someone who is about to begin the process of moving from maildir to
> >> > mdbox on NFS (and therefore just about to start the 'director-ization'
> >> of
> >> > everything) for ~6.5m mailboxes, I'm curious if anyone can share any
> >> > experiences with it. The list is surprisingly quiet about this
> subject,
> >> and
> >> > articles on google are mainly just about setting director up. I've yet
> >> to
> >> > stumble across an article about someone's experiences with it.
> >> >
> >> > * How big of a director cluster do you use? I'm going to have millions
> >> of
> >> > mailboxes behind 10 directors.
> >>
> >> I wouldn't use more than 10.
> >>
> >>
> > Cool
> >
> >
> >
> >> > I'm guessing that's plenty. It's actually split over two datacenters.
> >>
> >> Two datacenters in the same director ring? This is dangerous. If there's
> >> a network connectivity problem between them, they split into two
> separate
> >> rings and start redirecting users to different backends.
> >>
> >
> > I was unclear. The two director rings are unrelated and won't ever need
> to
> > talk to each other. I only mentioned the two rings to point out that all
> > 6.5m mailboxes weren't behind one ring, but rather split between two
> >
> >
> >
> >>
> >> > * Do you have consistent hashing turned on? I can't think of any
> reason
> >> not
> >> > to have it turned on, but who knows
> >>
> >> Definitely turn it on. The setting only exists because of backwards
> >> compatibility and will be removed at some point.
> >>
> >>
> > Out of curiosity (and possibly extremely naive), unless you've moved a
> > mailbox via 'doveadm director', if someone is pointed to a box via
> > consistent hashing, why would the directors need to share that mailbox
> > mapping? Again, assuming they're not moved (I'm also assuming that the
> > mailbox would always, by default, hash to the same value in the
> consistent
> > hash), isn't their hashing all that's needed to get to the right backend?
> > I.e. "I know what the mailbox hashes to, and I know what backend that
> hash
> > points at, so I'm done", in which case, no need to communicate to the
> other
> > directors. I could see that if you moved someone, it *would* need to
> > communicate that mapping. Then the only maps traded by directors would be
> > the consistent hash boundaries *plus* any "moved" mailboxes. Again, just
> > curious.
> >
> >
> Timo,
> Incidentally, on that error I posted:
>
> Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
> 10.1.17.3 is being updated before previous update had finished (up -> down)
> - setting to state=down vhosts=0
> Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
> 10.1.17.3 is being updated before previous update had finished (down -> up)
> - setting to state=up vhosts=0
>
> any idea what would cause that? Is my guess that multiple directors tried
> to update the status simultaneously correct?
>


Re: Director+NFS Experiences

2017-02-24 Thread Mark Moseley
>
> On Thu, Feb 23, 2017 at 3:15 PM, Timo Sirainen  wrote:
>
>> On 24 Feb 2017, at 0.08, Mark Moseley  wrote:
>> >
>> > As someone who is about to begin the process of moving from maildir to
>> > mdbox on NFS (and therefore just about to start the 'director-ization'
>> of
>> > everything) for ~6.5m mailboxes, I'm curious if anyone can share any
>> > experiences with it. The list is surprisingly quiet about this subject,
>> and
>> > articles on google are mainly just about setting director up. I've yet
>> to
>> > stumble across an article about someone's experiences with it.
>> >
>> > * How big of a director cluster do you use? I'm going to have millions
>> of
>> > mailboxes behind 10 directors.
>>
>> I wouldn't use more than 10.
>>
>>
> Cool
>
>
>
>> > I'm guessing that's plenty. It's actually split over two datacenters.
>>
>> Two datacenters in the same director ring? This is dangerous. If there's
>> a network connectivity problem between them, they split into two separate
>> rings and start redirecting users to different backends.
>>
>
> I was unclear. The two director rings are unrelated and won't ever need to
> talk to each other. I only mentioned the two rings to point out that all
> 6.5m mailboxes weren't behind one ring, but rather split between two
>
>
>
>>
>> > * Do you have consistent hashing turned on? I can't think of any reason
>> not
>> > to have it turned on, but who knows
>>
>> Definitely turn it on. The setting only exists because of backwards
>> compatibility and will be removed at some point.
>>
>>
> Out of curiosity (and possibly extremely naive), unless you've moved a
> mailbox via 'doveadm director', if someone is pointed to a box via
> consistent hashing, why would the directors need to share that mailbox
> mapping? Again, assuming they're not moved (I'm also assuming that the
> mailbox would always, by default, hash to the same value in the consistent
> hash), isn't their hashing all that's needed to get to the right backend?
> I.e. "I know what the mailbox hashes to, and I know what backend that hash
> points at, so I'm done", in which case, no need to communicate to the other
> directors. I could see that if you moved someone, it *would* need to
> communicate that mapping. Then the only maps traded by directors would be
> the consistent hash boundaries *plus* any "moved" mailboxes. Again, just
> curious.
>
>
Timo,
Incidentally, on that error I posted:

Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
10.1.17.3 is being updated before previous update had finished (up -> down)
- setting to state=down vhosts=0
Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
10.1.17.3 is being updated before previous update had finished (down -> up)
- setting to state=up vhosts=0

any idea what would cause that? Is my guess that multiple directors tried
to update the status simultaneously correct?
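The intuition in the hashing question above can be illustrated with a toy consistent-hash ring (this is only an illustration of the argument, not Dovecot's actual director implementation; class and method names are made up):

```python
import bisect
import hashlib

class Ring:
    """Toy consistent-hash ring: every node computes the same mapping
    locally, so only explicit overrides ('moved' users) need sharing."""
    def __init__(self, backends, points=100):
        # Each backend gets several points on the ring for even spread
        self._ring = sorted(
            (int(hashlib.md5(f"{b}-{i}".encode()).hexdigest(), 16), b)
            for b in backends for i in range(points))
        self._keys = [k for k, _ in self._ring]
        self.moved = {}  # user -> backend, e.g. set via 'doveadm director move'

    def backend_for(self, user):
        if user in self.moved:  # the only state that must be shared
            return self.moved[user]
        h = int(hashlib.md5(user.encode()).hexdigest(), 16)
        i = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[i][1]
```

Two nodes constructed with the same backend list agree on every unmoved user without exchanging any mapping; what would still have to be carried between them is host up/down state and the moved-user table.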


Re: Director+NFS Experiences

2017-02-24 Thread Mark Moseley
On Thu, Feb 23, 2017 at 3:45 PM, Zhang Huangbin  wrote:

>
> > On Feb 24, 2017, at 6:08 AM, Mark Moseley  wrote:
> >
> > * Do you use the perl poolmon script or something else? The perl script
> was
> > being weird for me, so I rewrote it in python but it basically does the
> > exact same things.
>
> Would you mind sharing it? :)
>
> 
> Zhang Huangbin, founder of iRedMail project: http://www.iredmail.org/
> Time zone: GMT+8 (China/Beijing).
> Available on Telegram: https://t.me/iredmail
>
>

Attached. No claims are made on the quality of my code :)
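For readers without the attachment: the core of such a monitor is essentially a per-backend TCP connect check (a minimal sketch, not the attached script):

```python
import socket

def check_backend(host, port=143, timeout=5):
    """Return True if the backend accepts a TCP connection on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A monitor loop would run this against each backend's IMAP/POP3/LMTP ports and then mark hosts up or down via doveadm.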


poolmon
Description: Binary data


Re: Users with multiple password

2017-02-24 Thread Eirik Rye

On 24/02/2017 15:54, Steffen Kaiser wrote:
> Check out http://wiki2.dovecot.org/PasswordDatabase
> 
>   result_failure = continue
>   result_internalfail = continue
>   result_success = return-ok
> 
> -- Steffen Kaiser

Thanks. I have looked at this, however it would still require the
secondary passdb to be passing the plain-text password on to the backend
in order to constrain the passdb-query to a single result, right?

- Eirik Rye


Re: Quota usage value shows 140% of actual disk usage

2017-02-24 Thread Steffen Kaiser


On Fri, 24 Feb 2017, Umut Erol Kaçar wrote:


investigated and found that Dovecot shows a quota usage value roughly
around 140% of actual disk usage. It's also valid on newly created
accounts. My test account for example:


Quota does not count physical usage, but the amount of bytes 
allocated by the messages. Maildir may hardlink messages; hence, they 
count multiple times for the quota, but only once for du.
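The hardlink effect is easy to demonstrate outside Dovecot (a standalone sketch; the file names are made up):

```python
import os
import tempfile

def quota_vs_disk(paths):
    """Sum sizes the way a per-message quota does (every name counts)
    versus the way du does (each inode counted only once)."""
    quota = sum(os.stat(p).st_size for p in paths)
    disk = sum({os.stat(p).st_ino: os.stat(p).st_size for p in paths}.values())
    return quota, disk

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "cur", "msg1")   # pretend Maildir locations
    b = os.path.join(d, "sent", "msg1")
    os.makedirs(os.path.dirname(a))
    os.makedirs(os.path.dirname(b))
    with open(a, "w") as f:
        f.write("x" * 1000)
    os.link(a, b)  # one inode, two names
    quota, disk = quota_vs_disk([a, b])  # quota counts 2000, du-style 1000
```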



The ratio is mostly the same for almost all the other accounts. It can vary
between like 1,3-1,6. So, the gap gets insane when more disk space is used,
say like with 2GB disk usage, Dovecot thinks 3,5GB quota is used...


Hmm, are you sure?


doveadm quota recalc does not fix the issue, it only sets the same value
again (I've checked with tcpdump and saw the query with the same quota


If quota recalc re-creates the same value, please check the hard-link 
stuff.


--
Steffen Kaiser



Re: Users with multiple password

2017-02-24 Thread Steffen Kaiser


On Fri, 24 Feb 2017, Eirik Rye wrote:


2. Multiple passdbs(?)


Check out http://wiki2.dovecot.org/PasswordDatabase

  result_failure = continue
  result_internalfail = continue
  result_success = return-ok
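Put together with two passdb blocks, that might look like this (a sketch; the drivers and file names are illustrative, not from the poster's setup):

```
passdb {
  driver = passwd-file
  args = /etc/dovecot/primary.passwd
  result_failure = continue
  result_internalfail = continue
  result_success = return-ok
}

passdb {
  # consulted only when the first passdb did not match,
  # e.g. per-device or support access tokens
  driver = passwd-file
  args = /etc/dovecot/app-passwords.passwd
}
```

Each file holds one entry per user, so the "exists more than once" limitation applies within a single passdb, not across passdbs.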

--
Steffen Kaiser



Quota usage value shows 140% of actual disk usage

2017-02-24 Thread Umut Erol Kaçar
Hello everyone,

Our server has these installed:

dovecot-2.0.21-2.el6.x86_64
dovecot-pigeonhole-2.0.21-2.el6.x86_64
dovecot-mysql-2.0.21-2.el6.x86_64

...and has been running for quite a long time, with several hundred domains
and thousands of accounts on it. My colleagues reported that it's been
showing quota usage values that are more than actual disk usage, so I
investigated and found that Dovecot shows a quota usage value roughly
around 140% of actual disk usage. It's also valid on newly created
accounts. My test account for example:

doveadm quota get -u test@example.local
Quota name TypeValue  Limit %
User quota STORAGE  4359 512000 0
User quota MESSAGE 7  - 0

du -sc /home/vmail/example.local/test/Maildir/{*,.[!.]*}
1044/home/vmail/example.local/test/Maildir/cur
28  /home/vmail/example.local/test/Maildir/dovecot.index.cache
8   /home/vmail/example.local/test/Maildir/dovecot.index.log
4   /home/vmail/example.local/test/Maildir/dovecot.mailbox.log
4   /home/vmail/example.local/test/Maildir/dovecot-uidlist
4   /home/vmail/example.local/test/Maildir/dovecot-uidvalidity
0   /home/vmail/example.local/test/Maildir/dovecot-uidvalidity.56a4dc8e
4   /home/vmail/example.local/test/Maildir/new
4   /home/vmail/example.local/test/Maildir/subscriptions
4   /home/vmail/example.local/test/Maildir/tmp
24  /home/vmail/example.local/test/Maildir/.Junk
1932/home/vmail/example.local/test/Maildir/.Sent
44  /home/vmail/example.local/test/Maildir/.Trash
3104total

4359/3104 = 1.40431701

So it shows roughly 1.4 * actual disk usage.

The ratio is about the same for almost all other accounts; it varies between
roughly 1.3 and 1.6. So the gap grows as more disk space is used: with 2 GB of
actual disk usage, Dovecot thinks 3.5 GB of quota is used...

doveadm quota recalc does not fix the issue; it only sets the same value
again (I've checked with tcpdump and saw the query with the same quota
usage value).

The method is Dictionary quota with SQL.

I'm attaching the dovecot -n output with some other config files.

I've tried setting messages and bytes value to -1 on the MariaDB database
to force recalculation. But as soon as I run doveadm quota recalc, it gets
the same wrong value again.

What can I do to fix this?

Thanks in advance.
##
## Quota configuration.
##

# Note that you also have to enable quota plugin in mail_plugins setting.
# 

##
## Quota limits
##

# Quota limits are set using "quota_rule" parameters. To get per-user quota
# limits, you can set/override them by returning "quota_rule" extra field
# from userdb. It's also possible to give mailbox-specific limits, for example
# to give additional 100 MB when saving to Trash:

plugin {
  quota_rule = *:storage=500M
  quota_rule2 = Trash:storage=+10%%
  quota_rule3 = Spam:storage=+20%%
}

##
## Quota warnings
##

# You can execute a given command when user exceeds a specified quota limit.
# Each quota root has separate limits. Only the command for the first
# exceeded limit is executed, so put the highest limit first.
# The commands are executed via script service by connecting to the named
# UNIX socket (quota-warning below).
# Note that % needs to be escaped as %%, otherwise "% " expands to empty.

plugin {
  quota_warning = storage=99%% quota-warning 99 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
}

# Quota Warning service

service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  user = vmail
  unix_listener quota-warning {
user = vmail
group = vmail
mode = 0660
  }
}


##
## Quota backends
##

# Multiple backends are supported:
#   dirsize: Find and sum all the files found from mail directory.
#Extremely SLOW with Maildir. It'll eat your CPU and disk I/O.
#   dict: Keep quota stored in dictionary (eg. SQL)
#   maildir: Maildir++ quota
#   fs: Read-only support for filesystem quota

plugin {
  #quota = dirsize:User quota
  #quota = maildir:User quota
  quota = dict:User quota::proxy::sqlquota
  #quota = dict:User quota::proxy::quota
  #quota = fs:User quota
}

# Multiple quota roots are also possible, for example this gives each user
# their own 100MB quota and one shared 1GB quota within the domain:
plugin {
  #quota = dict:user::proxy::quota
  #quota2 = dict:domain:%d:proxy::quota_domain
  #quota_rule = *:storage=102400
  #quota2_rule = *:storage=1048576
  quota_status_success = DUNNO
  quota_status_nouser = DUNNO
  quota_status_overquota = "552 5.2.2 Mailbox is full"
}

connect = host=192.168.95.8 dbname=postfix_masterdb user=postfix_user 
password=someStrongPassword

map {
  pattern = priv/quota/storage
  table = quota
  username_field = username
  value_field = bytes
}

map {
  pattern = priv/quota/messages
  table = quota
  username_field = username
  value_field = messages
}
# 2.0.21: /etc/dovecot/dovecot.conf
# OS: Linux 

Users with multiple password

2017-02-24 Thread Eirik Rye

Hi!

~ dovecot --version
2.2.22 (fe789d2)

I am wondering if there is a way to set up virtual users with multiple 
valid passwords. We want to be able to provide users with 
device/app-specific passwords for their email accounts, as well as being 
able to create temporary "access tokens" for technical support when 
required.


I quickly found out that a passdb using passwd-file or an SQL backend does 
not support returning multiple entries ("Error: passwd-file 
/etc/dovecot/virtual.passwd: User rye exists more than once").

The documentation mentions that you can pass the plain-text password on 
to the MySQL server for verification, and I suppose multiple passwords 
could work, given a query like this (pseudo-SQL):

`SELECT password FROM account WHERE user = '%u' AND domain = '%d' AND 
password = TO_BASE64(SHA2('%w', 512));`


However, having Dovecot pass the plain-text password and letting the 
database deal with the hashing and encoding doesn't seem like a very 
"clean" solution. Preferably, dovecot should be the only piece of 
software touching the plain-text.


Ideally, I would like the following behavior:

1. passdb returns multiple possible hashed passwords for the user
2. dovecot attempts the passwords in order
3. login fails normally if none of the passdb results match

Does anyone have any experience, or tips for setting up this type of 
behavior?


Other ideas we have touched upon are:

1. Different usernames (eg. 'user_device' or 'user_application')
2. Multiple passdbs(?)

Best regards,
Eirik Rye


[Dovecot-news] v2.2.28 released

2017-02-24 Thread Timo Sirainen
http://dovecot.org/releases/2.2/dovecot-2.2.28.tar.gz
http://dovecot.org/releases/2.2/dovecot-2.2.28.tar.gz.sig

 * director: "doveadm director move" to same host now refreshes user's
   timeout. This allows keeping user constantly in the same backend by
   just periodically moving the user there.
 * When new mailbox is created, use initially INBOX's
   dovecot.index.cache caching decisions.
 * Expunging mails writes GUID to dovecot.index.log now only if the
   GUID is quickly available from index/cache.
 * pop3c: Increase timeout for PASS command to 5 minutes.
 * Mail access errors are no longer ignored when searching or sorting.
   With IMAP the untagged SEARCH/SORT reply is still sent the same as
   before, but NO reply is returned instead of OK.

 + Make dovecot.list.index's filename configurable. This is needed when
   there are multiple namespaces pointing to the same mail root
   (e.g. lazy_expunge namespace for mdbox).
 + Add size.virtual to dovecot.index when folder vsizes are accessed
   (e.g. quota=count). This is mainly a workaround to avoid slow quota
   recalculation performance when message sizes get lost from
   dovecot.index.cache due to corruption or some other reason.
 + auth: Support OAUTHBEARER and XOAUTH2 mechanisms. Also support them
   in lib-dsasl for client side.
 + auth: Support filtering by SASL mechanism: passdb { mechanisms }
 + Shrink the mail processes' memory usage by not storing settings
   duplicated unnecessarily many times.
 + imap: Add imap_fetch_failure setting to control what happens when
   FETCH fails for some mails (see example-config).
 + imap: Include info about last command in disconnection log line.
 + imap: Created new SEARCH=X-MIMEPART extension. It's currently not
   advertised by default, since it's not fully implemented.
 + fts-solr: Add support for basic authentication.
 + Cassandra: Support automatically retrying failed queries if
   execution_retry_interval and execution_retry_times are set.
 + doveadm: Added "mailbox path" command.
 + mail_log plugin: If plugin { mail_log_cached_only=yes }, log the
   wanted fields only if it doesn't require opening the email.
 + mail_vsize_bg_after_count setting added (see example-config).
 + mail_sort_max_read_count setting added (see example-config).
 + pop3c: Added pop3c_features=no-pipelining setting to prevent using
   PIPELINING extension even though it's advertised.

 - Index files: day_first_uid wasn't updated correctly since v2.2.26.
   This caused dovecot.index.cache to be non-optimal.
 - imap: SEARCH/SORT may have assert-crashed in
   client_check_command_hangs
 - imap: FETCH X-MAILBOX may have assert-crashed in virtual mailboxes.
 - imap: Running time in tagged command reply was often wrongly 0.
 - search: Using NOT n:* or NOT UID n:* wasn't handled correctly
 - director: doveadm director kick was broken
 - director: Fix crash when using director_flush_socket
 - director: Fix some bugs when moving users between backends
 - imapc: Various error handling fixes and improvements
 - master: doveadm process status output had a lot of duplicates.
 - autoexpunge: If mailbox's rename timestamp is newer than mail's
   save-timestamp, use it instead. This is useful when autoexpunging
   e.g. Trash/* and an entire mailbox is deleted by renaming it under
   Trash to prevent it from being autoexpunged too early.
 - autoexpunge: Multiple processes may have been trying to expunge the
   same mails simultaneously. This was problematic especially with
   lazy_expunge plugin.
 - auth: %{passdb:*} was empty in auth-worker processes
 - auth-policy: hashed_password was always sent empty.
 - dict-sql: Merge multiple UPDATEs to a single statement if possible.
 - fts-solr: Escape {} chars when sending queries
 - fts: fts_autoindex_exclude = \Special-use caused crashes
 - doveadm-server: Fix leaks and other problems when process is reused
   for multiple requests (service_count != 1)
 - sdbox: Fix assert-crash on mailbox create race
 - lda/lmtp: deliver_log_format values weren't entirely correct if Sieve
   was used. especially %{storage_id} was broken.
 - lmtp_user_concurrency_limit didn't work if userdb changed username

___
Dovecot-news mailing list
Dovecot-news@dovecot.org
http://dovecot.org/cgi-bin/mailman/listinfo/dovecot-news



doveadm "-v" option doesn't do anything?

2017-02-24 Thread Ben

Hi,

On 2.2.10, running the following:

doveadm -v -o mail_fsync=never backup -R -u m...@example.com imapc:

there is zero output.

Running with -D instead of -v, it spews out debug messages.

Any ideas?

Ben


Re: Replacement for antispam plugin

2017-02-24 Thread Trever L. Adams
On 02/12/2017 05:28 PM, Stephan Bosch wrote:
>
> Actually, Pigeonhole should be able to do that too:
>
> https://github.com/dovecot/pigeonhole/blob/master/doc/plugins/sieve_extprograms.txt#L112
>
> Yes, I need to update the wiki.
>
>
> Regards,
>
> Stephan.
>
For DSPAM, with --client, one also needs a --user set.
http://hg.dovecot.org/dovecot-antispam-plugin/file/5ebc6aae4d7c/src/dspam.c
did this.

Is there a way to feed this into the scripts mentioned? I imagine this
is imap.user or imap.email, but how would one pass it to the script?
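One possible approach, based on the sieve_extprograms and imapsieve documentation (an untested sketch; the script name "dspam-learn" is illustrative): the imapsieve environment provides imap.user, which can be captured with the variables extension and passed to the external program as an argument:

```
require ["vnd.dovecot.pipe", "copy", "environment", "variables"];

# imapsieve makes the logged-in user available as an environment item
if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

# hand the username to the external script as its first argument
pipe :copy "dspam-learn" ["${username}"];
```

The wrapper script would then invoke dspam with --client --user "$1" plus the appropriate --class/--source flags.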

Thank you.

Trever






Re: Sieve removeflag Action

2017-02-24 Thread Thomas Leuxner
* Stephan Bosch  2017.02.24 10:20:

> Could you show me your full configuration (`dovecot -n`)?
> 
> Regards,
> 
> Stephan

Live configuration and scripts sent off-list.

Regards
Thomas




Re: Sieve removeflag Action

2017-02-24 Thread Stephan Bosch
Op 2/22/2017 om 8:40 PM schreef Thomas Leuxner:
> * Thomas Leuxner  2017.02.20 12:56:
>
>> Feb 20 07:00:23 nihlus dovecot: master: Dovecot v2.2.devel (8f42a89) 
>> starting up for imap, lmtp
>>
>> This one processed the dovecot-news mail for 2.2.28.rc1 fine which uses a 
>> similar sieve rule. I will monitor global rules with this build and report 
>> back.
> The results seem to be arbitrary. Personal rules _always_ work for flag 
> actions; globally included rules only work _sometimes_. This means that one 
> day the same rule may trigger, and the next day it won't. I found no way to 
> reproduce it for global rules in order to narrow down the issue.

Could you show me your full configuration (`dovecot -n`)?

Regards,

Stephan