Re: The end of Dovecot Director?

2022-11-01 Thread Mark Moseley
TL;DR: 

Sure, this affects medium/large/Enterprise folks (that's where I was using
Director -- though currently retired, so no existing self-interest in this
email).

This will also affect *any* installation with a whopping two dovecot
servers with mdbox backends talking to a single Linux NFS server.
That's not exactly "Enterprise". Replication is great, but it is not a
replacement for Director (nor is any sort of load balancing, regardless of
the confused comments in this thread about nginx).

I think the real issue here is that Dovecot is removing *existing,
long-standing, critical functionality* from the open source version. That
is a huge, huge red flag.

I'm also a little bewildered by the comment "Director never worked
especially well". Worked great for me, at scale, for years. Complex? Yup,
but that was the price of mdbox (worth it). And if you're setting up a
proxy cluster (instead of a full Director cluster) in front of your IMAP
servers, you've already tackled 90% of the complexity anyway (i.e. using
Director isn't the hard part).

This *feels" to me like a parent company looking to remove features from
the open source version in order to add feature differentiation to the paid
version.

I've loved the Dovecot project for over a decade and a half. And
incidentally I have a very warm spot in my heart for Timo and Aki, thanks
to Dovecot and especially this mailing list.

I've also loved the PowerDNS project for a decade and a half, so this
removal of *existing functionality* is doubly worrisome. I'd like both
projects to be monetisable and profitable enough for their parent companies
that they continue on for a very, very long time.

But removing long-standing features is a bad look. Please reconsider this
decision.


On Thu, Oct 27, 2022 at 4:04 AM Jan Bramkamp  wrote:

> On 27.10.22 04:24, Timo Sirainen wrote:
> > Director never worked especially well, and for most use cases it's just
> unnecessarily complex. I think usually it could be replaced with:
> >
> >   * Database (sql/ldap/whatever) containing user -> backend table.
> >   * Configure Dovecot proxy to use this database as passdb.
> >   * For HA change dovemon to update the database if backend is down to
> move users elsewhere
> >   * When backend comes up, move users into it. Set delay_until extra
> field for user in passdb to 5 seconds into future and kick the user in its
> old backend (e.g. via doveadm HTTP API).
> >
> > All this can be done with existing Dovecot. Should be much easier to
> build a project doing this than forking director.
> Thank you for putting what is about to be lost to the community edition
> into an operational perspective: no reason to panic. Nobody is taking
> replicated active-passive pairs from small to medium scale operators.
> Neither are the hooks required for more fancy load balancing and
> steering on the chopping block.
>


Re: TLS connection closed unexpectedly

2022-01-07 Thread Mark Moseley
On Fri, Jan 7, 2022 at 1:34 PM John Fawcett  wrote:

> On 07/01/2022 21:03, Ken Wright wrote:
> > On Fri, 2022-01-07 at 18:50 +0100, John Fawcett wrote:
> >> it may or may not be related to the tls issue, but I think you will
> >> want to investigate that message about the SQL query syntax error.
> >> You are not going to be able to login if the query is giving errors.
> >> Check whether the log reveals the cause.
> > Know anything about SQL queries, John?  Here's the user query in
> > question:
> >
> > user_query = SELECT maildir, 2000 AS uid, 2000 AS gid FROM mailbox
> > WHERE username = '%u' AND active='1'
> >
> > I copied this directly from the tutorial I've been following and this
> > is the first time I've seen this error.
> >
> Hi Ken
>
> looks fine to me. However, mariadb is not accepting it. I suggest you
> run with auth_debug = yes and check the logs.
>
>

Does it help at all if you use backticks around the column names for uid
and gid? I.e.

from:
user_query = SELECT maildir, 2000 AS uid, 2000 AS gid FROM mailbox WHERE
username = '%u' AND active='1'

to:
user_query = SELECT maildir, 2000 AS `uid`, 2000 AS `gid` FROM mailbox
WHERE username = '%u' AND active='1'
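
For what it's worth, a quick way to see what MariaDB is actually objecting
to is to run the query by hand in the mysql client, with %u replaced by a
real address (the address below is just a placeholder):

-- hypothetical sanity check, run directly in the mariadb/mysql client
SELECT maildir, 2000 AS `uid`, 2000 AS `gid`
FROM mailbox
WHERE username = 'someone@example.com' AND active = '1';

If that runs cleanly by hand, the syntax error is more likely coming from
how the query is expanded or quoted on the Dovecot side than from the SQL
itself.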


Re: Why Last-login?

2021-03-03 Thread Mark Moseley
On Wed, Mar 3, 2021 at 11:16 AM @lbutlr  wrote:

> On 03 Mar 2021, at 05:33, Yassine Chaouche 
> wrote:
> >> Am I missing some reason I would need/want to keep track of that
> specific login time separately?
>
> > What about mbox files ?
>
> Is anyone foolish enough to use mbox in 2021?
>
> It's designed for dozens of kilobytes of mail. Perhaps hundreds of
> kilobytes. It is a horrible, horrible format for hundreds of megabytes of
> mail, it offers no advantages at all, and is fragile to corruption since it
> stores everything in a single file.
>
>

Specific to the 'why use last login' question, with millions of mailboxes,
walking the filesystem is more than a little onerous (having done it many
times over the years, and never remembering where I put the script from
'last time') and takes a good chunk of a day to run. We were doing
file-based last-login for a while (yeah, still needs a fs walk, but at
least it's dead simple and requires no stat()'ing), till locking became an
issue (NFS). We moved to redis a couple of months ago, and now determining
things like "who hasn't logged into anything in 30 days" becomes a one-minute
run of a python script using redis SCAN.
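
The script is nothing fancy. A rough sketch, assuming the last-login data
lands in keys like "last-login/<user>" holding a unix timestamp (adjust the
match pattern and parsing to however your dict mapping actually stores it):

#!/usr/bin/env python3
# Sketch only: assumes keys like "last-login/<user>" whose value is a
# unix timestamp. Adjust the pattern/parsing to your dict mapping.
import time
import redis

CUTOFF_DAYS = 30

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
cutoff = time.time() - CUTOFF_DAYS * 86400

stale = []
# scan_iter walks the keyspace incrementally instead of blocking with KEYS
for key in r.scan_iter(match="last-login/*", count=1000):
    value = r.get(key)
    if value is None:
        continue
    try:
        last_seen = float(value)
    except ValueError:
        continue
    if last_seen < cutoff:
        stale.append(key.split("/", 1)[1])

for user in sorted(stale):
    print(user)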

If you don't have a mountain of mailboxes and fs-walking isn't a problem,
then there's definitely less need -- assuming you don't have management
repeatedly asking for 'active mailboxes' ;)


Feature Request: Redis support for username and TLS

2020-11-03 Thread Mark Moseley
I was wondering if there was any imminent support in 2.3.12+ for using a
username to log into Redis, as well as support for using TLS to connect to
Redis. And if not, I'd like to put in a feature request for those two
things (AUTH with username/password, and TLS connections to Redis).

Specifically, I was looking at using a username/password combo to log into
Redis for the quota_clone plugin. I found the 'password' param in the
source (not documented at https://wiki.dovecot.org/Dictionary). There's no
'username' param (the 'username' in the source seems to refer to the
mailbox, for the purpose of building the key name).
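
For context, the relevant config today looks roughly like this (a sketch,
not a full config; 'password' is the undocumented param from the source
mentioned above, and there's no username or TLS equivalent that I can find):

mail_plugins = $mail_plugins quota quota_clone

plugin {
  quota_clone_dict = redis:host=127.0.0.1:port=6379:password=secret
}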

Redis 6 supports authenticating with a username and password, as well as
the ability to listen on a TLS-enabled port. Both of these significantly
improve security, combined with the new ACL system.

Obviously, these Redis 6 features are brand new, so I'd be shocked if they
were already supported. But it'd be awesome if those were added to Dovecot
:)

Currently, I've got a localhost Envoy proxy doing TCP proxying from
localhost+non-TLS to my Redis TLS port, which is a kludge at best. There's
a neat Envoy Redis proxy that *almost* does the trick, but it unfortunately
doesn't support MULTI/EXEC, which Dovecot quota_clone uses; otherwise I'd be
using that instead of a plain TCP proxy (since the Envoy Redis proxy can use
a username/password + TLS to connect to the upstream Redis).
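
In case it helps anyone else, the kludge is roughly the following shape
(Envoy v3 config; listener/cluster names, addresses and ports are all
placeholders). Dovecot talks plaintext to 127.0.0.1:6380 and Envoy handles
the TLS hop to the real Redis:

static_resources:
  listeners:
  - name: redis_plaintext_local
    address:
      socket_address: { address: 127.0.0.1, port_value: 6380 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: redis_tcp
          cluster: redis_tls_upstream
  clusters:
  - name: redis_tls_upstream
    type: STRICT_DNS
    connect_timeout: 1s
    load_assignment:
      cluster_name: redis_tls_upstream
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: redis.example.com, port_value: 6379 }
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext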


Re: Sieve filter script EXECUTION FAILED

2020-10-30 Thread Mark Moseley
On Fri, Oct 30, 2020 at 11:34 AM @lbutlr  wrote:

> On 30 Oct 2020, at 11:57, Aki Tuomi  wrote:
> > But I think the sed here is missing 's' from start, so this does not
> actually do anything...
>
> Copy/paste/edit error. The s is there in the file.
>
> darkmode.sh:
> #!/bin/sh
> echo $1 | sed -e 's||* {color:white !important;
> background-color: black !important; } |'
>
> I am not sure about the $1. I think filter just pipes the message (or part
> of the message).
>
> I will see what happens without the echo I suppose.
>
> Nope, still the same.
>
>   32:   starting `:contains' match with `i;ascii-casemap' comparator:
>   32:   matching value ` lang="en">29-Oct-2020 "" 

Re: LMTP Authentication Error

2020-10-11 Thread Mark Moseley
On Sat, Oct 10, 2020 at 12:08 PM David Morsberger 
wrote:

> I wish someone could help me. I’m trying to track auth in the lmtp code.
> Nice code base but I’m having trouble tracking the call stack for the error
>
> Sent from my iPhone
>
> > On Oct 9, 2020, at 08:00, David Morsberger  wrote:
> >
> > Alexander,
> >
> > Do you see anything wrong in my config?
> >
> > David
> >
> > Sent from my iPhone
> >
> >> On Oct 7, 2020, at 18:19, David Morsberger 
> wrote:
> >> On 2020-10-07 12:43, Alexander Dalloz wrote:
> > Am 07.10.2020 um 18:20 schrieb da...@mmpcrofton.com:
> > Any ideas on how to resolve the Userdb connect/lookup problem? My
> users are pinging me on Sieve support.
> > Thanks,
> > David
> >>> Provide a full output of "doveconf -n"?
> >>> Alexander
> >> Alexandar,
> >> Thanks and here you go.
> >> # 2.3.7.2 (3c910f64b): /etc/dovecot/dovecot.conf
> >> # Pigeonhole version 0.5.7.2 ()
> >> # OS: Linux 5.4.0-48-generic x86_64 Ubuntu 20.04.1 LTS
> >> # Hostname: mmp-mail.mmpcrofton.com
> >> base_dir = /var/run/dovecot/
> >> first_valid_uid = 150
> >> login_greeting = Dovecot ready.
> >> mail_gid = 150
> >> mail_location = mbox:~/mail:INBOX=/var/mail/%u
> >> mail_privileged_group = mail
> >> mail_uid = 150
> >> managesieve_notify_capability = mailto
> >> managesieve_sieve_capability = fileinto reject envelope
> encoded-character vacation subaddress comparator-i;ascii-numeric relational
> regex imap4flags copy include variables body enotify environment mailbox
> date index ihave duplicate mime foreverypart extracttext
> >> namespace inbox {
> >> inbox = yes
> >> location =
> >> mailbox Drafts {
> >> auto = subscribe
> >> special_use = \Drafts
> >> }
> >> mailbox Junk {
> >> auto = subscribe
> >> special_use = \Junk
> >> }
> >> mailbox Sent {
> >> auto = subscribe
> >> special_use = \Sent
> >> }
> >> mailbox "Sent Messages" {
> >> auto = no
> >> special_use = \Sent
> >> }
> >> mailbox Spam {
> >> auto = create
> >> special_use = \Junk
> >> }
> >> mailbox Trash {
> >> auto = subscribe
> >> special_use = \Trash
> >> }
> >> prefix =
> >> }
> >> passdb {
> >> args = /etc/dovecot/dovecot-sql.conf.ext
> >> driver = sql
> >> }
> >> plugin {
> >> sieve =
> file:/home/mail/rules/%u/;active=/home/mail/rules/%u/.dovecot.sieve
> >> sieve_dir = /home/mail/rules/%u
> >> }
> >> protocols = " imap lmtp sieve pop3 sieve"
> >> service auth {
> >> unix_listener /var/spool/postfix/private/auth {
> >> group = postfix
> >> mode = 0660
> >> user = postfix
> >> }
> >> }
> >> service lmtp {
> >> unix_listener /var/spool/postfix/private/dovecot-lmtp {
> >> group = postfix
> >> mode = 0600
> >> user = postfix
> >> }
> >> }
> >> ssl = required
> >> ssl_cert = <...
> >> ssl_client_ca_dir = /etc/ssl/certs
> >> ssl_dh = # hidden, use -P to show it
> >> ssl_key = # hidden, use -P to show it
> >> userdb {
> >> driver = prefetch
> >> }
> >> userdb {
> >> args = /etc/dovecot/dovecot-sql.conf.ext
> >> driver = sql
> >> }
> >> protocol lmtp {
> >> mail_plugins = " sieve"
> >> postmaster_address = da...@mmpcrofton.com
> >> }
> >> protocol imap {
> >> mail_max_userip_connections = 50
> >> }
>


Pretty sure you can set up multiple unix_listeners. What about creating
another one inside the 'service auth' container? It'll need to have
unix_listener set to 'auth-userdb' (for dovecot's sake, which probably
means that you'll have to leave it with default user/group/permissions) with
a 'path' of /var/run/dovecot. And then rename the existing one to
auth-userdb-postfix (totally arbitrary), though note that that will change
the filename of the socket itself, so you'll need to change postfix to use
/var/spool/postfix/private/auth/auth-userdb-postfix (i.e. the same last
component as the argument to 'unix_listener').

So you'd end up with something like:

service auth {
  unix_listener auth-userdb {
    path = /var/run/dovecot
    mode = 0660   # or whatever the default is
    user = $dovecot_auth_user_dunno_what
    group = $dovecot_auth_group_dunno_what
  }
  unix_listener auth-userdb-postfix {
    path = /var/spool/postfix/private/auth
    mode = 0660
    user = postfix
    group = postfix
  }
}

And then postfix would have /var/spool/postfix/private/auth/auth-userdb-postfix
for its dovecot-related socket.
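
If the socket really does end up at that path, the postfix side of the
change would be something like the following (main.cf; the path is relative
to the postfix queue directory, so adjust if yours differs):

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth/auth-userdb-postfix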


Re: Dovecot permission denied errors on NFS after upgrade to 2.2.17

2020-07-13 Thread Mark Moseley
On Mon, Jul 13, 2020 at 7:36 AM Claudio Corvino 
wrote:

> Thanks Jochen,
>
> no mixups present at all, file assigned to UID 501.
>
> Since this problem started few hours after the Debian upgrade, I think
> it is related to it.
>
> I don't know if something has changed on the NFS client side on Debian,
> but I don't think so as aptlistchanges didn't notify me about it, nor if
> Dovecot 2.2.17 treat NFS in other way.
>
> I'm stuck.
>
> On 13/07/20 16:07, Jochen Bern wrote:
> > On 07/13/2020 03:45 PM, Claudio Corvino wrote:
> >> in addition the "permission denied" error is
> >> random, most of the time Dovecot works well.
> > In *that* case, I'd say that UID/GID mapping problems can be ruled out.
> >
> >> How can I check the mappings NFS uses?
> > You don't have any relevant options in the client's fstab entry, and
> > I'll assume that there are none in the server's /etc/exports, either.
> > That leaves only potential default mappings, which should be documented
> > in the corresponding manpages.
> >
> > Also, since there's only *one* user/group involved, you can always
> > "chown" a test file on one side and check with "ls -n" on the other to
> > verify whether there are mixups.
> >
> > *Intermittent* failures of an NFS mount over a well-functioning LAN ...
> > I'm thinking "file locking" now, but that's a *complicated* topic, to
> > say the least ...
> >
> > https://en.wikipedia.org/wiki/File_locking#Problems
> >
> https://unix.stackexchange.com/questions/553645/how-to-install-nfslock-daemon
> >
> > Regards,
>
>

This is just me throwing things out to look at, but did the client mount
use NFSv3 before the upgrade, while the newly upgraded client mounts with
NFSv4? Sometimes a version change like that can cause weirdness with ID
mapping.
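
One quick way to check (with the standard NFS tooling on Debian) is to
compare the negotiated version in the mount options on the old vs new
client and look for vers=3 vs vers=4.x:

nfsstat -m
# or
grep nfs /proc/mounts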


Re: Cert for ip range?

2019-11-27 Thread Mark Moseley via dovecot
On Wed, Nov 27, 2019 at 11:31 AM Aki Tuomi 
wrote:

>
> > On 27/11/2019 21:28 Mark Moseley via dovecot 
> wrote:
> >
> >
> > On Tue, Nov 26, 2019 at 11:22 PM Aki Tuomi via dovecot <
> dovecot@dovecot.org> wrote:
> > >
> > >  On 21.11.2019 23.57, Marc Roos via dovecot wrote:
> > >  > Is it possible to configure a network for a cert instead of an ip?
> > >  >
> > >  > Something like this:
> > >  >
> > >  > local 192.0.2.0 {
> > >  > ssl_cert = <...
> > >  > ssl_key = <...
> > >  > }
> > >  >
> > >  > Or
> > >  >
> > >  > local 192.0.2.0/24 {
> > >  > ssl_cert = <...
> > >  > ssl_key = <...
> > >  > }
> > >  >
> > >  > https://wiki.dovecot.org/SSL/DovecotConfiguration
> > >  >
> > >  >
> > >  >
> > >
> > >  Local part supports that.
> > >
> > >  Aki
> >
> >
> > On the same topic (though I can start a new thread if preferable), it
> doesn't appear that you can use wildcards/patterns in the 'local' name,
> unless I'm missing something--which is quite likely.
> >
> > If it's not possible currently, can I suggest adding that as a feature?
> That is, instead of having to list out all the various SNI hostnames that a
> cert should be used for (e.g. "local pop3.example.com (
> http://pop3.example.com) imap.example.com (http://imap.example.com)
> pops.example.com (http://pops.example.com) pop.example.com (
> http://pop.example.com)  {" -- and on and on), it'd be handy to be
> able to just say "local *.example.com (http://example.com) {" and call it
> a day. I imagine there'd be a bit of a slowdown, since you'd have to loop
> through patterns on each connection (instead of what I assume is a hash
> lookup), esp for people with significant amounts of 'local's.
> >
>
> Actually that is supported, but you need to use v2.2.35 or later.
>
>
Ha, it literally *never* fails (that there's some option I've overlooked 10
times, before asking on the list)

'local' vs 'local_name'. Never noticed the difference before in the docs.
Might be worth adding a blurb in
https://wiki.dovecot.org/SSL/DovecotConfiguration that 'local_name' takes
'*'-style wildcards (at least at the beginning of the hostname). I'll resume
my embarrassed silence now. :)
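
For the archives, the working config ends up looking something like this
(cert/key paths are placeholders):

local_name *.example.com {
  ssl_cert = </etc/ssl/certs/wildcard.example.com.crt
  ssl_key  = </etc/ssl/private/wildcard.example.com.key
}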


Re: Cert for ip range?

2019-11-27 Thread Mark Moseley via dovecot
On Tue, Nov 26, 2019 at 11:22 PM Aki Tuomi via dovecot 
wrote:

>
> On 21.11.2019 23.57, Marc Roos via dovecot wrote:
> > Is it possible to configure a network for a cert instead of an ip?
> >
> > Something like this:
> >
> > local 192.0.2.0 {
> > ssl_cert = <...
> > ssl_key  = <...
> > }
> >
> > Or
> >
> > local 192.0.2.0/24 {
> > ssl_cert = <...
> > ssl_key  = <...
> > }
> >
> > https://wiki.dovecot.org/SSL/DovecotConfiguration
> >
> >
> >
>
> Local part supports that.
>
> Aki
>


On the same topic (though I can start a new thread if preferable), it
doesn't appear that you can use wildcards/patterns in the 'local' name,
unless I'm missing something--which is quite likely.

If it's not possible currently, can I suggest adding that as a feature?
That is, instead of having to list out all the various SNI hostnames that a
cert should be used for (e.g. "local pop3.example.com imap.example.com
pops.example.com pop.example.com  {" -- and on and on), it'd be handy
to be able to just say "local *.example.com {" and call it a day. I imagine
there'd be a bit of a slowdown, since you'd have to loop through patterns
on each connection (instead of what I assume is a hash lookup), esp for
people with significant amounts of 'local's.


Re: Quota and maildir does not work with subfolders of INBOX

2019-09-10 Thread Mark Moseley via dovecot
On Mon, Sep 9, 2019 at 8:57 PM Niels Kobschätzki via dovecot <
dovecot@dovecot.org> wrote:

> On 9/9/19 6:18 PM, @lbutlr via dovecot wrote:
> > On 9 Sep 2019, at 09:27, Niels Kobschätzki 
> wrote:
> >> The moment I remove those folders, the size gets calculated correctly.
> Unfortunately those folders are generated by some clients automatically
> afaik (like .INBOX.Trash)
> >> That sounds like a misconfiguration of the IMAP client. Someone has
> gone in and improperly set INBOX as the IMAP path Prefix in their MUA.
>
> The thing is that it worked before. Even when the user misconfigured
> their client in such a way, the quota-plugin shouldn't just throw some
> dice to get to a arbitrarily high quota the user has used instead of the
> right amount.
>
> > I used to have this problem with some users until I implemented repeated
> and consistent application of a clue bat.
>
> Some users is in my case (as far as I guess) like 0.5%
>
> > I don’t know of a server-side setting to prevent users from screwing up
> this setting, but maybe?
>
> Wouldn't that break existing accounts?
>
>
Does it sound like this?
https://www.dovecot.nl/pipermail/dovecot/2019-March/115214.html

If so, in a direct email, Timo suggested using the 'count' quota (instead
of the Maildir++ quota). I've not yet been able to test that to verify, due
to the large number of mailboxes and the reliance of some of our tools on
the maildirsize file.
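
For reference, the suggestion amounts to something like this (a sketch
only, untested here for the reasons above; quota_vsizes is my understanding
of what the count backend needs):

mail_plugins = $mail_plugins quota

plugin {
  quota = count:User quota
  quota_vsizes = yes
}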


Re: [BUG?] Double quota calculation when special folder is present

2019-08-06 Thread Mark Moseley via dovecot
On Tue, Apr 9, 2019 at 9:52 PM Aki Tuomi  wrote:

>
> On 10 April 2019 05:00 Mark Moseley via dovecot 
> wrote:
>
>
> On Wed, Apr 3, 2019 at 9:37 PM Mark Moseley < moseleym...@gmail.com>
> wrote:
>
>
> On Wed, Mar 20, 2019 at 2:13 PM Mark Moseley < moseleym...@gmail.com>
> wrote:
>
> Just hoping to get some dev eyes on this. I'm incredibly reluctant to
> throw the word 'bug' around
> (since 99 times out of 100, it's not -- it's almost always the config),
> but I can't think of any way
> that this could be a config issue, esp when the pre-2.2.34 version works
> as expected.
>
> I noticed during troubleshooting that dovecot errors out if I try to
> create a subfolder called
> 'INBOX' but it'll happily create a subfolder called INBOX.SomethingElse
> (i.e. a folder called
> INBOX.INBOX.SomethingElse - resulting in a directory called
> .INBOX.SomethingElse on the
> filesystem, and leading to the problem described below). Is that
> sub-subfolder creation (where
> the top level subfolder matches the namespace name) supposed to be
> allowed? It seems
> odd that 'INBOX' (as a subfolder of INBOX) would be blocked but
> INBOX.SomethingElse (as
> a subfolder of INBOX) would be allowed. I'd expect INBOX.SomethingElse
> (i.e.
> INBOX.INBOX.SomethingElse) would be blocked as well.
>
>
> On Wed, Mar 13, 2019 at 4:46 AM Bernd Wurst via dovecot <
> dovecot@dovecot.org> wrote:
>
> Hello,
>
> we're operating dovecot on a small server. Some years ago, we migrated
> from courier IMAP to dovecot. Therefore, we defined our default
> Namespace "inbox" with prefix "INBOX." to have this compatible. I found
> this in some migration docs those days. Generally, everything worked as
> expected.
>
> Our only namespace is configured like this:
>
> namespace inbox {
>  separator = .
>   prefix = INBOX.
>   inbox = yes
> }
>
> Regularly, there is no folder named INBOX or .INBOX in the file system,
> I suppose this is correct.  But I found a special corner case today when
> it comes to quota calculation.
>
> When - for whatever reason - a folder .INBOX.foo (for arbitrary values
> of foo) exists, the whole mailbox is counted twice in quota
> recalculation. Just creating .INBOX does nothing but a subfolder
> triggers the problem.
>
> This is my shell view (replaced username and file path and deleted
> unnecessary debug output)
>
> $ cat maildirsize
> 268435456S
> 14697 17
> $ maildirmake .INBOX.foo
> $ sudo doveadm -D quota recalc -u 
> [...]
> doveadm(): Debug: Namespace inbox: type=private, prefix=INBOX.,
> sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes
> location=maildir:/home/.../test
> doveadm(): Debug: maildir++: root=/home/.../test, index=,
> indexpvt=, control=, inbox=/home/.../test, alt=
> doveadm(): Debug: Namespace : type=private, prefix=, sep=,
> inbox=no, hidden=yes, list=no, subscriptions=no location=fail::LAYOUT=none
> doveadm(): Debug: none: root=, index=, indexpvt=, control=,
> inbox=, alt=
> doveadm(): Debug: quota: quota_over_flag check: quota_over_script
> unset - skipping
> doveadm(): Debug: Quota root User quota: Recalculated relative
> rules with bytes=268435456 count=0. Now grace=26843545
> doveadm(): Debug: Namespace INBOX.: Using permissions from
> /home/.../test: mode=0700 gid=default
>
> $ cat maildirsize
> 268435456S
> 29394 34
>
>
> So the used quota has exactly been doubled by just creating an empty
> subfolder.
>
> Do you have any pointers for fixing my configuration or is this a bug in
> dovecot?
>
>
> I coincidentally resurrected a months-old thread with this same issue a
> few days ago. I'm seeing the exact same after upgrading from 2.2.32 to
> 2.2.36.
>
> The original poster (who also narrowed it down to something in 2.2.34)
> mentioned a workaround that does indeed work, namely setting
> mailbox_list_index=no:
>
> > doveadm -o 'mailbox_list_index=no' quota recalc -u myuser
>
> I've been staring at diffs of 2.2.33 and 2.2.34 without anything jumping
> out at me (not a C guy, sadly). Maybe src/lib-storage/index/index-storage.c
> or src/lib-storage/list/mailbox-list-fs-iter.c or
> src/lib-storage/list/mailbox-list-index-iter.c
> or src/lib-storage/list/mailbox-list-index.c?
>
> The latter few have some added strcmp's against "INBOX". Then again,
> there's a lot of new code in the diffs under src/lib-storage that
> references INBOX specifically.
>
>
> Can the Dovecot team confirm whether this is indeed a bug or not?  I've
> not yet been able to test 2.3.x to see if the problem exists there as well.
>
>
> I've bisected this down to this commit:
>
> git diff
> 76

Re: Using userdb/passdb data in director_username_hash

2019-04-12 Thread Mark Moseley via dovecot
On Fri, Apr 12, 2019 at 11:14 AM Aki Tuomi 
wrote:

>
> > On 12 April 2019 21:09 Mark Moseley via dovecot 
> wrote:
> >
> >
> > TL;DR:
> >
> > Can director_username_hash use %{userdb:...} or %{passdb:...} ?
> >
> > 
> >
> > This is on Ubuntu Precise, running dovecot 2.2.36. It's a fully
> production, director-ized env, so assume everything is working correctly.
> Happy to post doveconf if it's relevant but wanted to ask a general
> question first.
> >
> > I was curious if there's a way to get userdb/passdb data
> into director_username_hash. Currently, we've got default hashing (on %u).
> I'm returning a SQL field called 'real_username' (the owner of the mailbox,
> so almost never the same as %u). I'd like (for mdbox reasons) to hash on
> that rather than %u.
> >
> > My test SQL is returning (this is just a chunk -- it's duplicated for
> testing):
> > UserName AS userdb_real_username, UserName AS real_username
> >
> > I can see in my director boxes that it's at least picking up the latter:
> >
> > passdb out: PASS 1 user=tesbox@mailbox.com proxy=y real_username=testuser
> >
> > Is it possible to inject 'real_username' into director_username_hash?
> That is, I'd rather hash on 'testuser' instead of 'testbed'.
> >
> > I've been trying different permutations on my director boxes with no
> luck.
> >
> > director_username_hash = %{userdb:real_username}
> > director_username_hash = %{passdb:real_username}
> > director_username_hash = %{userdb:userdb_real_username}
> > director_username_hash = %{passdb:userdb_real_username}
> >
> > With any of those settings, every mailbox gets hashed to the same
> backend, so I'm guessing it's not getting anything useful (i.e. probably
> resolving to the same empty string and hashing on that -- or perhaps is
> just hashing on the literal string, e.g. "%{userdb:real_username}" ).
> >
> > And I'm not even sure if director_username_hash has access to any
> passdb/userdb data. Is there a debug setting that would show what string
> director is using to do the hashing?
> >
> > Current debug settings are:
> >
> > auth_debug = yes
> > auth_debug_passwords = yes
> > mail_debug = yes
> >
> > but not a peep as to the string that director is hashing on.
>
> Hi!
>
> The only variables usable on director_username_hashing are (u)ser,
> user(n)ame and (d)omain.
>
>
Ok, thanks for the info!


Using userdb/passdb data in director_username_hash

2019-04-12 Thread Mark Moseley via dovecot
TL;DR:

Can director_username_hash use %{userdb:...} or %{passdb:...} ?



This is on Ubuntu Precise, running dovecot 2.2.36. It's a fully production,
director-ized env, so assume everything is working correctly. Happy to post
doveconf if it's relevant but wanted to ask a general question first.

I was curious if there's a way to get userdb/passdb data
into director_username_hash. Currently, we've got default hashing (on %u).
I'm returning a SQL field called 'real_username' (the owner of the mailbox,
so almost never the same as %u). I'd like (for mdbox reasons) to hash on
that rather than %u.

My test SQL is returning (this is just a chunk -- it's duplicated for
testing):
UserName AS userdb_real_username, UserName AS real_username

I can see in my director boxes that it's at least picking up the latter:

passdb out: PASS 1 user=tes...@mailbox.com proxy=y real_username=testuser

Is it possible to inject 'real_username' into director_username_hash? That
is, I'd rather hash on 'testuser' instead of 'testbed'.

I've been trying different permutations on my director boxes with no luck.

director_username_hash = %{userdb:real_username}
director_username_hash = %{passdb:real_username}
director_username_hash = %{userdb:userdb_real_username}
director_username_hash = %{passdb:userdb_real_username}

With any of those settings, every mailbox gets hashed to the same backend,
so I'm guessing it's not getting anything useful (i.e. probably resolving
to the same empty string and hashing on that -- or perhaps is just hashing
on the literal string, e.g. "%{userdb:real_username}" ).

And I'm not even sure if director_username_hash has access to any
passdb/userdb data. Is there a debug setting that would show what string
director is using to do the hashing?

Current debug settings are:

auth_debug = yes
auth_debug_passwords = yes
mail_debug = yes

but not a peep as to the string that director is hashing on.


Re: [BUG?] Double quota calculation when special folder is present

2019-04-09 Thread Mark Moseley via dovecot
On Wed, Apr 3, 2019 at 9:37 PM Mark Moseley  wrote:

>
> On Wed, Mar 20, 2019 at 2:13 PM Mark Moseley 
> wrote:
>
>> Just hoping to get some dev eyes on this. I'm incredibly reluctant to
>> throw the word 'bug' around
>> (since 99 times out of 100, it's not -- it's almost always the config),
>> but I can't think of any way
>> that this could be a config issue, esp when the pre-2.2.34 version works
>> as expected.
>>
>> I noticed during troubleshooting that dovecot errors out if I try to
>> create a subfolder called
>> 'INBOX' but it'll happily create a subfolder called INBOX.SomethingElse
>> (i.e. a folder called
>> INBOX.INBOX.SomethingElse - resulting in a directory called
>> .INBOX.SomethingElse on the
>> filesystem, and leading to the problem described below). Is that
>> sub-subfolder creation (where
>> the top level subfolder matches the namespace name) supposed to be
>> allowed? It seems
>> odd that 'INBOX' (as a subfolder of INBOX) would be blocked but
>> INBOX.SomethingElse (as
>> a subfolder of INBOX) would be allowed. I'd expect INBOX.SomethingElse
>> (i.e.
>> INBOX.INBOX.SomethingElse) would be blocked as well.
>>
>>
>> On Wed, Mar 13, 2019 at 4:46 AM Bernd Wurst via dovecot <
>> dovecot@dovecot.org> wrote:
>>
>>> Hello,
>>>
>>> we're operating dovecot on a small server. Some years ago, we migrated
>>> from courier IMAP to dovecot. Therefore, we defined our default
>>> Namespace "inbox" with prefix "INBOX." to have this compatible. I found
>>> this in some migration docs those days. Generally, everything worked as
>>> expected.
>>>
>>> Our only namespace is configured like this:
>>>
>>> namespace inbox {
>>>  separator = .
>>>   prefix = INBOX.
>>>   inbox = yes
>>> }
>>>
>>> Regularly, there is no folder named INBOX or .INBOX in the file system,
>>> I suppose this is correct.  But I found a special corner case today when
>>> it comes to quota calculation.
>>>
>>> When - for whatever reason - a folder .INBOX.foo (for arbitrary values
>>> of foo) exists, the whole mailbox is counted twice in quota
>>> recalculation. Just creating .INBOX does nothing but a subfolder
>>> triggers the problem.
>>>
>>> This is my shell view (replaced username and file path and deleted
>>> unnecessary debug output)
>>>
>>> $ cat maildirsize
>>> 268435456S
>>> 14697 17
>>> $ maildirmake .INBOX.foo
>>> $ sudo doveadm -D quota recalc -u 
>>> [...]
>>> doveadm(): Debug: Namespace inbox: type=private, prefix=INBOX.,
>>> sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes
>>> location=maildir:/home/.../test
>>> doveadm(): Debug: maildir++: root=/home/.../test, index=,
>>> indexpvt=, control=, inbox=/home/.../test, alt=
>>> doveadm(): Debug: Namespace : type=private, prefix=, sep=,
>>> inbox=no, hidden=yes, list=no, subscriptions=no
>>> location=fail::LAYOUT=none
>>> doveadm(): Debug: none: root=, index=, indexpvt=, control=,
>>> inbox=, alt=
>>> doveadm(): Debug: quota: quota_over_flag check: quota_over_script
>>> unset - skipping
>>> doveadm(): Debug: Quota root User quota: Recalculated relative
>>> rules with bytes=268435456 count=0. Now grace=26843545
>>> doveadm(): Debug: Namespace INBOX.: Using permissions from
>>> /home/.../test: mode=0700 gid=default
>>>
>>> $ cat maildirsize
>>> 268435456S
>>> 29394 34
>>>
>>>
>>> So the used quota has exactly been doubled by just creating an empty
>>> subfolder.
>>>
>>> Do you have any pointers for fixing my configuration or is this a bug in
>>> dovecot?
>>>
>>>
>> I coincidentally resurrected a months-old thread with this same issue a
>> few days ago. I'm seeing the exact same after upgrading from 2.2.32 to
>> 2.2.36.
>>
>> The original poster (who also narrowed it down to something in 2.2.34)
>> mentioned a workaround that does indeed work, namely setting
>> mailbox_list_index=no:
>>
>> > doveadm -o 'mailbox_list_index=no' quota recalc -u myuser
>>
>> I've been staring at diffs of 2.2.33 and 2.2.34 without anything jumping
>> out at me (not a C guy, sadly). Maybe src/lib-storage/index/index-storage.c
>> or src/lib-storage/list/mailbox-list-fs-iter.c or
>> src/lib-storage/list/mailbox-list-index-iter.c
>> or src/lib-storage/list/

Re: Where to report (potential) Dovecot bugs

2019-04-09 Thread Mark Moseley via dovecot
On Tue, Apr 9, 2019 at 2:35 PM John Fawcett via dovecot 
wrote:

> On 09/04/2019 22:03, Mark Moseley via dovecot wrote:
> > I'm curious if this is still the right place to report potential bugs
> > with Dovecot.
> >
> > Is there a Dovecot bug tracker somewhere?
>
> https://www.dovecot.org/bugreport.html
>
>
Yes, I was aware of that. I probably should've said: I'm curious if this is
still the right place to report potential bugs, or is the wiki out-of-date?


Where to report (potential) Dovecot bugs

2019-04-09 Thread Mark Moseley via dovecot
I'm curious if this is still the right place to report potential bugs with
Dovecot.

Is there a Dovecot bug tracker somewhere?


Re: [BUG?] Double quota calculation when special folder is present

2019-04-03 Thread Mark Moseley via dovecot
On Wed, Mar 20, 2019 at 2:13 PM Mark Moseley  wrote:

> Just hoping to get some dev eyes on this. I'm incredibly reluctant to
> throw the word 'bug' around
> (since 99 times out of 100, it's not -- it's almost always the config),
> but I can't think of any way
> that this could be a config issue, esp when the pre-2.2.34 version works
> as expected.
>
> I noticed during troubleshooting that dovecot errors out if I try to
> create a subfolder called
> 'INBOX' but it'll happily create a subfolder called INBOX.SomethingElse
> (i.e. a folder called
> INBOX.INBOX.SomethingElse - resulting in a directory called
> .INBOX.SomethingElse on the
> filesystem, and leading to the problem described below). Is that
> sub-subfolder creation (where
> the top level subfolder matches the namespace name) supposed to be
> allowed? It seems
> odd that 'INBOX' (as a subfolder of INBOX) would be blocked but
> INBOX.SomethingElse (as
> a subfolder of INBOX) would be allowed. I'd expect INBOX.SomethingElse
> (i.e.
> INBOX.INBOX.SomethingElse) would be blocked as well.
>
>
> On Wed, Mar 13, 2019 at 4:46 AM Bernd Wurst via dovecot <
> dovecot@dovecot.org> wrote:
>
>> Hello,
>>
>> we're operating dovecot on a small server. Some years ago, we migrated
>> from courier IMAP to dovecot. Therefore, we defined our default
>> Namespace "inbox" with prefix "INBOX." to have this compatible. I found
>> this in some migration docs those days. Generally, everything worked as
>> expected.
>>
>> Our only namespace is configured like this:
>>
>> namespace inbox {
>>  separator = .
>>   prefix = INBOX.
>>   inbox = yes
>> }
>>
>> Regularly, there is no folder named INBOX or .INBOX in the file system,
>> I suppose this is correct.  But I found a special corner case today when
>> it comes to quota calculation.
>>
>> When - for whatever reason - a folder .INBOX.foo (for arbitrary values
>> of foo) exists, the whole mailbox is counted twice in quota
>> recalculation. Just creating .INBOX does nothing but a subfolder
>> triggers the problem.
>>
>> This is my shell view (replaced username and file path and deleted
>> unnecessary debug output)
>>
>> $ cat maildirsize
>> 268435456S
>> 14697 17
>> $ maildirmake .INBOX.foo
>> $ sudo doveadm -D quota recalc -u 
>> [...]
>> doveadm(): Debug: Namespace inbox: type=private, prefix=INBOX.,
>> sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes
>> location=maildir:/home/.../test
>> doveadm(): Debug: maildir++: root=/home/.../test, index=,
>> indexpvt=, control=, inbox=/home/.../test, alt=
>> doveadm(): Debug: Namespace : type=private, prefix=, sep=,
>> inbox=no, hidden=yes, list=no, subscriptions=no location=fail::LAYOUT=none
>> doveadm(): Debug: none: root=, index=, indexpvt=, control=,
>> inbox=, alt=
>> doveadm(): Debug: quota: quota_over_flag check: quota_over_script
>> unset - skipping
>> doveadm(): Debug: Quota root User quota: Recalculated relative
>> rules with bytes=268435456 count=0. Now grace=26843545
>> doveadm(): Debug: Namespace INBOX.: Using permissions from
>> /home/.../test: mode=0700 gid=default
>>
>> $ cat maildirsize
>> 268435456S
>> 29394 34
>>
>>
>> So the used quota has exactly been doubled by just creating an empty
>> subfolder.
>>
>> Do you have any pointers for fixing my configuration or is this a bug in
>> dovecot?
>>
>>
> I coincidentally resurrected a months-old thread with this same issue a
> few days ago. I'm seeing the exact same after upgrading from 2.2.32 to
> 2.2.36.
>
> The original poster (who also narrowed it down to something in 2.2.34)
> mentioned a workaround that does indeed work, namely setting
> mailbox_list_index=no:
>
> > doveadm -o 'mailbox_list_index=no' quota recalc -u myuser
>
> I've been staring at diffs of 2.2.33 and 2.2.34 without anything jumping
> out at me (not a C guy, sadly). Maybe src/lib-storage/index/index-storage.c
> or src/lib-storage/list/mailbox-list-fs-iter.c or
> src/lib-storage/list/mailbox-list-index-iter.c
> or src/lib-storage/list/mailbox-list-index.c?
>
> The latter few have some added strcmp's against "INBOX". Then again,
> there's a lot of new code in the diffs under src/lib-storage that
> references INBOX specifically.
>

Can the Dovecot team confirm whether this is indeed a bug or not?  I've not
yet been able to test 2.3.x to see if the problem exists there as well.


[BUG?] Double quota calculation when special folder is present

2019-03-20 Thread Mark Moseley via dovecot
Just hoping to get some dev eyes on this. I'm incredibly reluctant to throw
the word 'bug' around
(since 99 times out of 100, it's not -- it's almost always the config), but
I can't think of any way
that this could be a config issue, esp when the pre-2.2.34 version works as
expected.

I noticed during troubleshooting that dovecot errors out if I try to create
a subfolder called
'INBOX' but it'll happily create a subfolder called INBOX.SomethingElse
(i.e. a folder called
INBOX.INBOX.SomethingElse - resulting in a directory called
.INBOX.SomethingElse on the
filesystem, and leading to the problem described below). Is that
sub-subfolder creation (where
the top level subfolder matches the namespace name) supposed to be allowed?
It seems
odd that 'INBOX' (as a subfolder of INBOX) would be blocked but
INBOX.SomethingElse (as
a subfolder of INBOX) would be allowed. I'd expect INBOX.SomethingElse
(i.e.
INBOX.INBOX.SomethingElse) would be blocked as well.


On Wed, Mar 13, 2019 at 4:46 AM Bernd Wurst via dovecot 
wrote:

> Hello,
>
> we're operating dovecot on a small server. Some years ago, we migrated
> from courier IMAP to dovecot. Therefore, we defined our default
> Namespace "inbox" with prefix "INBOX." to have this compatible. I found
> this in some migration docs those days. Generally, everything worked as
> expected.
>
> Our only namespace is configured like this:
>
> namespace inbox {
>  separator = .
>   prefix = INBOX.
>   inbox = yes
> }
>
> Regularly, there is no folder named INBOX or .INBOX in the file system,
> I suppose this is correct.  But I found a special corner case today when
> it comes to quota calculation.
>
> When - for whatever reason - a folder .INBOX.foo (for arbitrary values
> of foo) exists, the whole mailbox is counted twice in quota
> recalculation. Just creating .INBOX does nothing but a subfolder
> triggers the problem.
>
> This is my shell view (replaced username and file path and deleted
> unnecessary debug output)
>
> $ cat maildirsize
> 268435456S
> 14697 17
> $ maildirmake .INBOX.foo
> $ sudo doveadm -D quota recalc -u 
> [...]
> doveadm(): Debug: Namespace inbox: type=private, prefix=INBOX.,
> sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes
> location=maildir:/home/.../test
> doveadm(): Debug: maildir++: root=/home/.../test, index=,
> indexpvt=, control=, inbox=/home/.../test, alt=
> doveadm(): Debug: Namespace : type=private, prefix=, sep=,
> inbox=no, hidden=yes, list=no, subscriptions=no location=fail::LAYOUT=none
> doveadm(): Debug: none: root=, index=, indexpvt=, control=,
> inbox=, alt=
> doveadm(): Debug: quota: quota_over_flag check: quota_over_script
> unset - skipping
> doveadm(): Debug: Quota root User quota: Recalculated relative
> rules with bytes=268435456 count=0. Now grace=26843545
> doveadm(): Debug: Namespace INBOX.: Using permissions from
> /home/.../test: mode=0700 gid=default
>
> $ cat maildirsize
> 268435456S
> 29394 34
>
>
> So the used quota has exactly been doubled by just creating an empty
> subfolder.
>
> Do you have any pointers for fixing my configuration or is this a bug in
> dovecot?
>
>
I coincidentally resurrected a months-old thread with this same issue a few
days ago. I'm seeing the exact same after upgrading from 2.2.32 to 2.2.36.

The original poster (who also narrowed it down to something in 2.2.34)
mentioned a workaround that does indeed work, namely setting
mailbox_list_index=no:

> doveadm -o 'mailbox_list_index=no' quota recalc -u myuser

I've been staring at diffs of 2.2.33 and 2.2.34 without anything jumping
out at me (not a C guy, sadly). Maybe src/lib-storage/index/index-storage.c
or src/lib-storage/list/mailbox-list-fs-iter.c or
src/lib-storage/list/mailbox-list-index-iter.c
or src/lib-storage/list/mailbox-list-index.c?

The latter few have some added strcmp's against "INBOX". Then again,
there's a lot of new code in the diffs under src/lib-storage that
references INBOX specifically.


Re: Double quota calculation when special folder is present

2019-03-15 Thread Mark Moseley via dovecot
On Wed, Mar 13, 2019 at 4:46 AM Bernd Wurst via dovecot 
wrote:

> Hello,
>
> we're operating dovecot on a small server. Some years ago, we migrated
> from courier IMAP to dovecot. Therefore, we defined our default
> Namespace "inbox" with prefix "INBOX." to have this compatible. I found
> this in some migration docs those days. Generally, everything worked as
> expected.
>
> Our only namespace is configured like this:
>
> namespace inbox {
>  separator = .
>   prefix = INBOX.
>   inbox = yes
> }
>
> Regularly, there is no folder named INBOX or .INBOX in the file system,
> I suppose this is correct.  But I found a special corner case today when
> it comes to quota calculation.
>
> When - for whatever reason - a folder .INBOX.foo (for arbitrary values
> of foo) exists, the whole mailbox is counted twice in quota
> recalculation. Just creating .INBOX does nothing but a subfolder
> triggers the problem.
>
> This is my shell view (replaced username and file path and deleted
> unnecessary debug output)
>
> $ cat maildirsize
> 268435456S
> 14697 17
> $ maildirmake .INBOX.foo
> $ sudo doveadm -D quota recalc -u 
> [...]
> doveadm(): Debug: Namespace inbox: type=private, prefix=INBOX.,
> sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes
> location=maildir:/home/.../test
> doveadm(): Debug: maildir++: root=/home/.../test, index=,
> indexpvt=, control=, inbox=/home/.../test, alt=
> doveadm(): Debug: Namespace : type=private, prefix=, sep=,
> inbox=no, hidden=yes, list=no, subscriptions=no location=fail::LAYOUT=none
> doveadm(): Debug: none: root=, index=, indexpvt=, control=,
> inbox=, alt=
> doveadm(): Debug: quota: quota_over_flag check: quota_over_script
> unset - skipping
> doveadm(): Debug: Quota root User quota: Recalculated relative
> rules with bytes=268435456 count=0. Now grace=26843545
> doveadm(): Debug: Namespace INBOX.: Using permissions from
> /home/.../test: mode=0700 gid=default
>
> $ cat maildirsize
> 268435456S
> 29394 34
>
>
> So the used quota has exactly been doubled by just creating an empty
> subfolder.
>
> Do you have any pointers for fixing my configuration or is this a bug in
> dovecot?
>
>
I coincidentally resurrected a months-old thread with this same issue a few
days ago. I'm seeing the exact same after upgrading from 2.2.32 to 2.2.36.

The original poster (who also narrowed it down to something in 2.2.34)
mentioned a workaround that does indeed work, namely setting
mailbox_list_index=no:

> doveadm -o 'mailbox_list_index=no' quota recalc -u myuser

I've been staring at diffs of 2.2.33 and 2.2.34 without anything jumping
out at me (not a C guy, sadly). Maybe src/lib-storage/index/index-storage.c
or src/lib-storage/list/mailbox-list-fs-iter.c or
src/lib-storage/list/mailbox-list-index-iter.c
or src/lib-storage/list/mailbox-list-index.c?

The latter few have some added strcmp's against "INBOX". Then again,
there's a lot of new code in the diffs under src/lib-storage that
references INBOX specifically.


Re: Inbox quota usage doubled when mailbox_list_index enabled, under some circumstances

2019-03-12 Thread Mark Moseley via dovecot
On Tue, Aug 14, 2018 at 12:31 PM Chris Dillon 
wrote:

> I’ve had the opportunity to test the same configuration with a fresh build
> of the git master branch (2.4.devel) and the issue also occurs there.  I
> see that "mailbox_list_index = yes" is now enabled by default.  It can
> still be disabled via "mailbox_list_index = no" which allows the quota to
> be calculated correctly.
>
> ==
> root@ubuntu1804:~# dovecot -n
> # 2.4.devel (44282aeeb): /usr/local/etc/dovecot/dovecot.conf
> # OS: Linux 4.15.0-30-generic x86_64 Ubuntu 18.04.1 LTS
> # Hostname: ubuntu1804
> mail_location = maildir:~/Maildir
> mail_plugins = quota
> namespace inbox {
>   inbox = yes
>   location =
>   prefix = INBOX.
>   separator = .
> }
> passdb {
>   driver = pam
> }
> plugin {
>   quota = maildir:Mailbox
> }
> userdb {
>   driver = passwd
> }
> ==
>
> (To summarize from my previous message -- other than "mailbox_list_index =
> yes", second most important part of replication is that there is at least
> one email in the real inbox and at least one sub-folder named "INBOX" in
> maildir format)
>
> root@ubuntu1804:~# ls -ld
> /home/myuser/Maildir/cur/1532529376.M543965P58007.centos7.local\,S\=12712627\,W\=12877782\:2\,S
> /home/myuser/Maildir/.INBOX.Test/
> -rw-rw-r-- 1 myuser myuser 12712627 Aug 14 18:28
> '/home/myuser/Maildir/cur/1532529376.M543965P58007.centos7.local,S=12712627,W=12877782:2,S'
> drwxrwxr-x 5 myuser myuser   87 Aug 14 18:56
> /home/myuser/Maildir/.INBOX.Test/
> =
>
> (In the following example usage is doubled, there is only one email)
>
> root@ubuntu1804:~# doveadm quota recalc -u myuser; doveadm quota get -u
> myuser
> Quota name  Type     Value  Limit  %
> Mailbox     STORAGE  24830  -      0
> Mailbox     MESSAGE  2      -      0
> ==
>
> (In the following example it works correctly with mailbox_list_index
> disabled)
>
> root@ubuntu1804:~# doveadm -o 'mailbox_list_index=no' quota recalc -u
> myuser; doveadm quota get -u myuser
> Quota name  Type     Value  Limit  %
> Mailbox     STORAGE  12415  -      0
> Mailbox     MESSAGE  1      -      0
> ==
>
> Best Regards




We recently upgraded from 2.2.32 to 2.2.36.1 and ran into the same issue as
above. I was about to start compiling all the intermediate versions to
pinpoint where it started and happened upon this post first. We are in the
same boat where some users have *sub*folders starting with INBOX, resulting
in directory names like ".INBOX.Event" (and our namespace root is INBOX)
and quota calculation double-counts INBOX's vsize.

Just to be clear too: At least in my case, it's *not* double counting,
e.g., INBOX.Event. But if the above conditions are met, it's
double-counting *INBOX*, because it now sees a folder called INBOX.INBOX
(which does *not* exist on the filesystem).

I hadn't gotten as far as Chris did (this just bubbled up today), but his
solution works here too, i.e. passing -o 'mailbox_list_index=no'  to
doveadm quota recalc.

Also, if I rename the directories to something else (e.g. from above,
rename ".INBOX.Event" to ".notINBOX.Event"), a quota recalc works just
fine. The presence of a directory called .INBOX.<anything> is triggering
this.

I'm able to create new subfolders called INBOX.<anything> with clients
like Apple Mail, which was a bit surprising (Apple Mail however choked when
I tried to just create a subfolder called 'INBOX'). We've got millions of
mailboxes, so educating users is a non-starter :)

Any fix for this from the dovecot devs?


Re: doveadm import with subfolder oddity

2019-02-12 Thread Mark Moseley via dovecot
On Mon, Feb 4, 2019 at 1:59 PM Mark Moseley  wrote:

> This has got to be something weird in my config. And the standard
> disclaimer of '"happy to post doveconf -n, but wanted to see if this is
> normal first" :)
>
> Background: Ubuntu Xenial, running 2.2.36. Mailbox type is mdbox and I've
> got a period separator in my inbox namespace:
>
> namespace {
>   hidden = no
>   inbox = yes
>   list = yes
>   location =
>   mailbox Spam {
> auto = no
> autoexpunge = 1 weeks
> special_use = \Junk
>   }
>   mailbox Trash {
> auto = no
> special_use = \Trash
>   }
>   prefix = INBOX.
>   separator = .
>   subscriptions = yes
>   type = private
> }
>
> If I do a import for a regular folder under INBOX, it works just fine:
>
> doveadm import -u testbox2@testing.local -U testbox1@testing.local
> mdbox:~/mdbox INBOX all mailbox Sent
>
> ... returns happily, message count gets incremented
>
> If I try to do the same with a subfolder (and a subfolder that most
> definitely exists on both source and destination side), I get an error:
>
> doveadm import -u testbox2@testing.local -U testbox1@testing.local
> mdbox:~/mdbox INBOX all mailbox Sub.Sub1
> doveadm(testbox2@testing.local): Error: remote(10.1.17.98:4000): Mailbox
> Sub.Sub1: Mailbox sync failed: Mailbox doesn't exist: Sub.Sub1
>
> If I use / instead of . in my query, it works:
>
> doveadm import -u testbox2@testing.local -U testbox1@testing.local
> mdbox:~/mdbox INBOX all mailbox Sub/Sub1
>
> ... returns happily and message count gets incremented.
>
> Since we're using '.' as our separator, that was a bit unexpected :)
>
> Ironically, if I'm doing a IMAPc 'import', it works just fine with a query
> of 'all mailbox Sub.Sub1'. It's only when importing from a local src and
> local dest (i.e. source_location == mdbox:~/mdbox) that it fails. With
> source_location set to 'imapc:', it works. I imagine that's due to using
> straight IMAP on the source side.
>
> Likely a misconfig on my part? Expected behavior?
>
> I can see in the strace that the error is triggered when doveadm is
> looking at the source mailbox. It looks
> for mdbox/mailboxes/Sub.Sub1/dbox-Mails first, then falls back
> to mdbox/mailboxes/Sub/Sub1/dbox-Mails (which it finds). Then a little bit
> later in the strace, it again looks for mdbox/mailboxes/Sub.Sub1/dbox-Mails
> (which it doesn't find) but doesn't try mdbox/mailboxes/Sub/Sub1/dbox-Mails
> this time, and then spits out 'Mailbox Sub.Sub1: Mailbox sync failed:
> Mailbox doesn't exist: Sub.Sub1'. With a query of 'all mailbox Sub/Sub1',
> the stat() is for mdbox/mailboxes/Sub/Sub1/dbox-Mails which it finds and
> uses happily.
>
> Having to substitute the '.'s for '/'s in the 'mailbox' part of the query
> isn't an awful workaround, but it very much feels like I'm doing something
> wrong. This is a production setup, so everything else is otherwise working
> fine. But I've only just begun working with 'doveadm import', so I might be
> turning up some issues with my config.
>
> Thanks! Sorry I'm so verbose :)
>

Has anyone else seen similar behavior? It's hardly a tough kludge to regex
's/\./\//g' (even if it makes for an ugly regex), but it seems like
something's not quite right.
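
For anyone hitting the same thing, the kludge mentioned above is just this
(mirroring the commands from the original post; translate the separator
before handing the folder name to doveadm):

SRC_FOLDER=$(printf '%s\n' "Sub.Sub1" | sed 's/\./\//g')
doveadm import -u testbox2@testing.local -U testbox1@testing.local \
  mdbox:~/mdbox INBOX all mailbox "$SRC_FOLDER"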


Re: Doveadm service as non-root user

2019-02-12 Thread Mark Moseley via dovecot
On Mon, Feb 4, 2019 at 12:04 PM Mark Moseley  wrote:

>
> On Fri, Feb 1, 2019 at 11:37 PM Aki Tuomi 
> wrote:
>
>>
>> On 01 February 2019 at 23:16 Mark Moseley < moseleym...@gmail.com>
>> wrote:
>>
>>
>> Running: Ubuntu xenial, dovecot 2.2.36
>>
>> I've been working on moving our user base from maildir to mdbox and
>> trying
>> to come up with solutions for things like moving emails around. In the
>> past, with maildir, our support guys could just mv the files around and
>> done. For mdbox, I've been working on getting things set up to use
>> doveadm.
>>
>> One weirdness I've seen is that in imports (i.e. doveadm import), mail
>> gets
>> copied correctly but the resulting files are left with root ownership (I
>> don't have 'service doveadm' 'user' set, so I guess it defaults to root).
>> It's typically new m.* files as well as the dovecot.list.index
>> and dovecot.list.index.log files.
>>
>> Looking at strace, no chown is done on them, nor was there setuid. The
>> import had no trouble finding the correct user in the db, so I know that
>> it
>> knows the correct UID (I can see it just fine in debug logs too). And it
>> will happily import to existing m.* files with no permissions issues (but
>> considering it's running as root, I wouldn't expect it to).
>>
>> I've seen this using 'import' via IMAPc as well as with both src and dest
>> on the same server. I can see this behavior in both scenarios. We have a
>> single shared UID for mail, so especially in that "src/dest on same
>> server"
>> case, it's not a matter of UID-mismatch.
>>
>> It's a director setup, so all doveadm commands are coming through the
>> director. If I run the import directly on the backend (which obviously
>> would be a bad idea in real life), the ownership of new m.* files seems
>> to
>> be correct (I can see it setuid'ing to the correct UID from userdb in
>> strace). If I run the import on the director, I can get a new root-owned
>> file every time it rolls over to the next m.* file.
>>
>> Two questions:
>>
>> * Is that a bug? Is this expected behavior? Seems like the expected thing
>> would be to use the UID from userdb and either do a setuid (just like
>> running 'doveadm import' locally did) or chown'ing any new files to the
>> correct UID. I always always assume misconfiguration (vs bug, since it's
>> almost never a bug) but I'm baffled on this one.
>>
>> * I see that it's possible to set a user for service doveadm and the wiki
>> even suggests that it's a good idea in a single UID setup. If there are
>> no
>> mailboxes with any other UIDs, *will setting 'service doveadm' to the
>> same
>> UID possibly break anything*? I can't think of why it would, but I want
>> to
>> be duly diligent. Plus I'm a little leery about closing the door to ever
>> having additional UIDs for mailboxes.
>>
>> Happy to provide 'doveconf -n' but wanted to check first, before spending
>> 15 minutes gently obfuscating it :)
>>
>>
>> Can you try
>>
>> doveadm import -U victim -u victim ... ?
>> ---
>> Aki Tuomi
>>
>
>
> Is that to test a generic 'import from sourceUser to dest user' (i.e.
> victim isn't literally the same in both -u and -U) or are you looking for a
> test where 'sourceUser' is the same email account as the destination?
>
> I just want to make sure I'm understanding right. The original tests (that
> result in the root-owned files) were all -U userA -u userB (i.e. different
> email accounts for src and dest), if you're asking about the former.
>
> If you're asking about the latter, I ran that and got the same result, a
> root-owned dovecot.list.index.log and dovecot.list.index and freshly
> created m.* files. The message count in the destination mailbox increases
> by the right number (no surprise since it's running as root), so the import
> itself is working.
>
> I should add that in both cases (different src/dest email account and same
> src/dest), the import works ok -- or at least increments the count in the
> index. It just leaves the email account in a broken state. Re-chown'ing it
> to the current permissions makes it happy again and the newly imported
> messages show up.
>


Any chance Aki's hit-the-nail-on-the-head answer got lost in the ether due
to the DMARC snafu? :)

I'm going forward for now with running doveadm as the unix user that owns
all the mailboxes, so no urgency, but it's still a bit perplexing (and if
it's a bug, good to stomp out).


doveadm import with subfolder oddity

2019-02-04 Thread Mark Moseley
This has got to be something weird in my config. And the standard
disclaimer of '"happy to post doveconf -n, but wanted to see if this is
normal first" :)

Background: Ubuntu Xenial, running 2.2.36. Mailbox type is mdbox and I've
got a period separator in my inbox namespace:

namespace {
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox Spam {
auto = no
autoexpunge = 1 weeks
special_use = \Junk
  }
  mailbox Trash {
auto = no
special_use = \Trash
  }
  prefix = INBOX.
  separator = .
  subscriptions = yes
  type = private
}

If I do a import for a regular folder under INBOX, it works just fine:

doveadm import -u testbox2@testing.local -U testbox1@testing.local
mdbox:~/mdbox INBOX all mailbox Sent

... returns happily, message count gets incremented

If I try to do the same with a subfolder (and a subfolder that most
definitely exists on both source and destination side), I get an error:

doveadm import -u testbox2@testing.local -U testbox1@testing.local
mdbox:~/mdbox INBOX all mailbox Sub.Sub1
doveadm(testbox2@testing.local): Error: remote(10.1.17.98:4000): Mailbox
Sub.Sub1: Mailbox sync failed: Mailbox doesn't exist: Sub.Sub1

If I use / instead of . in my query, it works:

doveadm import -u testbox2@testing.local -U testbox1@testing.local
mdbox:~/mdbox INBOX all mailbox Sub/Sub1

... returns happily and message count gets incremented.

Since we're using '.' as our separator, that was a bit unexpected :)

Ironically, if I'm doing an IMAPc 'import', it works just fine with a query
of 'all mailbox Sub.Sub1'. It's only when importing from a local src and
local dest (i.e. source_location == mdbox:~/mdbox) that it fails. With
source_location set to 'imapc:', it works. I imagine that's due to using
straight IMAP on the source side.

Likely a misconfig on my part? Expected behavior?

I can see in the strace that the error is triggered when doveadm is looking
at the source mailbox. It looks for mdbox/mailboxes/Sub.Sub1/dbox-Mails
first, then falls back to mdbox/mailboxes/Sub/Sub1/dbox-Mails (which it
finds). Then a little bit later in the strace, it again looks
for mdbox/mailboxes/Sub.Sub1/dbox-Mails (which it doesn't find) but doesn't
try mdbox/mailboxes/Sub/Sub1/dbox-Mails this time, and then spits out
'Mailbox Sub.Sub1: Mailbox sync failed: Mailbox doesn't exist: Sub.Sub1'.
With a query of 'all mailbox Sub/Sub1', the stat() is
for mdbox/mailboxes/Sub/Sub1/dbox-Mails which it finds and uses happily.

Having to substitute the '.'s for '/'s in the 'mailbox' part of the query
isn't an awful workaround, but it very much feels like I'm doing something
wrong. This is a production setup, so everything else is otherwise working
fine. But I've only just begun working with 'doveadm import', so I might be
turning up some issues with my config.
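
For what it's worth, scripting around the workaround is trivial -- a sketch,
reusing the test accounts from above:

# swap the namespace separator for '/' before handing the mailbox to doveadm
src_mbox="Sub.Sub1"
doveadm import -u testbox2@testing.local -U testbox1@testing.local \
    mdbox:~/mdbox INBOX all mailbox "$(echo "$src_mbox" | tr . /)"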

Thanks! Sorry I'm so verbose :)


Re: Doveadm service as non-root user

2019-02-04 Thread Mark Moseley
On Fri, Feb 1, 2019 at 11:37 PM Aki Tuomi 
wrote:

>
> On 01 February 2019 at 23:16 Mark Moseley < moseleym...@gmail.com> wrote:
>
>
> Running: Ubuntu xenial, dovecot 2.2.36
>
> I've been working on moving our user base from maildir to mdbox and trying
> to come up with solutions for things like moving emails around. In the
> past, with maildir, our support guys could just mv the files around and
> done. For mdbox, I've been working on getting things set up to use
> doveadm.
>
> One weirdness I've seen is that in imports (i.e. doveadm import), mail
> gets
> copied correctly but the resulting files are left with root ownership (I
> don't have 'service doveadm' 'user' set, so I guess it defaults to root).
> It's typically new m.* files as well as the dovecot.list.index
> and dovecot.list.index.log files.
>
> Looking at strace, no chown is done on them, nor was there setuid. The
> import had no trouble finding the correct user in the db, so I know that
> it
> knows the correct UID (I can see it just fine in debug logs too). And it
> will happily import to existing m.* files with no permissions issues (but
> considering it's running as root, I wouldn't expect it to).
>
> I've seen this using 'import' via IMAPc as well as with both src and dest
> on the same server. I can see this behavior in both scenarios. We have a
> single shared UID for mail, so especially in that "src/dest on same
> server"
> case, it's not a matter of UID-mismatch.
>
> It's a director setup, so all doveadm commands are coming through the
> director. If I run the import directly on the backend (which obviously
> would be a bad idea in real life), the ownership of new m.* files seems to
> be correct (I can see it setuid'ing to the correct UID from userdb in
> strace). If I run the import on the director, I can get a new root-owned
> file every time it rolls over to the next m.* file.
>
> Two questions:
>
> * Is that a bug? Is this expected behavior? Seems like the expected thing
> would be to use the UID from userdb and either do a setuid (just like
> running 'doveadm import' locally did) or chown'ing any new files to the
> correct UID. I always always assume misconfiguration (vs bug, since it's
> almost never a bug) but I'm baffled on this one.
>
> * I see that it's possible to set a user for service doveadm and the wiki
> even suggests that it's a good idea in a single UID setup. If there are no
> mailboxes with any other UIDs, *will setting 'service doveadm' to the same
> UID possibly break anything*? I can't think of why it would, but I want to
> be duly diligent. Plus I'm a little leery about closing the door to ever
> having additional UIDs for mailboxes.
>
> Happy to provide 'doveconf -n' but wanted to check first, before spending
> 15 minutes gently obfuscating it :)
>
>
> Can you try
>
> doveadm import -U victim -u victim ... ?
> ---
> Aki Tuomi
>


Is that to test a generic 'import from sourceUser to dest user' (i.e.
victim isn't literally the same in both -u and -U) or are you looking for a
test where 'sourceUser' is the same email account as the destination?

I just want to make sure I'm understanding right. The original tests (that
result in the root-owned files) were all -U userA -u userB (i.e. different
email accounts for src and dest), if you're asking about the former.

If you're asking about the latter, I ran that and got the same result, a
root-owned dovecot.list.index.log and dovecot.list.index and freshly
created m.* files. The message count in the destination mailbox increases
by the right number (no surprise since it's running as root), so the import
itself is working.

I should add that in both cases (different src/dest email account and same
src/dest), the import works ok -- or at least increments the count in the
index. It just leaves the email account in a broken state. Re-chown'ing it
to the current permissions makes it happy again and the newly imported
messages show up.


Doveadm service as non-root user

2019-02-01 Thread Mark Moseley
Running: Ubuntu xenial, dovecot 2.2.36

I've been working on moving our user base from maildir to mdbox and trying
to come up with solutions for things like moving emails around. In the
past, with maildir, our support guys could just mv the files around and
done. For mdbox, I've been working on getting things set up to use doveadm.

One weirdness I've seen is that in imports (i.e. doveadm import), mail gets
copied correctly but the resulting files are left with root ownership (I
don't have 'service doveadm' 'user' set, so I guess it defaults to root).
It's typically new m.* files as well as the dovecot.list.index
and dovecot.list.index.log files.

Looking at strace, no chown is done on them, nor was there setuid. The
import had no trouble finding the correct user in the db, so I know that it
knows the correct UID (I can see it just fine in debug logs too). And it
will happily import to existing m.* files with no permissions issues (but
considering it's running as root, I wouldn't expect it to).

I've seen this using 'import' via IMAPc as well as with both src and dest
on the same server. I can see this behavior in both scenarios. We have a
single shared UID for mail, so especially in that "src/dest on same server"
case, it's not a matter of UID-mismatch.

It's a director setup, so all doveadm commands are coming through the
director. If I run the import directly on the backend (which obviously
would be a bad idea in real life), the ownership of new m.* files seems to
be correct (I can see it setuid'ing to the correct UID from userdb in
strace). If I run the import on the director, I can get a new root-owned
file every time it rolls over to the next m.* file.

Two questions:

* Is that a bug? Is this expected behavior? Seems like the expected thing
would be to use the UID from userdb and either do a setuid (just like
running 'doveadm import' locally did) or chown'ing any new files to the
correct UID. I always always assume misconfiguration (vs bug, since it's
almost never a bug) but I'm baffled on this one.

* I see that it's possible to set a user for service doveadm and the wiki
even suggests that it's a good idea in a single UID setup. If there are no
mailboxes with any other UIDs, *will setting 'service doveadm' to the same
UID possibly break anything*? I can't think of why it would, but I want to
be duly diligent. Plus I'm a little leery about closing the door to ever
having additional UIDs for mailboxes.

Happy to provide 'doveconf -n' but wanted to check first, before spending
15 minutes gently obfuscating it :)


Re: "unknown user - trying the next userdb" Info in log

2019-01-29 Thread Mark Moseley
On Tue, Jan 29, 2019 at 9:58 PM James Brown via dovecot 
wrote:

> On 30 Jan 2019, at 4:35 pm, Aki Tuomi  wrote:
>
>
>
> On 30 January 2019 at 07:12 James Brown < jlbr...@bordo.com.au> wrote:
>
>
> >> My settings:
> ...
> >> userdb {
> >> driver = passwd
> >> }
> >> userdb {
> >> driver = prefetch
> >> }
> >> userdb {
> >> args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
> >> driver = sql
> >> }
>
> Well... there is that userdb passwd which seems a bit extraneous.
> ---
> Aki Tuomi
>
>
> I'd remove the
>
> userdb {
> driver = passwd
> }
>
> section
> ---
> Aki Tuomi
>
>
> Thanks Aki - the trick was finding where that setting was! Found it in
> auth-system.conf.ext.
>
> Commented it out and all works perfectly now.
>
> Thanks again Aki,
>
> James.
>


I'll throw in my 2 cents that it'd be great for a passdb/userdb block to
have a setting to suppress that message. I've actually changed my mind and
*not* used extra userdbs in the past (despite there being good reasons to
use them) entirely due to that log entry. We've got millions of mailboxes
and the noise in the logs would be unreal. And it would cause no end of
confusion for ops people looking at logs. I weighed the likelihood that I'd
end up being asked a million times why a login failed, when in reality it
hadn't, and decided it wasn't worth the headache.

Or alternatively only log that error when *all* of the passdbs/userdbs have
failed. Anything would be better than logging what *looks* like an error
but isn't. And, yes, I see 'info' in the log entry, but 'unknown user' is
far more eye-catching.

This is the part where someone points out that there already *is* a setting
for this and I stop talking :)


Re: limit pop login per user and per minute

2018-03-22 Thread Mark Moseley
On Thu, Mar 22, 2018 at 1:41 PM, Joseph Tam  wrote:

> On Thu, 22 Mar 2018, Markus Eckerl wrote:
>
> The problem is, that he misconfigured the servers of these customers. In
>> detail: their servers are trying to fetch email every 2 - 5 seconds. For
>> every email address.
>>
>> In the past I contacted the technician and told him about his mistake.
>> He was not very helpful and simply told me that he is doing the same
>> configuration since several years at all of his customer servers.
>> Without problems. It is up to me to fix my problem myself.
>>
>
> Seems to me you're bending over backwards to fix someone else's problem,
> and what you really need is an "attitude adjustment" tool for obnoxious
> clients who use your service like they're the only ones that matter.
>
> Apart from what others can suggest (I think dovecot allows delegation
> of usage to a separate policyd service), you can perhaps use firewall
> throttling e.g.
>
> https://making.pusher.com/per-ip-rate-limiting-with-iptables/
>
> It can't do it per user, but perhaps it is better to set a global limit
> and let your downstream client better manage and conserve a limited
> resource.
>
>
Might be a good use of the new authpolicy stuff. You could run a local
weakforced with 1 minute windows and break auth for certain IPs that do
more than one login per minute.
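
A rough sketch of the wforce side of that idea (function names are from the
wforce docs as I remember them; the DB/field names and threshold are made up,
so adapt before trusting it):

-- count logins per account in a single one-minute window
local field_map = {}
field_map["logins"] = "int"
newStringStatsDB("OneMinLogins", 60, 1, field_map)

function report(lt)
  local sdb = getStringStatsDB("OneMinLogins")
  sdb:twAdd(lt.login, "logins", 1)
end

function allow(lt)
  local sdb = getStringStatsDB("OneMinLogins")
  -- anything beyond the first login inside the window gets rejected
  if sdb:twGet(lt.login, "logins") >= 1 then
    return -1, "rate limited", "more than one login per minute", {}
  end
  return 0, "", "", {}
end

setReport(report)
setAllow(allow)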


Re: Dovecot 2.3.0 TLS

2018-01-23 Thread Mark Moseley
On Tue, Jan 23, 2018 at 10:05 AM, Aki Tuomi  wrote:

>
> > On January 23, 2018 at 7:09 PM Arkadiusz Miśkiewicz 
> wrote:
> >
> >
> > On Thursday 11 of January 2018, Aki Tuomi wrote:
> >
> > > Seems we might've made a unexpected change here when we revamped the
> ssl
> > > code.
> >
> > Revamped, interesting, can it support milions certs now on single
> machine? (so
> > are certs loaded by demand and not wasting memory)
> >
> > > Aki
> >
> >
> > --
> > Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
>
> Unfortunately not. This time round it was about putting the ssl code
> mostly in one place, so that we use same code for all SSL connections.
>
>

Just to chime in, having some way of supporting SSL certs dynamically would
be tremendously useful. Like splitting out the retrieval of certs/key to a
socket, that would typically just be a built-in regular dovecot service
("go and get the certs that are configured in dovecot configs"), but could
also be a custom unix listener that could return certs/keys. Dovecot would
send in the local IP/port and/or SNI name (if there was one) to the socket
and then use whatever comes back. A perl/python/etc script doing the unix
listener could then grab the appropriate cert/key from wherever (and
dovecot would presumably have a time-based cache for certs/keys).  This is
just wish-listing :)

Currently, I've got a million different domains on my dovecot boxes, so
allowing them all to use per-domain SSL is a bit challenging. I've been
searching for an SSL proxy that supports something like nginx/openresty's
"ssl_certificate_by_lua_file" (and can communicate the remote IP to dovecot
like haproxy does) to put in front of dovecot, to no avail. Having
something like that built directly into dovecot would be a dream (or
something that can at least farm that functionality out to a custom daemon).
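
To give a concrete idea of the kind of hook I mean, this is roughly what it
looks like today in openresty's http world (the body of an
ssl_certificate_by_lua_block, using lua-resty-core's ngx.ssl; the per-domain
cert layout is made up) -- the gap is that nothing comparable sits in front
of IMAP/POP:

local ssl = require "ngx.ssl"

local function slurp(path)
  local f = io.open(path, "r")
  if not f then return nil end
  local data = f:read("*a")
  f:close()
  return data
end

-- pick a cert/key pair based on the SNI name the client sent
local name = ssl.server_name()
if not name then
  return  -- no SNI: let the statically configured default cert be used
end

local cert_pem = slurp("/etc/ssl/domains/" .. name .. "/fullchain.pem")
local key_pem  = slurp("/etc/ssl/domains/" .. name .. "/privkey.pem")
if not (cert_pem and key_pem) then
  return  -- unknown domain: fall back to the default cert
end

local der_cert = ssl.cert_pem_to_der(cert_pem)
local der_key  = ssl.priv_key_pem_to_der(key_pem)
if der_cert and der_key then
  ssl.clear_certs()
  ssl.set_der_cert(der_cert)
  ssl.set_der_priv_key(der_key)
end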


Re: Locks directory change

2018-01-07 Thread Mark Moseley
On Thu, Oct 26, 2017 at 7:30 AM, Aki Tuomi  wrote:

>
> > On October 26, 2017 at 4:30 PM Federico Bartolucci 
> wrote:
> >
> >
> > Hello,
> >
> > it's the first time for me writing to the list, I'm trying to change the
> > location into which the Dovecot's locks are done reserving a special
> > temporary directory on an other partition, then adding to the
> > dovecont.conf the line:
> >
> > mail_location = maildir:~/Maildir:VOLATILEDIR=/tmp_lock/%2.256Nu/%u
> >
> > so that through the VOLATILEDIR directive the locks should be written in
> > this path.
> > We observe though that the locks for many users are still done in the
> > real maildir (NFS mounted filesystem) as if in some situations this
> > instruction is not effective. Anybody knows if are there other things to
> > change or to do or what could be the reason? (for instance to login in a
> > specific way or doing a particular operation).
> >
> > Regards,
> > Federico
>
> Hi, VOLATILEDIR currently only affects vsize.lock and autoexpunge.lock.
>
> Aki
>

Are there plans to expand that in 2.3? Without knowing the ramifications,
it'd be nice to have lastlogin use it, at least with director enabled.


Re: Lua Auth

2017-12-22 Thread Mark Moseley
On Fri, Dec 22, 2017 at 5:18 AM, <aki.tu...@dovecot.fi> wrote:

>
> > On December 22, 2017 at 8:20 AM Mark Moseley <moseleym...@gmail.com>
> wrote:
> >
> >
> > On Thu, Dec 21, 2017 at 9:51 PM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:
> >
> > >
> > > > On December 22, 2017 at 6:43 AM Mark Moseley <moseleym...@gmail.com>
> > > wrote:
> > > >
> > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > 2) Is there an appropriate way to return data with spaces in it (or
> > > > > presumably other non-alphanum chars. My quota name had a space in
> it,
> > > > > which
> > > > > somehow got interpreted as 'yes' , i.e.:
> > > > >
> > > > > imap: Error: Failed to initialize quota: Invalid quota root quota:
> > > Unknown
> > > > > quota backend: yes
> > > > >
> > > > > I simply changed the space to an underscore as a workaround, but
> I'm
> > > > > curious if there's a better way. I tried various quoting without
> > > success.
> > > > > Didn't try escaping yet.
> > > > >
> > > > >
> > > > > 2) Instead of string, return a key value table. you can have
> spaces in
> > > > > values.
> > > > >
> > > > >
> > > > >
> > > > Does this work for auth_passdb_lookup too, or just
> auth_userdb_lookup?
> > > I've
> > > > been returning a table with auth_userdb_lookup just fine. But when I
> try
> > > > using it with passdb (and despite being very very sure that a
> 'password'
> > > > key exists in the table I'm returning from auth_passdb_lookup() --
> I'm
> > > > logging it one line above the return), the passdb auth fails with
> this
> > > log
> > > > entry:
> > > >
> > > > Dec 21 23:29:22 auth-worker(7779): Info:
> > > > lua(te...@test.com,10.20.103.32,<dSvLQuZg+uIKFGcg>):
> > > > No password returned (and no nopassword)
> > > >
> > > > I guess it's not seeing the password key in the table I'm returning.
> If I
> > > > return a concat'd string ("password=... user=...") from
> > > > auth_passdb_lookup(), it works just fine.
> > > >
> > > > I was also curious if there's a way to pass info between
> > > auth_userdb_lookup
> > > > and auth_passdb_lookup. I was trying to use a table with
> > > > auth_passdb_lookup() so I could take advantage of prefetch and
> thought
> > > that
> > > > if auth_passdb_lookup didn't take a table, I could stash data away
> and
> > > then
> > > > un-stash it in auth_userdb_lookup
> > > >
> > > > Thanks!
> > > >
> > > >
> > >
> > > Yeah, this is a bug we have fixed =)
> > >
> > > https://github.com/dovecot/core/commit/c86575ac9776d0995355d03719c82e
> > > 7ceac802e6#diff-83374eeaee91d90e848390ba3c7b264a
> > >
> > >
> >
> > I'm on rc1, so I appear to already have that git commit (as part of rc1).
> >
> > # /usr/sbin/dovecot  --version
> > 2.3.0.rc1 (12aba5948)
> >
> > For testing this, I tried replacing my passdb lookup with this:
> >
> > function auth_passdb_lookup(req)
> > passdb_table = {}
> > passdb_table[ 'password' ] = 'test'
> > passdb_table[ 'user' ] = 'te...@test.com'
> >
> > return dovecot.auth.PASSDB_RESULT_OK, passdb_table
> > end
> >
> > and still get:
> >
> > Dec 22 01:17:17 auth-worker(9711): Info:
> > lua(te...@test.com,10.20.103.32,):
> > No password returned (and no nopassword)
> >
> > Replacing that return statement with this:
> >
> > return dovecot.auth.PASSDB_RESULT_OK, 'password=test user=te...@test.com
> '
> >
> > authenticates successfully.
>
> Fixed in https://github.com/dovecot/core/commit/
> e5fb6b3b7d4e79475b451823ea6c0a02955ba06b
>
>
>
Works like a charm now, thanks!

As a matter of 'best practices', in my current iteration of Lua auth, I
moved all my lookups to passdb (thus yesterday's emails to the list), so
that it could be used with prefetch. Belatedly realizing that LMTP doesn't
touch passdb, I rewrote the userdb lookup to call the same passdb lookup
(which only happens for non-passdb/prefetch things) and then it copies the
return table (but strips the 'userdb_' prefix). It's all working currently.
BUT, does that sound sane? Or is there some gotcha I'm heading towards
(yes, I realize the question is a bit vague -- just looking for very
general "No, don't do that").

I'm curious too if I can set vars in the passdb lookup and then access them
in userdb. Or is it random which auth-worker will handle the userdb lookup,
relative to which one handled the passdb lookup? I tried dropping things in
the req.userdb table in the passdb phase, but it was unset during the
userdb phase.


Re: Lua Auth

2017-12-21 Thread Mark Moseley
On Thu, Dec 21, 2017 at 9:51 PM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

>
> > On December 22, 2017 at 6:43 AM Mark Moseley <moseleym...@gmail.com>
> wrote:
> >
> >
> > >
> > >
> > >
> > >
> > > 2) Is there an appropriate way to return data with spaces in it (or
> > > presumably other non-alphanum chars. My quota name had a space in it,
> > > which
> > > somehow got interpreted as 'yes' , i.e.:
> > >
> > > imap: Error: Failed to initialize quota: Invalid quota root quota:
> Unknown
> > > quota backend: yes
> > >
> > > I simply changed the space to an underscore as a workaround, but I'm
> > > curious if there's a better way. I tried various quoting without
> success.
> > > Didn't try escaping yet.
> > >
> > >
> > > 2) Instead of string, return a key value table. you can have spaces in
> > > values.
> > >
> > >
> > >
> > Does this work for auth_passdb_lookup too, or just auth_userdb_lookup?
> I've
> > been returning a table with auth_userdb_lookup just fine. But when I try
> > using it with passdb (and despite being very very sure that a 'password'
> > key exists in the table I'm returning from auth_passdb_lookup() -- I'm
> > logging it one line above the return), the passdb auth fails with this
> log
> > entry:
> >
> > Dec 21 23:29:22 auth-worker(7779): Info:
> > lua(te...@test.com,10.20.103.32,<dSvLQuZg+uIKFGcg>):
> > No password returned (and no nopassword)
> >
> > I guess it's not seeing the password key in the table I'm returning. If I
> > return a concat'd string ("password=... user=...") from
> > auth_passdb_lookup(), it works just fine.
> >
> > I was also curious if there's a way to pass info between
> auth_userdb_lookup
> > and auth_passdb_lookup. I was trying to use a table with
> > auth_passdb_lookup() so I could take advantage of prefetch and thought
> that
> > if auth_passdb_lookup didn't take a table, I could stash data away and
> then
> > un-stash it in auth_userdb_lookup
> >
> > Thanks!
> >
> >
>
> Yeah, this is a bug we have fixed =)
>
> https://github.com/dovecot/core/commit/c86575ac9776d0995355d03719c82e
> 7ceac802e6#diff-83374eeaee91d90e848390ba3c7b264a
>
>

I'm on rc1, so I appear to already have that git commit (as part of rc1).

# /usr/sbin/dovecot  --version
2.3.0.rc1 (12aba5948)

For testing this, I tried replacing my passdb lookup with this:

function auth_passdb_lookup(req)
  passdb_table = {}
  passdb_table[ 'password' ] = 'test'
  passdb_table[ 'user' ] = 'te...@test.com'

  return dovecot.auth.PASSDB_RESULT_OK, passdb_table
end

and still get:

Dec 22 01:17:17 auth-worker(9711): Info:
lua(te...@test.com,10.20.103.32,):
No password returned (and no nopassword)

Replacing that return statement with this:

return dovecot.auth.PASSDB_RESULT_OK, 'password=test user=te...@test.com'

authenticates successfully.


Re: Lua Auth

2017-12-21 Thread Mark Moseley
>
>
>
>
> 2) Is there an appropriate way to return data with spaces in it (or
> presumably other non-alphanum chars. My quota name had a space in it,
> which
> somehow got interpreted as 'yes' , i.e.:
>
> imap: Error: Failed to initialize quota: Invalid quota root quota: Unknown
> quota backend: yes
>
> I simply changed the space to an underscore as a workaround, but I'm
> curious if there's a better way. I tried various quoting without success.
> Didn't try escaping yet.
>
>
> 2) Instead of string, return a key value table. you can have spaces in
> values.
>
>
>
Does this work for auth_passdb_lookup too, or just auth_userdb_lookup? I've
been returning a table with auth_userdb_lookup just fine. But when I try
using it with passdb (and despite being very very sure that a 'password'
key exists in the table I'm returning from auth_passdb_lookup() -- I'm
logging it one line above the return), the passdb auth fails with this log
entry:

Dec 21 23:29:22 auth-worker(7779): Info:
lua(te...@test.com,10.20.103.32,):
No password returned (and no nopassword)

I guess it's not seeing the password key in the table I'm returning. If I
return a concat'd string ("password=... user=...") from
auth_passdb_lookup(), it works just fine.

I was also curious if there's a way to pass info between auth_userdb_lookup
and auth_passdb_lookup. I was trying to use a table with
auth_passdb_lookup() so I could take advantage of prefetch and thought that
if auth_passdb_lookup didn't take a table, I could stash data away and then
un-stash it in auth_userdb_lookup

Thanks!




> 3) response_from_template expands a key=value string into table by var
> expanding values.
>
>
> var_expand can be used to interpolation for any purposes. it returns a
> string. see https://wiki.dovecot.org/Variables for details on how to use
> it.
>
>
> Individual variable access is more efficient to do directly.
>
>
> ---
> Aki Tuomi
>


Re: v2.3.0 release candidate released

2017-12-18 Thread Mark Moseley
On Mon, Dec 18, 2017 at 2:32 PM, Timo Sirainen <t...@iki.fi> wrote:

> On 18 Dec 2017, at 23.16, Mark Moseley <moseleym...@gmail.com> wrote:
> >
> > doveadm(test1-sha...@test.com): Panic: file buffer.c: line 97
> > (buffer_check_limits): assertion failed: (buf->used <= buf->alloc)
> ..
> > /usr/lib/dovecot/modules/doveadm/lib10_doveadm_sieve_plugin.so(+0x43fe)
> > [0x6ba6997c33fe] ->
>
> Since the panic is coming from pigeonhole, did you recompile it also? And
> what version of it?
>
>
The previous version (that was running happily with
ecfca41e9d998a0f21ce7a4bce1dc78c58c3e015) was 0.4.21. I had compiled 0.4.21
against ecfca41e9d998a0f21ce7a4bce1dc78c58c3e015. RC1 + 0.5.0rc1 stopped
backtracing on me and works ok (minus the 'read' thing I mentioned).


Re: v2.3.0 release candidate released

2017-12-18 Thread Mark Moseley
On Mon, Dec 18, 2017 at 1:16 PM, Mark Moseley <moseleym...@gmail.com> wrote:

> On Mon, Dec 18, 2017 at 7:23 AM, Timo Sirainen <t...@iki.fi> wrote:
>
>> https://dovecot.org/releases/2.3/rc/dovecot-2.3.0.rc1.tar.gz
>> https://dovecot.org/releases/2.3/rc/dovecot-2.3.0.rc1.tar.gz.sig
>>
>> It's finally time for v2.3 release branch! There are several new and
>> exciting features in it. I'm especially happy about the new logging and
>> statistics code, which will allow us to generate statistics for just about
>> everything. We didn't have time to implement everything we wanted for them
>> yet, and there especially aren't all that many logging events yet that can
>> be used for statistics. We'll implement those to v2.3.1, which might also
>> mean that some of the APIs might still change in v2.3.1 if that's required.
>>
>> We also have new lib-smtp server code, which was used to implement SMTP
>> submission server and do a partial rewrite for LMTP server. Please test
>> these before v2.3.0 to make sure we don't have any bad bugs left!
>>
>> BTW. The v2.3.0 will most likely be signed with a new PGP key ED409DA1.
>>
>> Some of the larger changes:
>>
>>  * Various setting changes, see https://wiki2.dovecot.org/Upgrading/2.3
>>  * Logging rewrite started: Logging is now based on hierarchical events.
>>This makes it possible to do various things, like: 1) giving
>>consistent log prefixes, 2) enabling debug logging with finer
>>granularity, 3) provide logs in more machine readable formats
>>(e.g. json). Everything isn't finished yet, especially a lot of the
>>old logging code still needs to be translated to the new way.
>>  * Statistics rewrite started: Stats are now based on (log) events.
>>It's possible to gather statistics about any event that is logged.
>>See http://wiki2.dovecot.org/Statistics for details
>>  * ssl_dh setting replaces the old generated ssl-parameters.dat
>>  * IMAP: When BINARY FETCH finds a broken mails, send [PARSE] error
>>instead of [UNKNOWNCTE]
>>  * Linux: core dumping via PR_SET_DUMPABLE is no longer enabled by
>>default due to potential security reasons (found by cPanel Security
>>Team).
>>
>>  + Added support for SMTP submission proxy server, which includes
>>support for BURL and CHUNKING extension.
>>  + LMTP rewrite. Supports now CHUNKING extension and mixing of
>>local/proxy recipients.
>>  + auth: Support libsodium to add support for ARGON2I and ARGON2ID
>>password schemes.
>>  + auth: Support BLF-CRYPT password scheme in all platforms
>>  + auth: Added LUA scripting support for passdb/userdb.
>>See https://wiki2.dovecot.org/AuthDatabase/Lua
>>  - Input streams are more reliable now when there are errors or when
>>the maximum buffer size is reached. Previously in some situations
>>this could have caused Dovecot to try to read already freed memory.
>>  - Output streams weren't previously handling failures when writing a
>>trailer at the end of the stream. This mainly affected encrypt and
>>zlib compress ostreams, which could have silently written truncated
>>files if the last write happened to fail (which shouldn't normally
>>have ever happened).
>>  - virtual plugin: Fixed panic when fetching mails from virtual
>>mailboxes with IMAP BINARY extension.
>>  - Many other smaller fixes
>>
>>
>
> No issue compiling (and very very excited about this release, esp the Lua
> code, which is already super useful).
>
> I did have this one issue so far with the RC. I was previously using a git
> checkout of ecfca41e9d998a0f21ce7a4bce1dc78c58c3e015 with some of the Lua
> patches attached. That was working just fine (except for one thing I'll
> mention below). I rolled the RC and got this (and I was actually testing
> for the issue I had with ecfca41e9d998a0f21ce7a4bce1dc78c58c3e015):
>
> # doveadm -D acl set -u test1-sha...@test.com INBOX user=te...@test.com
> read  list
> Debug: Loading modules from directory: /usr/lib/dovecot/modules
> Debug: Module loaded: /usr/lib/dovecot/modules/lib01_acl_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/modules/
> lib02_lazy_expunge_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/modules/lib10_quota_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/modules/lib20_fts_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/modules/lib20_virtual_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/modules/lib20_zlib_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/modules/lib21_fts_lucene_plugin.so
> Debug: Loading modules from directory: /usr/lib/dovecot/

Re: v2.3.0 release candidate released

2017-12-18 Thread Mark Moseley
On Mon, Dec 18, 2017 at 7:23 AM, Timo Sirainen  wrote:

> https://dovecot.org/releases/2.3/rc/dovecot-2.3.0.rc1.tar.gz
> https://dovecot.org/releases/2.3/rc/dovecot-2.3.0.rc1.tar.gz.sig
>
> It's finally time for v2.3 release branch! There are several new and
> exciting features in it. I'm especially happy about the new logging and
> statistics code, which will allow us to generate statistics for just about
> everything. We didn't have time to implement everything we wanted for them
> yet, and there especially aren't all that many logging events yet that can
> be used for statistics. We'll implement those to v2.3.1, which might also
> mean that some of the APIs might still change in v2.3.1 if that's required.
>
> We also have new lib-smtp server code, which was used to implement SMTP
> submission server and do a partial rewrite for LMTP server. Please test
> these before v2.3.0 to make sure we don't have any bad bugs left!
>
> BTW. The v2.3.0 will most likely be signed with a new PGP key ED409DA1.
>
> Some of the larger changes:
>
>  * Various setting changes, see https://wiki2.dovecot.org/Upgrading/2.3
>  * Logging rewrite started: Logging is now based on hierarchical events.
>This makes it possible to do various things, like: 1) giving
>consistent log prefixes, 2) enabling debug logging with finer
>granularity, 3) provide logs in more machine readable formats
>(e.g. json). Everything isn't finished yet, especially a lot of the
>old logging code still needs to be translated to the new way.
>  * Statistics rewrite started: Stats are now based on (log) events.
>It's possible to gather statistics about any event that is logged.
>See http://wiki2.dovecot.org/Statistics for details
>  * ssl_dh setting replaces the old generated ssl-parameters.dat
>  * IMAP: When BINARY FETCH finds a broken mails, send [PARSE] error
>instead of [UNKNOWNCTE]
>  * Linux: core dumping via PR_SET_DUMPABLE is no longer enabled by
>default due to potential security reasons (found by cPanel Security
>Team).
>
>  + Added support for SMTP submission proxy server, which includes
>support for BURL and CHUNKING extension.
>  + LMTP rewrite. Supports now CHUNKING extension and mixing of
>local/proxy recipients.
>  + auth: Support libsodium to add support for ARGON2I and ARGON2ID
>password schemes.
>  + auth: Support BLF-CRYPT password scheme in all platforms
>  + auth: Added LUA scripting support for passdb/userdb.
>See https://wiki2.dovecot.org/AuthDatabase/Lua
>  - Input streams are more reliable now when there are errors or when
>the maximum buffer size is reached. Previously in some situations
>this could have caused Dovecot to try to read already freed memory.
>  - Output streams weren't previously handling failures when writing a
>trailer at the end of the stream. This mainly affected encrypt and
>zlib compress ostreams, which could have silently written truncated
>files if the last write happened to fail (which shouldn't normally
>have ever happened).
>  - virtual plugin: Fixed panic when fetching mails from virtual
>mailboxes with IMAP BINARY extension.
>  - Many other smaller fixes
>
>

No issue compiling (and very very excited about this release, esp the Lua
code, which is already super useful).

I did have this one issue so far with the RC. I was previously using a git
checkout of ecfca41e9d998a0f21ce7a4bce1dc78c58c3e015 with some of the Lua
patches attached. That was working just fine (except for one thing I'll
mention below). I rolled the RC and got this (and I was actually testing
for the issue I had with ecfca41e9d998a0f21ce7a4bce1dc78c58c3e015):

# doveadm -D acl set -u test1-sha...@test.com INBOX user=te...@test.com
read  list
Debug: Loading modules from directory: /usr/lib/dovecot/modules
Debug: Module loaded: /usr/lib/dovecot/modules/lib01_acl_plugin.so
Debug: Module loaded: /usr/lib/dovecot/modules/lib02_lazy_expunge_plugin.so
Debug: Module loaded: /usr/lib/dovecot/modules/lib10_quota_plugin.so
Debug: Module loaded: /usr/lib/dovecot/modules/lib20_fts_plugin.so
Debug: Module loaded: /usr/lib/dovecot/modules/lib20_virtual_plugin.so
Debug: Module loaded: /usr/lib/dovecot/modules/lib20_zlib_plugin.so
Debug: Module loaded: /usr/lib/dovecot/modules/lib21_fts_lucene_plugin.so
Debug: Loading modules from directory: /usr/lib/dovecot/modules/doveadm
Debug: Module loaded:
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_acl_plugin.so
Debug: Skipping module doveadm_expire_plugin, because dlopen() failed:
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_expire_plugin.so: undefined
symbol: expire_set_deinit (this is usually intentional, so just ignore this
message)
Debug: Module loaded:
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_quota_plugin.so
Debug: Module loaded:
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_sieve_plugin.so
Debug: Module loaded:
/usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_lucene_plugin.so
Debug: Module loaded:

Re: Lua Auth

2017-12-01 Thread Mark Moseley
On Thu, Nov 30, 2017 at 5:26 AM, Stephan Bosch <s.bo...@ox.io> wrote:

>
>
> Op 29-11-2017 om 6:17 schreef Aki Tuomi:
>
>> On November 29, 2017 at 4:37 AM Mark Moseley <moseleym...@gmail.com>
>>> wrote:
>>>
>>>
>>> Just happened to be surfing the docs and saw this. This is beyond
>>> awesome:
>>>
>>> https://wiki2.dovecot.org/AuthDatabase/Lua
>>>
>>> Any words of wisdom on using it? I'd be putting a bunch of mysql logic in
>>> it. Any horrible gotchas there? When it says 'blocking', should I assume
>>> that means that a auth worker process will *not* accept any new auth
>>> lookups until both auth_passdb_lookup() and auth_userdb_lookup() have
>>> completed (in which I'd be doing several mysql calls)? If that's the
>>> case,
>>> I assume that the number of auth workers should be bumped up.
>>>
>>> And is a 2.3 release fairly imminent?
>>>
>> Hi!
>>
>> This feature was added very recently, and there is very little
>> operational experience on it. As the docs should say, blocking=yes means
>> that an auth worker is used, and yes, it will block each auth worker during
>> authentication, but what we tried, it should perform rather nicely.
>>
>> The most important gotcha is to always test your lua code rigorously,
>> because there is not much we can do to save you.
>>
>> It should be present in master branch, so if someone feels like trying it
>> out, please let us know if you find any bugs or strangeness. It's not
>> present in nightlies yet.
>>
>> We are planning on releasing 2.3.0 this year.
>>
>
> The Xi package builder has this feature enabled since yesterday. It is
> available in the dovecot-lua package; the first Xi package that doesn't
> have an official Debian equivalent (yet anyway).
>
>
>
I've been playing with Lua auth and so far no issues. I was previously
putting together a very ugly MySQL stored procedure. Using Lua would be a
lot easier (esp when it comes to returning an arbitrary number of columns).

I'd love to see any test Lua code that the dovecot team has been playing
around with (and I realize it's not remotely production-ready, so don't worry
about caveats).

I did have a couple of questions though:

1) Is the data returned by Lua auth not cacheable? I've got the following
settings (and I'm just using Lua in the userdb lookup, not passdb -- passdb
is doing a lightweight SQL lookup for username/password):

auth_cache_negative_ttl = 1 mins
auth_cache_size = 10 M
auth_cache_ttl = 10 mins

but I notice that every time I auth, it'll redo all the queries in my Lua
code. I'd have expected that data to be served out of cache till the 10min
TTL is up.


2) Is there an appropriate way to return data with spaces in it (or
presumably other non-alphanum chars. My quota name had a space in it, which
somehow got interpreted as 'yes' , i.e.:

imap: Error: Failed to initialize quota: Invalid quota root quota: Unknown
quota backend: yes

I simply changed the space to an underscore as a workaround, but I'm
curious if there's a better way. I tried various quoting without success.
Didn't try escaping yet.


3) Can you elaborate on the "auth_request#response_from_template(template)"
and "auth_request#var_expand(template)" functions? Specifically how to use
them. I'm guessing that I could've used one of them to work around #2 (that
it would have done the escaping for me)


Thanks!


Lua Auth

2017-11-28 Thread Mark Moseley
Just happened to be surfing the docs and saw this. This is beyond awesome:

https://wiki2.dovecot.org/AuthDatabase/Lua

Any words of wisdom on using it? I'd be putting a bunch of mysql logic in
it. Any horrible gotchas there? When it says 'blocking', should I assume
that means that an auth worker process will *not* accept any new auth
lookups until both auth_passdb_lookup() and auth_userdb_lookup() have
completed (in which I'd be doing several mysql calls)? If that's the case,
I assume that the number of auth workers should be bumped up.
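
(I'm assuming "bumped up" mostly means raising the worker count, i.e.
something along these lines -- the number is a guess on my part:)

service auth-worker {
  # more workers, since each one blocks for the duration of the Lua/mysql lookups
  process_limit = 64
}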

And is a 2.3 release fairly imminent?


Re: stats module

2017-11-03 Thread Mark Moseley
On Fri, Nov 3, 2017 at 9:35 AM, Jeff Abrahamson  wrote:

> Sorry, Aki, I don't follow you.  Did I do it wrong in the file 91-stats
> that I shared in my original mail (attached here)?
>
> Jeff
>
>
> On 03/11/17 16:50, Aki Tuomi wrote:
> > You need to add the stats listener, by yourself.
> >
> > Aki
> >
> >> On November 3, 2017 at 5:19 PM Jeff Abrahamson  wrote:
> >>
> >>
> >> Thanks for your suggestions, Steffen.
> >>
> >> Running doveconf -n shows no errors and also, sadly, no mention of the
> >> stats listener:
> >>
> >> ╭╴ (master=)╶╮
> >> ╰ [T] jeff@nantes-1:p27 $ doveconf -n
> >> # 2.2.22 (fe789d2): /etc/dovecot/dovecot.conf
> >> # Pigeonhole version 0.4.13 (7b14904)
> >> # OS: Linux 4.4.0-97-generic x86_64 Ubuntu 16.04.3 LTS
> >> auth_mechanisms = plain login
> >> auth_socket_path = /var/run/dovecot/auth-userdb
> >> mail_location = maildir:~/Maildir
> >> managesieve_notify_capability = mailto
> >> managesieve_sieve_capability = fileinto reject envelope
> >> encoded-character vacation subaddress comparator-i;ascii-numeric
> >> relational regex imap4flags copy include variables body enotify
> >> environment mailbox date index ihave duplicate mime foreverypart
> >> extracttext
> >> namespace inbox {
> >>   inbox = yes
> >>   location =
> >>   mailbox Drafts {
> >> special_use = \Drafts
> >>   }
> >>   mailbox Junk {
> >> special_use = \Junk
> >>   }
> >>   mailbox Sent {
> >> special_use = \Sent
> >>   }
> >>   mailbox "Sent Messages" {
> >> special_use = \Sent
> >>   }
> >>   mailbox Trash {
> >> special_use = \Trash
> >>   }
> >>   prefix =
> >> }
> >> passdb {
> >>   driver = pam
> >> }
> >> plugin {
> >>   sieve = ~/.dovecot.sieve
> >>   sieve_dir = ~/sieve
> >> }
> >> protocols = imap sieve
> >> service auth {
> >>   unix_listener /var/spool/postfix/private/auth {
> >> group = postfix
> >> mode = 0666
> >> user = postfix
> >>   }
> >>   unix_listener /var/spool/postfix/private/dovecot-auth {
> >> group = postfix
> >> mode = 0660
> >> user = postfix
> >>   }
> >> }
> >> service imap-login {
> >>   inet_listener imaps {
> >> port = 993
> >> ssl = yes
> >>   }
> >> }
> >> ssl_cert = 
> >> ssl_cipher_list = EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!ECDSA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
> >> ssl_key = 
> >> ssl_protocols = !SSLv2 !SSLv3
> >> userdb {
> >>   driver = passwd
> >> }
> >> protocol lda {
> >>   deliver_log_format = msgid=%m: %$
> >>   mail_plugins = sieve
> >>   postmaster_address = postmaster
> >>   quota_full_tempfail = yes
> >>   rejection_reason = Your message to <%t> was automatically
> >> rejected:%n%r
> >> }
> >> protocol imap {
> >>   imap_client_workarounds = delay-newmail
> >>   mail_max_userip_connections = 20
> >> }
> >> protocol pop3 {
> >>   mail_max_userip_connections = 10
> >>   pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
> >> }
> >> ╭╴ (master=)╶╮
> >> ╰ [T] jeff@nantes-1:p27 $
> >>
> >> Here I have a tail -f /var/log/mail.log and mail.err running in the
> >> background so we can see the results of the restart:
> >>
> >> [T] jeff@nantes-1:conf.d $ ls -l
> >> total 136
> >> -rw-r--r-- 1 root root  5301 Aug 25 15:26 10-auth.conf
> >> -rw-r--r-- 1 root root  1893 Mar 16  2016 10-director.conf
> >> -rw-r--r-- 1 root root  2805 Mar 16  2016 10-logging.conf
> >> -rw-r--r-- 1 root root 16172 Aug 25 15:35 10-mail.conf
> >> -rw-r--r-- 1 root root  3547 Aug 25 15:40 10-master.conf
> >> -rw-r--r-- 1 root root  2307 Aug 25 16:27 10-ssl.conf
> >> -rw-r--r-- 1 root root   291 Apr 11  2017 10-tcpwrapper.conf
> >> -rw-r--r-- 1 root root  1668 Mar 16  2016 15-lda.conf
> >> -rw-r--r-- 1 root root  2808 Mar 16  2016 15-mailboxes.conf
> >> -rw-r--r-- 1 root root  3295 Mar 16  2016 20-imap.conf
> >> -rw-r--r-- 1 root root  2398 Apr 11  2017 20-managesieve.conf
> >> -rw-r--r-- 1 root root  4109 Aug 25 15:28 20-pop3.conf
> >> -rw-r--r-- 1 root root   676 Mar 16  2016 90-acl.conf
> >> -rw-r--r-- 1 root root   292 Mar 16  2016 90-plugin.conf
> >> -rw-r--r-- 1 root root  2502 Mar 16  2016 90-quota.conf
> >> -rw-r--r-- 1 root root  6822 Apr 11  2017 90-sieve.conf
> >> -rw-r--r-- 1 root root  1829 Apr 11  2017 90-sieve-extprograms.conf
> >> -rw-r--r-- 1 root root  1856 Nov  3 16:11 91-stats
> >> -rw-r--r-- 1 root root  1430 Oct 31 16:33
> 99-mail-stack-delivery.conf
> >> -rw-r--r-- 1 root root   499 Mar 16  2016
> 

Re: Conditionally disabling auth policy

2017-09-28 Thread Mark Moseley
On Thu, Sep 28, 2017 at 9:34 AM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

>
> > On September 28, 2017 at 7:20 PM Mark Moseley <moseleym...@gmail.com>
> wrote:
> >
> >
> > On Wed, Sep 27, 2017 at 10:06 PM, Aki Tuomi <aki.tu...@dovecot.fi>
> wrote:
> >
> > >
> > >
> > > On 27.09.2017 20:14, Mark Moseley wrote:
> > > > On Wed, Sep 27, 2017 at 10:03 AM, Marcus Rueckert <da...@opensu.se>
> > > wrote:
> > > >
> > > >> On 2017-09-27 16:57:44 +, Mark Moseley wrote:
> > > >>> I've been digging into the auth policy stuff with weakforced
> lately.
> > > >> There
> > > >>> are cases (IP ranges, so could be wrapped up in remote {} blocks)
> where
> > > >>> it'd be nice to skip the auth policy (internal hosts that I can
> trust,
> > > >> but
> > > >>> that are hitting the same servers as the outside world).
> > > >>>
> > > >>> Is there any way to disable auth policy, possibly inside a
> remote{}?
> > > >>>
> > > >>> auth_policy_server_url complains that it can't be used inside a
> remote
> > > >>> block, so no dice there. Anything I'm missing?
> > > >> From my config:
> > > >> ```
> > > >>   allowed_subnets=newNetmaskGroup()
> > > >>   allowed_subnets:addMask('fe80::/64')
> > > >>   allowed_subnets:addMask('127.0.0.0/8')
> > > >> [snip]
> > > >>   if (not(allowed_subnets.match(lt.remote)))
> > > >>   -- do GeoIP check
> > > >>   end
> > > >> ```
> > > >>
> > > >> of course could just skip all checks in that case if really wanted.
> but
> > > >> you probably want to be careful not to skip too many checks
> otherwise
> > > >> the attack moves from your imap port e.g. to your webmailer.
> > > >>
> > > >>
> > > >>
> > > > Hi. Yup, I've got my own whitelisting going on, on the wforce side of
> > > > things. I'm just looking to forgo the 3 HTTP reqs completely to
> wforce,
> > > > from the dovecot side, if possible. I've got some internal services
> that
> > > > can generate a significant amount of dovecot logins, but it's kind of
> > > silly
> > > > to keep doing auth policy lookups for those internal servers.
> > > >
> > > > To continue the Lua thread, I was thinking I could also drop a local
> > > > openresty to do some conditional lookups. I.e. if remote IP is known
> > > good,
> > > > a localhost nginx just sends back the response; if not a known good
> IP,
> > > > then proxy the req over to the wforce cluster. That might be a bit
> > > overkill
> > > > though :)
> > > Hi!
> > >
> > > Currently it's not possible to disable auth_policy conditionally,
> > > although it does sound like a good idea.
> > >
> > > You should probably see also if your webmail supports passing the
> > > original IP to dovecot using
> > >
> > > a01 ID ("X-Original-IP" "1.2.3.4")
> > >
> > > before login, which would let you use weakforced in more meaningful
> way.
> > > There are few other fields too that can be used
> > >
> > > Aki
> > >
> >
> > Yup, I've got that set up. I've got no problems with short-circuiting the
> > request on the weakforce side quickly, in case of known good ips. Just
> > hoping to avoid some unnecessary auth policy lookups.
> >
> > Out of curiosity (and I've googled this before), what other fields can be
> > used there?
>
> * x-originating-ip - client IP
> * x-originating-port - client port
> * x-connected-ip - server IP (like, on proxy)
> * x-connected-port - server port
> * x-proxy-ttl - non-negative integer, decremented on each hop, used for
> loop detection.
> * x-session-id - session ID, if you want to provide one
> * x-session-ext-id - session prefix
> * x-forward- - field to import into passdb during
> authentication, comes prefixed with forward_. e.g if you set
> x-forward-username, it comes as forward_username, and can be used like
>
> username=%{forward_username}
>


The 'forward' stuff is gold. I found that I had to access it like this:
%{passwd:forward_}

Is that the right way to use it?

Also (unrelated), I noticed this in the wiki but it's not in the release
notes for 2.2.32 (and it sounds super useful):

"Since v2.2.32 it's possible to use conditionals in variable expansion"


Re: Conditionally disabling auth policy

2017-09-28 Thread Mark Moseley
On Wed, Sep 27, 2017 at 10:06 PM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

>
>
> On 27.09.2017 20:14, Mark Moseley wrote:
> > On Wed, Sep 27, 2017 at 10:03 AM, Marcus Rueckert <da...@opensu.se>
> wrote:
> >
> >> On 2017-09-27 16:57:44 +, Mark Moseley wrote:
> >>> I've been digging into the auth policy stuff with weakforced lately.
> >> There
> >>> are cases (IP ranges, so could be wrapped up in remote {} blocks) where
> >>> it'd be nice to skip the auth policy (internal hosts that I can trust,
> >> but
> >>> that are hitting the same servers as the outside world).
> >>>
> >>> Is there any way to disable auth policy, possibly inside a remote{}?
> >>>
> >>> auth_policy_server_url complains that it can't be used inside a remote
> >>> block, so no dice there. Anything I'm missing?
> >> From my config:
> >> ```
> >>   allowed_subnets=newNetmaskGroup()
> >>   allowed_subnets:addMask('fe80::/64')
> >>   allowed_subnets:addMask('127.0.0.0/8')
> >> [snip]
> >>   if (not(allowed_subnets.match(lt.remote)))
> >>   -- do GeoIP check
> >>   end
> >> ```
> >>
> >> of course could just skip all checks in that case if really wanted. but
> >> you probably want to be careful not to skip too many checks otherwise
> >> the attack moves from your imap port e.g. to your webmailer.
> >>
> >>
> >>
> > Hi. Yup, I've got my own whitelisting going on, on the wforce side of
> > things. I'm just looking to forgo the 3 HTTP reqs completely to wforce,
> > from the dovecot side, if possible. I've got some internal services that
> > can generate a significant amount of dovecot logins, but it's kind of
> silly
> > to keep doing auth policy lookups for those internal servers.
> >
> > To continue the Lua thread, I was thinking I could also drop a local
> > openresty to do some conditional lookups. I.e. if remote IP is known
> good,
> > a localhost nginx just sends back the response; if not a known good IP,
> > then proxy the req over to the wforce cluster. That might be a bit
> overkill
> > though :)
> Hi!
>
> Currently it's not possible to disable auth_policy conditionally,
> although it does sound like a good idea.
>
> You should probably see also if your webmail supports passing the
> original IP to dovecot using
>
> a01 ID ("X-Original-IP" "1.2.3.4")
>
> before login, which would let you use weakforced in more meaningful way.
> There are few other fields too that can be used
>
> Aki
>

Yup, I've got that set up. I've got no problems with short-circuiting the
request on the weakforce side quickly, in case of known good ips. Just
hoping to avoid some unnecessary auth policy lookups.

Out of curiosity (and I've googled this before), what other fields can be
used there?


Re: Conditionally disabling auth policy

2017-09-27 Thread Mark Moseley
On Wed, Sep 27, 2017 at 10:03 AM, Marcus Rueckert <da...@opensu.se> wrote:

> On 2017-09-27 16:57:44 +0000, Mark Moseley wrote:
> > I've been digging into the auth policy stuff with weakforced lately.
> There
> > are cases (IP ranges, so could be wrapped up in remote {} blocks) where
> > it'd be nice to skip the auth policy (internal hosts that I can trust,
> but
> > that are hitting the same servers as the outside world).
> >
> > Is there any way to disable auth policy, possibly inside a remote{}?
> >
> > auth_policy_server_url complains that it can't be used inside a remote
> > block, so no dice there. Anything I'm missing?
>
> From my config:
> ```
>   allowed_subnets=newNetmaskGroup()
>   allowed_subnets:addMask('fe80::/64')
>   allowed_subnets:addMask('127.0.0.0/8')
> [snip]
>   if (not(allowed_subnets.match(lt.remote)))
>   -- do GeoIP check
>   end
> ```
>
> of course could just skip all checks in that case if really wanted. but
> you probably want to be careful not to skip too many checks otherwise
> the attack moves from your imap port e.g. to your webmailer.
>
>
>

Hi. Yup, I've got my own whitelisting going on, on the wforce side of
things. I'm just looking to forgo the 3 HTTP reqs completely to wforce,
from the dovecot side, if possible. I've got some internal services that
can generate a significant amount of dovecot logins, but it's kind of silly
to keep doing auth policy lookups for those internal servers.

To continue the Lua thread, I was thinking I could also drop a local
openresty to do some conditional lookups. I.e. if remote IP is known good,
a localhost nginx just sends back the response; if not a known good IP,
then proxy the req over to the wforce cluster. That might be a bit overkill
though :)
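
(For reference, the wforce-side short-circuit amounts to something like this
-- the subnets are placeholders and the return signature is from the wforce
docs as I recall them:)

local internal_nets = newNetmaskGroup()
internal_nets:addMask("10.0.0.0/8")        -- example ranges only
internal_nets:addMask("192.168.0.0/16")

function allow(lt)
  -- trusted internal host: allow immediately, skip GeoIP/time-window checks
  if internal_nets:match(lt.remote) then
    return 0, "", "", {}
  end
  -- ... the usual policy checks go here ...
  return 0, "", "", {}
end
setAllow(allow)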


Conditionally disabling auth policy

2017-09-27 Thread Mark Moseley
I've been digging into the auth policy stuff with weakforced lately. There
are cases (IP ranges, so could be wrapped up in remote {} blocks) where
it'd be nice to skip the auth policy (internal hosts that I can trust, but
that are hitting the same servers as the outside world).

Is there any way to disable auth policy, possibly inside a remote{}?

auth_policy_server_url complains that it can't be used inside a remote
block, so no dice there. Anything I'm missing?


Re: lazy_expunge doesn't work since upgrade 2.2.30 to 2.2.32

2017-09-18 Thread Mark Moseley
On Fri, Sep 15, 2017 at 1:42 PM, Harald Leithner <leith...@itronic.at>
wrote:

> Am 2017-09-15 21:25, schrieb Mark Moseley:
>
>> On Wed, Sep 6, 2017 at 2:01 AM, Harald Leithner <leith...@itronic.at>
>> wrote:
>>
>> Any idea why this doesn't setup work anymore in 2.2.32?
>>>
>>> thx
>>>
>>> Harald
>>>
>>>
>>>
>> I just ran into the same thing myself. For me, when I added this to the
>> "location" in the expunged namespace, it started working again:
>>
>> ...:LISTINDEX=expunged.list.index
>>
>> I can't tell you why exactly (presumably has something to do with
>> "mailbox_list_index = yes"). Perhaps the devs might know?
>>
>
> Thx, this seams to fixed the problem.
>


Dovecot guys, any idea why this was the case? I just want to understand why
so I can keep an eye out during future upgrades.


Re: lazy_expunge doesn't work since upgrade 2.2.30 to 2.2.32

2017-09-15 Thread Mark Moseley
On Wed, Sep 6, 2017 at 2:01 AM, Harald Leithner  wrote:

> Any idea why this doesn't setup work anymore in 2.2.32?
>
> thx
>
> Harald
>
>

I just ran into the same thing myself. For me, when I added this to the
"location" in the expunged namespace, it started working again:

...:LISTINDEX=expunged.list.index

I can't tell you why exactly (presumably has something to do with
"mailbox_list_index = yes"). Perhaps the devs might know?


Re: weakforced

2017-08-17 Thread Mark Moseley
On Thu, Aug 17, 2017 at 1:16 AM, Teemu Huovila 
wrote:

> Below is an answer by the current weakforced main developer. It overlaps
> partly with Samis answer.
>
> ---snip---
>  > Do you have any hints/tips/guidelines for things like sizing, both in a
> > per-server sense (memory, mostly) and in a cluster-sense (logins per sec
> ::
> > node ratio)? I'm curious too how large is quite large. Not looking for
> > details but just a ballpark figure. My largest install would have about 4
> > million mailboxes to handle, which I'm guessing falls well below 'quite
> > large'. Looking at stats, our peak would be around 2000 logins/sec.
> >
>
> So in terms of overall requests per second, on a 4 CPU server, latencies
> start to rise pretty quickly once you get to around 18K requests per
> second. Now, bearing in mind that each login from Dovecot could generate 2
> allow and 1 report requests, this leads to roughly 6K logins per second on
> a 4 CPU server.
>
> In terms of memory usage, the more the better obviously, but it depends on
> your policy and how many time windows you have. Most of our customers have
> 24GB+.
>
> > I'm also curious if -- assuming they're well north of 2000 logins/sec --
> > the replication protocol begins to overwhelm the daemon at very high
> > concurrency.
> >
> Eventually it will, but in tests it consumes a pretty tiny fraction of the
> overall CPU load compared to requests so it must be a pretty high limit.
> Also, if you don’t update the time windows DB in the allow function, then
> that doesn’t cause any replication. We’ve tested with three servers, each
> handling around 5-6000 logins/sec (i.e. 15-18K requests each) and the
> overall query rate was maintained.
>
> > Any rules of thumb on things like "For each additional 1000 logins/sec,
> add
> > another # to setNumSiblingThreads and another # to setNumWorkerThreads"
> > would be super appreciated too.
> >
>
> Actually the rule of thumb is more like:
>
> - WorkerThreads - Set to number of CPUs. Set number of LuaContexts to
> WorkerThreads + 2
> - SiblingThreads - Leave at 2 unless you see issues.
>
> > Thanks! And again, feel free to point me elsewhere if there's a better
> > place to ask.
> Free free to ask questions using the weakforced issues on GitHub.
>
> > For a young project, the docs are actually quite good.
>
> Thanks, that’s appreciated - we try to keep them up to date and
> comprehensive.
>
>
>

Wow, wow, wow. Thanks so much to all three of you guys for such detailed
answers. That's absolutely perfect info and just what I was looking for.
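
(Translating that rule of thumb for a 4-CPU node into config, it'd be roughly
the following -- function names are from the wforce config docs, so
double-check against your version:)

setNumWorkerThreads(4)      -- one per CPU
setNumLuaStates(6)          -- worker threads + 2
setNumSiblingThreads(2)     -- leave at 2 unless it becomes a bottleneck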


Re: weakforced

2017-08-16 Thread Mark Moseley
On Tue, Jul 18, 2017 at 10:40 PM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:

>
>
> On 19.07.2017 02:38, Mark Moseley wrote:
> > I've been playing with weakforced, so it fills in the 'fail2ban across a
> > cluster' niche (not to mention RBLs). It seems to work well, once you've
> > actually read the docs :)
> >
> > I was curious if anyone had played with it and was *very* curious if
> anyone
> > was using it in high traffic production. Getting things to 'work' versus
> > getting them to work *and* handle a couple hundred dovecot servers is a
> > very wide margin. I realize this is not a weakforced mailing list (there
> > doesn't appear to be one anyway), but the users here are some of the
> > likeliest candidates for having tried it out.
> >
> > Mainly I'm curious if weakforced can handle serious concurrency and
> whether
> > the cluster really works under load.
>
> Hi!
>
> Weakforced is used by some of our customers in quite large
> installations, and performs quite nicely.
>
>
>

Cool, good to know.

Do you have any hints/tips/guidelines for things like sizing, both in a
per-server sense (memory, mostly) and in a cluster-sense (logins per sec ::
node ratio)? I'm curious too how large is quite large. Not looking for
details but just a ballpark figure. My largest install would have about 4
million mailboxes to handle, which I'm guessing falls well below 'quite
large'. Looking at stats, our peak would be around 2000 logins/sec.

I'm also curious if -- assuming they're well north of 2000 logins/sec --
the replication protocol begins to overwhelm the daemon at very high
concurrency.

Any rules of thumb on things like "For each additional 1000 logins/sec, add
another # to setNumSiblingThreads and another # to setNumWorkerThreads"
would be super appreciated too.

Thanks! And again, feel free to point me elsewhere if there's a better
place to ask. For a young project, the docs are actually quite good.


weakforced

2017-07-18 Thread Mark Moseley
I've been playing with weakforced, so it fills in the 'fail2ban across a
cluster' niche (not to mention RBLs). It seems to work well, once you've
actually read the docs :)

I was curious if anyone had played with it and was *very* curious if anyone
was using it in high traffic production. Getting things to 'work' versus
getting them to work *and* handle a couple hundred dovecot servers is a
very wide margin. I realize this is not a weakforced mailing list (there
doesn't appear to be one anyway), but the users here are some of the
likeliest candidates for having tried it out.

Mainly I'm curious if weakforced can handle serious concurrency and whether
the cluster really works under load.
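
For context, the Dovecot side of wiring this up is just the auth policy
settings, roughly like the sketch below (URL, port and credentials are
placeholders, not a tested config):

  # point Dovecot's auth policy at a wforce instance
  auth_policy_server_url = http://127.0.0.1:8084/
  # HTTP basic auth for wforce's webserver (placeholder value)
  auth_policy_server_api_header = Authorization: Basic <base64 of webserver user:password>
  # site-specific nonce used when hashing passwords for policy requests
  auth_policy_hash_nonce = some-long-random-site-specific-string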


Re: v2.2.31 release candidate released

2017-06-23 Thread Mark Moseley
On Fri, Jun 23, 2017 at 4:30 AM, Timo Sirainen <t...@iki.fi> wrote:

> On 23 Jun 2017, at 3.44, Mark Moseley <moseleym...@gmail.com> wrote:
> >
> > It'd be great if https://dovecot.org/list/dovecot/2016-June/104763.html
> > could make it into this RC (assuming you guys approved it back when it
> was
> > submitted)
>
> I'll try to get it to 2.2.32. 2.2.31 won't have any changes anymore that
> aren't absolutely required.
>
>
Sounds good to me :)

Thanks!


Re: v2.2.31 release candidate released

2017-06-22 Thread Mark Moseley
On Tue, Jun 20, 2017 at 12:45 PM, Timo Sirainen  wrote:

> https://dovecot.org/releases/2.2/rc/dovecot-2.2.31.rc1.tar.gz
> https://dovecot.org/releases/2.2/rc/dovecot-2.2.31.rc1.tar.gz.sig
>
> Unless new bugs are found, this will be the final v2.2.31 release, which
> will be released on Monday.
>
>  * LMTP: Removed "(Dovecot)" from added Received headers. Some
>installations want to hide it, and there's not really any good reason
>for anyone to have it.
>
>  + Add ssl_alt_cert and ssl_alt_key settings to add support for
>having both RSA and ECDSA certificates.
>  + pop3-migration plugin: Strip trailing whitespace from headers
>when matching mails between IMAP and POP3. This helps with migrations
>from Zimbra.
>  + acl: Add acl_globals_only setting to disable looking up
>per-mailbox dovecot-acl files.
>  + Parse invalid message addresses better. This mainly affects the
>generated IMAP ENVELOPE replies.
>  - v2.2.30 wasn't fixing corrupted dovecot.index.cache files properly.
>It could have deleted wrong mail's cache or assert-crashed.
>  - v2.2.30 mail-crypt-acl plugin was assert-crashing
>  - v2.2.30 welcome plugin wasn't working
>  - Various fixes to handling mailbox listing. Especially related to
>handling nonexistent autocreated/autosubscribed mailboxes and ACLs.
>  - Global ACL file was parsed as if it was local ACL file. This caused
>some of the ACL rule interactions to not work exactly as intended.
>  - auth: forward_* fields didn't work properly: Only the first forward
>field was working, and only if the first passdb lookup succeeded.
>  - Using mail_sort_max_read_count sometimes caused "Broken sort-*
>indexes, resetting" errors.
>  - Using mail_sort_max_read_count may have caused very high CPU usage.
>  - Message address parsing could have crashed on invalid input.
>  - imapc_features=fetch-headers wasn't always working correctly and
>caused the full header to be fetched.
>  - imapc: Various bugfixes related to connection failure handling.
>  - quota=imapc sent unnecessary FETCH RFC822.SIZE to server when
>expunging mails.
>  - quota=count: quota_warning = -storage=.. was never executed
>  - quota=count: Add support for "ns" parameter
>  - dsync: Fix incremental syncing for mails that don't have Date or
>Message-ID headers.
>  - imap: Fix hang when client sends pipelined SEARCH +
>EXPUNGE/CLOSE/LOGOUT.
>  - oauth2: Token validation didn't accept empty server responses.
>  - imap: NOTIFY command has been almost completely broken since the
>beginning. I guess nobody has been trying to use it.
>


It'd be great if https://dovecot.org/list/dovecot/2016-June/104763.html
could make it into this RC (assuming you guys approved it back when it was
submitted)
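
Side note for anyone reading the quoted changelog: the new ssl_alt_cert /
ssl_alt_key settings are meant to sit alongside the existing ssl_cert /
ssl_key, roughly like this (paths are placeholders):

  # primary RSA certificate
  ssl_cert = </etc/dovecot/ssl/rsa.crt
  ssl_key = </etc/dovecot/ssl/rsa.key
  # alternative ECDSA certificate for clients that prefer it
  ssl_alt_cert = </etc/dovecot/ssl/ecdsa.crt
  ssl_alt_key = </etc/dovecot/ssl/ecdsa.key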


Redis auth?

2017-06-12 Thread Mark Moseley
I was curious what the status of this redis auth patch is:

https://dovecot.org/list/dovecot/2016-June/104763.html


Re: Retrieving mail from read-only mdbox

2017-06-01 Thread Mark Moseley
On Wed, May 31, 2017 at 3:24 PM, M. Balridge <dove...@r.paypc.com> wrote:

> Quoting Mark Moseley <moseleym...@gmail.com>:
>
> > I've tried using IMAP with mail_location pointed at the snapshot, but,
> > though I can get a listing of emails in the mailbox, the fetch fails when
> > dovecot can't write-lock dovecot.index.log.
>
> I'm surprised that dovecot would even try to write-lock a write-protected
> file/directory, though I can appreciate the situation where a file may be
> in a
> directory that is writable by some UID other than the one dovecot is
> running as.
>
> Is there an unsafe control over lock_method similar to Samba's fake oplocks
> setting in Dovecot?
>
> If anyone wants some good "horror" writing, a perusal of Jeremy Allison's
> write-up on the schizophrenic approaches to file-locking is worthy of your
> time.
>
> https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
>
>
>
There's no fake locks from what I can tell. If I'm reading the source
right, the only valid options are fcntl, flock, and dotlock.
Tried 'em all :)


Re: Retrieving mail from read-only mdbox

2017-06-01 Thread Mark Moseley
>
> >
> > > I've tried using IMAP with mail_location pointed at the snapshot, but,
> > > though I can get a listing of emails in the mailbox, the fetch fails
> when
> > > dovecot can't write-lock dovecot.index.log.
> >
> > I've thought about doing this someday (adding snapshots to a user's
> > namespace) but never got around to doing it.  Snapshots get rotated
> > (e.g. hourly.1 -> hourly.2 -> etc.)  so every hour, so any indices
> > produced gets invalidated.  You would need to generate MEMORY indices
> > to create it on the fly.  Something like
> >
> >   namespace snapshots {
> >   location = ...:INDEX==MEMORY
> >   ...
> >   }
> >
> > I'm not sure how dovecot would react when NetApp pulls the rug out
> > from under one of the hourly snapshots and replaces it with the next
> > hour's version.
> >
> > Joseph Tam 
>
> location=...:INDEX=MEMORY, actually.
>
> When the rug gets pulled, what happens, depends on whether the user has
> the snapshots location open or not, but it would probably kick the user out
> in the end and complain. But then the user would probably reconnect? =)
>
>

I didn't want to muddy the waters in my first message with all the stuff
I've tried, but that was one of them. On our existing maildir mailboxes, we
have INDEX pointed at a local SSD, so that was one of the first things I
tried. For this readonly mdbox, I tried pointing INDEX= at a local disk, as
well as MEMORY. I've also got CONTROL= set to local disk as well for this.

However (and I assumed that this was probably due to the nature of mdbox
and a known thing), if I set INDEX to anything (MEMORY or a path) on my
mdbox mailboxes, dovecot acts as if the mailbox is empty. I've tried every
permutation I can think of, but the end result is this: without INDEX=, I
can get a list of messages (still can't FETCH due to the index); if I add
INDEX=, I get no results. Debug output with INDEX shows that auth is ok,
the index is being picked up correctly and that the mail_location is still
correct.

I notice too that when I have INDEX set to a path, when I strace the imap
process, though it stat()s the mail_location, it never once tries to stat()
or open any of the storage files under the mail_location. It *does* stat()
the 'storage' directory inside the snapshot, but never walks that directory
nor stat()s the m.# files inside of it.

If I have INDEX set to MEMORY, as of 2.2.30 (I didn't see this with 2.2.27,
though I still got an empty mailbox result), I get a traceback in the logs:

Jun 01 14:10:06 imap-login: Info: Login: user=,
method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=8542, secured,
session=
Jun 01 14:10:06 imap(md...@test.com): Panic: file mailbox-list.c: line
1330: unreached
Jun 01 14:10:06 imap(md...@test.com): Error: Raw backtrace:
/usr/lib/dovecot/libdovecot.so.0(+0x9d9e2) [0x6b2dd043f9e2] ->
/usr/lib/dovecot/libdovecot.so.0(+0x9dacd) [0x6b2dd043facd] ->
/usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x6b2dd03d1821] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_list_get_root_forced+0x51)
[0x6b2dd0721891] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mdbox_map_init+0x28)
[0x6b2dd07376b8] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mdbox_storage_create+0xeb)
[0x6b2dd073e2cb] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_storage_create_full+0x3d4)
[0x6b2dd0714884] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_storage_create+0x2c)
[0x6b2dd0714c2c] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_namespaces_init_add+0x159)
[0x6b2dd070ac09] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_namespaces_init+0xd9)
[0x6b2dd070bd99] -> dovecot/imap [md...@test.com 127.0.0.1]() [0x426a10] ->
/usr/lib/dovecot/libdovecot.so.0(+0x37783) [0x6b2dd03d9783] ->
/usr/lib/dovecot/libdovecot.so.0(+0x37a4d) [0x6b2dd03d9a4d] ->
/usr/lib/dovecot/libdovecot.so.0(+0x383da) [0x6b2dd03da3da] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x52) [0x6b2dd0454c42] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x109)
[0x6b2dd04562b9] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x3c) [0x6b2dd0454cdc]
-> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x6b2dd0454e88] ->
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x6b2dd03dbd93]
-> dovecot/imap [md...@test.com 127.0.0.1](main+0x302) [0x40caa2] ->
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x6b2dcfff9830] ->
dovecot/imap [md...@test.com 127.0.0.1](_start+0x29) [0x40cc29]
Jun 01 14:10:06 imap(md...@test.com): Fatal: master: service(imap): child
8542 killed with signal 6 (core dumps disabled)

But again, I figured that maybe this was normal for mdbox (or that I've
goofed something else up). Also, the 'If I set index, the mailbox is empty
or I get a traceback' happens if I'm looking at a readonly mdbox or a
regular, writable mdbox (and the writable one works perfectly if you remove
INDEX=).

BTW, in my case, the 'snapshot' IMAP wouldn't be directly accessible to end

Retrieving mail from read-only mdbox

2017-05-31 Thread Mark Moseley
This is a 'has anyone run into this and solved it' post. And yes, I've been
reading and re-reading TFM but without luck. The background is that I'm
working on tooling before we start a mass maildir->mdbox conversion. One of
those tools is recovering mail from backups (easy as pie with maildir).

We've got all of our email on Netapp file servers. They have nice
snapshotting but the snapshots are, of course, readonly.

My question: is there a doveadm command that will allow for email to be
retrieved from a readonly mdbox, either directly (like manipulating the
mdbox files directly) or by doveadm talking to the dovecot processes?

Ideally, there'd be something like doveadm dump, but that could dump
selected message contents.

I've tried using IMAP with mail_location pointed at the snapshot, but,
though I can get a listing of emails in the mailbox, the fetch fails when
dovecot can't write-lock dovecot.index.log.

If anyone has gotten something similar to work, I'd love to hear about it.
A working IMAP setup would be ideal, since it's more easily automatable
(but I'll take whatever I can get).

Any and all hints are most welcome!
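
For later readers: one workaround to consider (a sketch only, untested against a
real NetApp snapshot; paths, user and folder name are placeholders) is to copy
the snapshot somewhere writable, where locking works, and then pull the mail
back in with doveadm import:

  # copy the read-only snapshot to scratch space
  cp -a /nfs/.snapshot/hourly.1/mail/jdoe/mdbox /var/tmp/restore-jdoe
  # import everything from the copy into the live account under "Restored"
  doveadm import -u jdoe mdbox:/var/tmp/restore-jdoe Restored all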


Re: Host ... is being updated before previous update had finished

2017-04-21 Thread Mark Moseley
Timo/Aki/Dovecot guys, any hints here? Is this a bug? Design issue?

On Fri, Apr 7, 2017 at 10:10 AM Mark Moseley <moseleym...@gmail.com> wrote:

> On Mon, Apr 3, 2017 at 6:04 PM, Mark Moseley <moseleym...@gmail.com>
> wrote:
>
>> We just had a bunch of backend boxes go down due to a DDoS in our
>> director cluster. When the DDoS died down, our director ring was a mess.
>>
>> Each box had thousands (and hundreds per second, which is a bit much) of
>> log lines like the following:
>>
>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>> 10.1.17.15 is being updated before previous update had finished (up ->
>> down) - setting to state=down vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>> 10.1.17.15 is being updated before previous update had finished (down ->
>> up) - setting to state=up vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>> 10.1.17.15 is being updated before previous update had finished (up ->
>> down) - setting to state=down vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>> 10.1.17.15 is being updated before previous update had finished (down ->
>> up) - setting to state=up vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
>> 10.1.17.15 is being updated before previous update had finished (up ->
>> down) - setting to state=down vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>> 10.1.17.15 is being updated before previous update had finished (down ->
>> up) - setting to state=up vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>> 10.1.17.15 is being updated before previous update had finished (up ->
>> down) - setting to state=down vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>> 10.1.17.15 is being updated before previous update had finished (down ->
>> up) - setting to state=up vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>> 10.1.17.15 is being updated before previous update had finished (up ->
>> down) - setting to state=down vhosts=100
>> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
>> 10.1.17.15 is being updated before previous update had finished (down ->
>> up) - setting to state=up vhosts=100
>>
>> This was on every director box and the status of all of the directors in
>> 'doveadm director ring status' was 'handshaking'.
>>
>> Here's a sample packet between directors:
>>
>> 19:51:23.552280 IP 10.1.20.10.56670 > 10.1.20.1.9090: Flags [P.], seq
>> 4147:5128, ack 0, win 0, options [nop,nop,TS val 1373505883 ecr
>> 1721203906], length 981
>>
>> Q.  [f.|.HOST   10.1.20.10  90901006732 10.1.17.15
>>  100 D1491260800
>> HOST10.1.20.10  90901006733 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006734 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006735 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006736 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006737 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006738 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006739 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006740 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006741 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006742 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006743 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006744 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006745 10.1.17.15  100
>> U1491260800
>> HOST10.1.20.10  90901006746 10.1.17.15  100
>> D1491260800
>> HOST10.1.20.10  90901006747 10.1.17.15  100
>> U1491260800
>> SYNC10.1.20.10  90901011840 7   1491263483  3377546382
>>
>> I'm guessing that D1491260800 is the user hash (with D for down), and the
>> U version is for 'up'.
>>
>> I'm happy to provide the full tcpdump (and/or doveconf -a), though the
>> tcpdump is basically all identical the one I pasted (same hash, same host).
>>
>> This seems pretty fragile. There shou

Re: IMAP hibernate and scalability in general

2017-04-10 Thread Mark Moseley
On Thu, Apr 6, 2017 at 9:22 PM, Christian Balzer <ch...@gol.com> wrote:

>
> Hello,
>
> On Thu, 6 Apr 2017 22:13:07 +0300 Timo Sirainen wrote:
>
> > On 6 Apr 2017, at 21.14, Mark Moseley <moseleym...@gmail.com> wrote:
> > >
> > >>
> > >> imap-hibernate processes are similar to imap-login processes in that
> they
> > >> should be able to handle thousands or even tens of thousands of
> connections
> > >> per process.
> > >>
> > >
> > > TL;DR: In a director/proxy setup, what's a good client_limit for
> > > imap-login/pop3-login?
> >
> > You should have the same number of imap-login processes as the number of
> CPU cores, so they can use all the available CPU without doing unnecessary
> context switches. The client_limit is then large enough to handle all the
> concurrent connections you need, but not so large that it would bring down
> the whole system if it actually happens.
> >
> Also keep in mind that pop3 login processes deal with rather ephemeral
> events, unlike IMAP with IDLE sessions lasting months.
> So they're unlikely to grow beyond their initial numbers even with a small
> (few hundreds) client_limit.
>
> On the actual mailbox servers, either login processes tend to use about
> 1% of one core, very lightweight.
>
> > > Would the same apply for imap-login when it's being used in proxy
> mode? I'm
> > > moving us to a director setup (cf. my other email about director rings
> > > getting wedged from a couple days ago) and, again, for the sake of
> starting
> > > conservatively, I've got imap-login set to a client limit of 20, since
> I
> > > figure that proxying is a lot more work than just doing IMAP logins.
> I'm
> > > doing auth to mysql at both stages (at the proxy level and at the
> backend
> > > level).
> >
> > Proxying isn't doing any disk IO or any other blocking operations.
> There's no benefit to having more processes. The only theoretical advantage
> would be if some client could trigger a lot of CPU work and cause delays to
> handling other clients, but I don't think that's possible (unless somehow
> via OpenSSL but I'd guess that would be a bug in it then).
> >
> Indeed, in proxy mode you can go nuts, here I see pop3-logins being
> busier, but still just 2-5% of a core as opposed to typically 1-2% for
> imap-logins.
> That's with 500 pop3 sessions at any given time and 70k IMAP sessions per
> node.
> Or in other words, less than 1 core total typically.
>
> > > Should I be able to handle a much higher client_limit for imap-login
> and
> > > pop3-login than 20?
> >
> > Yeah.
> >
> The above is with a 4k client_limit, I'm definitely going to crank that up
> to 16k when the opportunity arise (quite disruptive on a proxy...).



Timo, any sense on where (if any) the point is where there are so many
connections on a given login process that it would get too busy to keep up?
I.e. where the sheer amount of stuff the login process has to do outweighs
the CPU savings of not having to context switch so much?

I realize that's a terribly subjective question, so perhaps you might have
a guess in very very round numbers? Given a typical IMAP userbase
(moderately busy, most people sitting in IDLE, etc), I would've thought 10k
connections on a single process would've been past that tipping point.

With the understood caveat of being totally subjective and dependent on
local conditions, should 20k be ok? 50k? 100k?

Maybe a better question is, is there anywhere in the login process where it's
possible to block? If not, I'd figure that a login process that isn't using
up 100% of a core can be assumed to *not* be falling behind. Does that seem
accurate?
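
For reference, the "one login process per core, big client_limit" advice above
comes down to something like this (numbers are illustrative for a 4-core proxy,
not from this thread's config):

  service imap-login {
    # high-performance mode: keep processes around instead of forking per login
    service_count = 0
    # one long-lived login process per CPU core
    process_min_avail = 4
    # each process handles thousands of proxied connections
    client_limit = 10000
  }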


Re: Host ... is being updated before previous update had finished

2017-04-07 Thread Mark Moseley
On Mon, Apr 3, 2017 at 6:04 PM, Mark Moseley <moseleym...@gmail.com> wrote:

> We just had a bunch of backend boxes go down due to a DDoS in our director
> cluster. When the DDoS died down, our director ring was a mess.
>
> Each box had thousands (and hundreds per second, which is a bit much) of
> log lines like the following:
>
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
>
> This was on every director box and the status of all of the directors in
> 'doveadm director ring status' was 'handshaking'.
>
> Here's a sample packet between directors:
>
> 19:51:23.552280 IP 10.1.20.10.56670 > 10.1.20.1.9090: Flags [P.], seq
> 4147:5128, ack 0, win 0, options [nop,nop,TS val 1373505883 ecr
> 1721203906], length 981
>
> Q.  [f.|.HOST   10.1.20.10  90901006732 10.1.17.15
>  100 D1491260800
> HOST10.1.20.10  90901006733 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006734 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006735 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006736 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006737 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006738 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006739 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006740 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006741 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006742 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006743 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006744 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006745 10.1.17.15  100
> U1491260800
> HOST10.1.20.10  90901006746 10.1.17.15  100
> D1491260800
> HOST10.1.20.10  90901006747 10.1.17.15  100
> U1491260800
> SYNC10.1.20.10  90901011840 7   1491263483  3377546382
>
> I'm guessing that D1491260800 is the user hash (with D for down), and the
> U version is for 'up'.
>
> I'm happy to provide the full tcpdump (and/or doveconf -a), though the
> tcpdump is basically all identical the one I pasted (same hash, same host).
>
> This seems pretty fragile. There should be some sort of tie break for
> that, instead of bringing the entire cluster to its knees. Or just drop the
> backend host completely. Or something, anything besides hosing things
> pretty badly.
>
> This is 2.2.27, on both the directors and backend. If the answer is
> upgrade to 2.2.28, then I'll upgrade immediately. I see commit
> a9ade104616bbb81c34cc6f8bfde5dab0571afac mentions the same error but the
> commit predates 2.2.27 by a month and a half.
>
> In the meantime, is there a

Re: IMAP hibernate and scalability in general

2017-04-06 Thread Mark Moseley
On Thu, Apr 6, 2017 at 3:10 AM, Timo Sirainen  wrote:

> On 6 Apr 2017, at 9.56, Christian Balzer  wrote:
> >
> >> For no particular reason besides wanting to start conservatively, we've
> got
> >> client_limit set to 50 on the hibernate procs (with 1100 total
> hibernated
> >> connections on the box I'm looking at). At only a little over a meg
> each,
> >> I'm fine with those extra processes.
> >>
> > Yeah, but 50 would be a tad too conservative for our purposes here.
> > I'll keep an eye on it and see how it goes, first checkpoint would be at
> > 1k hibernated sessions. ^_^
>
> imap-hibernate processes are similar to imap-login processes in that they
> should be able to handle thousands or even tens of thousands of connections
> per process.
>

TL;DR: In a director/proxy setup, what's a good client_limit for
imap-login/pop3-login?

Would the same apply for imap-login when it's being used in proxy mode? I'm
moving us to a director setup (cf. my other email about director rings
getting wedged from a couple days ago) and, again, for the sake of starting
conservatively, I've got imap-login set to a client limit of 20, since I
figure that proxying is a lot more work than just doing IMAP logins. I'm
doing auth to mysql at both stages (at the proxy level and at the backend
level).

On a sample director box, I've got 1 imap connections, varying from
50mbit/sec to the backends up to 200mbit/sec. About a third of the
connections are TLS, if that makes a diff. That's pretty normal from what
I've seen. The director servers are usually 90-95% idle.

Should I be able to handle a much higher client_limit for imap-login and
pop3-login than 20?


Re: IMAP hibernate and scalability in general

2017-04-06 Thread Mark Moseley
We've been using hibernate for about half a year with no ill effects. There
were various logged errors in earlier versions of dovecot (almost always when
transitioning from hibernate back to regular imap), but even with those, we
never heard of a customer-visible problem; presumably the mail client just
reconnected silently.

For no particular reason besides wanting to start conservatively, we've got
client_limit set to 50 on the hibernate procs (with 1100 total hibernated
connections on the box I'm looking at). At only a little over a meg each,
I'm fine with those extra processes.
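
Since the socket permissions were the fiddly part for us, here is roughly what
the moving pieces look like (a sketch only; the timeout, mode and group values
are placeholders, so verify the ownership details for your own install):

  imap_hibernate_timeout = 30s

  service imap-hibernate {
    # the conservative per-process limit mentioned above
    client_limit = 50
    unix_listener imap-hibernate {
      # imap processes (running as the mail user) must be able to connect here
      mode = 0660
      group = dovecot
    }
  }

  service imap {
    unix_listener imap-master {
      # imap-hibernate connects back through this to un-hibernate clients
      user = $default_internal_user
    }
  }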

On Wed, Apr 5, 2017 at 11:17 PM, Aki Tuomi  wrote:

>
>
> On 06.04.2017 06:15, Christian Balzer wrote:
> > Hello,
> >
> > as some may remember, we're running very dense IMAP cluster here, in
> > excess of 50k IMAP sessions per node (current record holder is 68k,
> design
> > is for 200k+).
> >
> > The first issue we ran into was that the dovecot master process (which is
> > single-threaded and thus a bottleneck) was approaching 100% CPU usage (aka
> > using a full core) when trying to spawn off new IMAP processes.
> >
> > This was rectified by giving IMAP a service count of 200 to create a pool
> > of "idling" processes eventually, reducing the strain for the master
> > process dramatically. That of course required generous cranking up
> > ulimits, FDs in particular.
> >
> > The next issues of course is (as mentioned before) the memory usage of
> all
> > those IMAP processes and the fact that quite a few things outside of
> > dovecote (ps, etc) tend to get quite sedate when dealing with tens of
> > thousands of processes.
> >
> > We just started to deploy a new mailbox cluster pair with 2.2.27 and
> > having IMAP hibernate configured.
> > Getting this work is a PITA though with regards to ownership and access
> > rights to the various sockets, this part could definitely do with some
> > better (I know, difficult) defaults or at least more documentation (none
> > besides the source and this ML).
> >
> > Initial results are very promising, depending on what your clients are
> > doing (are they well behaved, are your users constantly looking at other
> > folders, etc) the vast majority of IDLE processes will be in hibernated
> > at any given time and thus not only using a fraction of the RAM otherwise
> > needed but also freeing up process space.
> >
> > Real life example:
> > 240 users, 86 imap processes (80% of those not IDLE) and:
> > dovecot   119157  0.0  0.0  10452  3236 ?SApr01   0:21
> dovecot/imap-hibernate [237 connections]
> > That's 237 hibernated connections and thus less processes than otherwise.
> >
> >
> > I assume, given the silence on the ML, that we are going to be the
> > first hibernate users where the term "large scale" does apply.
> > Despite that I have some questions, clarifications/confirmations:
> >
> > Our current default_client_limit is 16k, which can be seen by having 5
> > config processes on our 65k+ session node. ^_-
> >
> > This would also apply to imap-hibernate, one wonders if that's fine
> > (config certainly has no issues) or if something smaller would be
> > appropriate here?
> >
> > Since we have idling IMAP processes around most of the time, the strain
> of
> > re-spawning proper processes from imap-hibernate should be just as
> reduced
> > as for the dovecot master, correct?
> >
> >
> > I'll keep reporting our experiences here, that is if something blows up
> > spectacularly. ^o^
> >
> > Christian
>
> Hi!
>
> We have customers using it in larger deployments. A good idea is to have
> as much of your clients hibernating as possible, as the hibernation
> process is much smaller than actual IMAP process.
>
> You should probably also look at reusing the processes, as this will
> probably help your performance,
> https://wiki.dovecot.org/PerformanceTuning and
> https://wiki.dovecot.org/LoginProcess are probably a good starting
> point, although I suspect you've read these already.
>
> If you are running a dense server, cranking up various limits is rather
> expected.
>
> Aki
>


Host ... is being updated before previous update had finished

2017-04-03 Thread Mark Moseley
We just had a bunch of backend boxes go down due to a DDoS in our director
cluster. When the DDoS died down, our director ring was a mess.

Each box had thousands (and hundreds per second, which is a bit much) of
log lines like the following:

Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
10.1.17.15 is being updated before previous update had finished (up ->
down) - setting to state=down vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
10.1.17.15 is being updated before previous update had finished (down ->
up) - setting to state=up vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
10.1.17.15 is being updated before previous update had finished (up ->
down) - setting to state=down vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
10.1.17.15 is being updated before previous update had finished (down ->
up) - setting to state=up vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
10.1.17.15 is being updated before previous update had finished (up ->
down) - setting to state=down vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
10.1.17.15 is being updated before previous update had finished (down ->
up) - setting to state=up vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
10.1.17.15 is being updated before previous update had finished (up ->
down) - setting to state=down vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
10.1.17.15 is being updated before previous update had finished (down ->
up) - setting to state=up vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
10.1.17.15 is being updated before previous update had finished (up ->
down) - setting to state=down vhosts=100
Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
10.1.17.15 is being updated before previous update had finished (down ->
up) - setting to state=up vhosts=100

This was on every director box and the status of all of the directors in
'doveadm director ring status' was 'handshaking'.

Here's a sample packet between directors:

19:51:23.552280 IP 10.1.20.10.56670 > 10.1.20.1.9090: Flags [P.], seq
4147:5128, ack 0, win 0, options [nop,nop,TS val 1373505883 ecr
1721203906], length 981

Q.  [f.|.HOST   10.1.20.10  90901006732 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006733 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006734 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006735 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006736 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006737 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006738 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006739 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006740 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006741 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006742 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006743 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006744 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006745 10.1.17.15  100
U1491260800
HOST10.1.20.10  90901006746 10.1.17.15  100
D1491260800
HOST10.1.20.10  90901006747 10.1.17.15  100
U1491260800
SYNC10.1.20.10  90901011840 7   1491263483  3377546382

I'm guessing that D1491260800 is the user hash (with D for down), and the U
version is for 'up'.

I'm happy to provide the full tcpdump (and/or doveconf -a), though the
tcpdump is basically all identical the one I pasted (same hash, same host).

This seems pretty fragile. There should be some sort of tie break for that,
instead of bringing the entire cluster to its knees. Or just drop the
backend host completely. Or something, anything besides hosing things
pretty badly.

This is 2.2.27, on both the directors and backend. If the answer is upgrade
to 2.2.28, then I'll upgrade immediately. I see
commit a9ade104616bbb81c34cc6f8bfde5dab0571afac mentions the same error but
the commit predates 2.2.27 by a month and a half.

In the meantime, is there any doveadm command I could've done to fix this?
I tried removing the host (doveadm director remove 10.1.17.15) but that
didn't do anything. I didn't think to try to flush the mapping for that
user till just now. I suspect that with the ring unsync'd, flushing the
user wouldn't have helped.

The only remedy was to kill dovecot on every box in the director cluster
and then (with dovecot down on *all* of them) start dovecot back up.
Restarting each director's dovecot (with other directors' dovecots still
running) did nothing. Only by bringing the entire cluster down did dovecot
stop 

"Connection queue full" error

2017-03-23 Thread Mark Moseley
Just a quickie: why is "Connection queue full" logged under Info, instead
of something like error? Or at least have the word 'error' in it?

Seems like a pretty error-ish thing to happen. Anything that causes the
connection to fail from the server side should show up in a grep -i for
error. I.e. I don't care about clients failing to match up SSL cipher
suites; that's fine as Info (SSL errors ironically do have 'error' in them,
though I assume that's coming from the ssl libs).

But the server dropping connections due to running out of available daemons
(and any other "server isn't working right" conditions) is definitely
worthy of Error.


Re: Director+NFS Experiences

2017-02-24 Thread Mark Moseley
On Fri, Feb 24, 2017 at 11:41 AM, Francisco Wagner C. Freire <
wgrcu...@gmail.com> wrote:

> In our experience, a ring with more than 4 servers is bad; we had sync
> problems everywhere. Using 4 or fewer works perfectly.
>
> Em 24 de fev de 2017 4:30 PM, "Mark Moseley" <moseleym...@gmail.com>
> escreveu:
>
>> >
>> > On Thu, Feb 23, 2017 at 3:15 PM, Timo Sirainen <t...@iki.fi> wrote:
>> >
>> >> On 24 Feb 2017, at 0.08, Mark Moseley <moseleym...@gmail.com> wrote:
>> >> >
>> >> > As someone who is about to begin the process of moving from maildir
>> to
>> >> > mdbox on NFS (and therefore just about to start the
>> 'director-ization'
>> >> of
>> >> > everything) for ~6.5m mailboxes, I'm curious if anyone can share any
>> >> > experiences with it. The list is surprisingly quiet about this
>> subject,
>> >> and
>> >> > articles on google are mainly just about setting director up. I've
>> yet
>> >> to
>> >> > stumble across an article about someone's experiences with it.
>> >> >
>> >> > * How big of a director cluster do you use? I'm going to have
>> millions
>> >> of
>> >> > mailboxes behind 10 directors.
>> >>
>> >> I wouldn't use more than 10.
>> >>
>> >>
>> > Cool
>>
>
Interesting. That's good feedback. One of the things I wondered about is
whether it'd be better to deploy a 10-node ring or split it into 2x 5-node
rings. Sounds like splitting it up might not be a bad idea. How often would
you see those sync problems (and were they the same errors as I posted or
something else)? And were you running poolmon from every node when you were
seeing sync errors?
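
For reference, what a poolmon-style health checker effectively does is just a
few doveadm director calls (host and vhost count are placeholders; run against
any one director and the ring propagates it):

  # backend failed its health check: stop assigning new users to it
  doveadm director remove 10.1.17.15
  # when it comes back, put it back in rotation with vhost count 100
  doveadm director add 10.1.17.15 100
  # if users need to be forcibly rehashed off a still-listed host instead:
  doveadm director flush 10.1.17.15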


Re: Director+NFS Experiences

2017-02-24 Thread Mark Moseley
>
> On Thu, Feb 23, 2017 at 3:15 PM, Timo Sirainen <t...@iki.fi> wrote:
>
>> On 24 Feb 2017, at 0.08, Mark Moseley <moseleym...@gmail.com> wrote:
>> >
>> > As someone who is about to begin the process of moving from maildir to
>> > mdbox on NFS (and therefore just about to start the 'director-ization'
>> of
>> > everything) for ~6.5m mailboxes, I'm curious if anyone can share any
>> > experiences with it. The list is surprisingly quiet about this subject,
>> and
>> > articles on google are mainly just about setting director up. I've yet
>> to
>> > stumble across an article about someone's experiences with it.
>> >
>> > * How big of a director cluster do you use? I'm going to have millions
>> of
>> > mailboxes behind 10 directors.
>>
>> I wouldn't use more than 10.
>>
>>
> Cool
>
>
>
>> > I'm guessing that's plenty. It's actually split over two datacenters.
>>
>> Two datacenters in the same director ring? This is dangerous. If there's
>> a network connectivity problem between them, they split into two separate
>> rings and start redirecting users to different backends.
>>
>
> I was unclear. The two director rings are unrelated and won't ever need to
> talk to each other. I only mentioned the two rings to point out that all
> 6.5m mailboxes weren't behind one ring, but rather split between two
>
>
>
>>
>> > * Do you have consistent hashing turned on? I can't think of any reason
>> not
>> > to have it turned on, but who knows
>>
>> Definitely turn it on. The setting only exists because of backwards
>> compatibility and will be removed at some point.
>>
>>
> Out of curiosity (and possibly extremely naive), unless you've moved a
> mailbox via 'doveadm director', if someone is pointed to a box via
> consistent hashing, why would the directors need to share that mailbox
> mapping? Again, assuming they're not moved (I'm also assuming that the
> mailbox would always, by default, hash to the same value in the consistent
> hash), isn't their hashing all that's needed to get to the right backend?
> I.e. "I know what the mailbox hashes to, and I know what backend that hash
> points at, so I'm done", in which case, no need to communicate to the other
> directors. I could see that if you moved someone, it *would* need to
> communicate that mapping. Then the only maps traded by directors would be
> the consistent hash boundaries *plus* any "moved" mailboxes. Again, just
> curious.
>
>
Timo,
Incidentally, on that error I posted:

Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
10.1.17.3 is being updated before previous update had finished (up -> down)
- setting to state=down vhosts=0
Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
10.1.17.3 is being updated before previous update had finished (down -> up)
- setting to state=up vhosts=0

any idea what would cause that? Is my guess that multiple directors tried
to update the status simultaneously correct?


Re: Director+NFS Experiences

2017-02-24 Thread Mark Moseley
On Thu, Feb 23, 2017 at 3:45 PM, Zhang Huangbin <z...@iredmail.org> wrote:

>
> > On Feb 24, 2017, at 6:08 AM, Mark Moseley <moseleym...@gmail.com> wrote:
> >
> > * Do you use the perl poolmon script or something else? The perl script
> was
> > being weird for me, so I rewrote it in python but it basically does the
> > exact same things.
>
> Would you mind sharing it? :)
>
> 
> Zhang Huangbin, founder of iRedMail project: http://www.iredmail.org/
> Time zone: GMT+8 (China/Beijing).
> Available on Telegram: https://t.me/iredmail
>
>

Attached. No claims are made on the quality of my code :)


poolmon
Description: Binary data


Re: Director+NFS Experiences

2017-02-23 Thread Mark Moseley
On Thu, Feb 23, 2017 at 3:15 PM, Timo Sirainen <t...@iki.fi> wrote:

> On 24 Feb 2017, at 0.08, Mark Moseley <moseleym...@gmail.com> wrote:
> >
> > As someone who is about to begin the process of moving from maildir to
> > mdbox on NFS (and therefore just about to start the 'director-ization' of
> > everything) for ~6.5m mailboxes, I'm curious if anyone can share any
> > experiences with it. The list is surprisingly quiet about this subject,
> and
> > articles on google are mainly just about setting director up. I've yet to
> > stumble across an article about someone's experiences with it.
> >
> > * How big of a director cluster do you use? I'm going to have millions of
> > mailboxes behind 10 directors.
>
> I wouldn't use more than 10.
>
>
Cool



> > I'm guessing that's plenty. It's actually split over two datacenters.
>
> Two datacenters in the same director ring? This is dangerous. If there's a
> network connectivity problem between them, they split into two separate
> rings and start redirecting users to different backends.
>

I was unclear. The two director rings are unrelated and won't ever need to
talk to each other. I only mentioned the two rings to point out that all
6.5m mailboxes weren't behind one ring, but rather split between two



>
> > * Do you have consistent hashing turned on? I can't think of any reason
> not
> > to have it turned on, but who knows
>
> Definitely turn it on. The setting only exists because of backwards
> compatibility and will be removed at some point.
>
>
Out of curiosity (and possibly extremely naive), unless you've moved a
mailbox via 'doveadm director', if someone is pointed to a box via
consistent hashing, why would the directors need to share that mailbox
mapping? Again, assuming they're not moved (I'm also assuming that the
mailbox would always, by default, hash to the same value in the consistent
hash), isn't their hashing all that's needed to get to the right backend?
I.e. "I know what the mailbox hashes to, and I know what backend that hash
points at, so I'm done", in which case, no need to communicate to the other
directors. I could see that if you moved someone, it *would* need to
communicate that mapping. Then the only maps traded by directors would be
the consistent hash boundaries *plus* any "moved" mailboxes. Again, just
curious.
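
For anyone building this out from the thread: the director/proxy side is only a
handful of settings, roughly like the sketch below (the IPs mirror the examples
in this thread but are placeholders, and the static passdb is just one way of
marking everything as proxied):

  director_servers = 10.1.20.1 10.1.20.2 10.1.20.3
  director_mail_servers = 10.1.17.1-10.1.17.50
  director_consistent_hashing = yes

  service director {
    unix_listener login/director {
      mode = 0666
    }
    inet_listener {
      port = 9090
    }
  }
  service imap-login {
    # make the login processes ask the director where to proxy each user
    executable = imap-login director
  }
  passdb {
    driver = static
    args = proxy=y nopassword=y
  }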


Director+NFS Experiences

2017-02-23 Thread Mark Moseley
As someone who is about to begin the process of moving from maildir to
mdbox on NFS (and therefore just about to start the 'director-ization' of
everything) for ~6.5m mailboxes, I'm curious if anyone can share any
experiences with it. The list is surprisingly quiet about this subject, and
articles on google are mainly just about setting director up. I've yet to
stumble across an article about someone's experiences with it.

* How big of a director cluster do you use? I'm going to have millions of
mailboxes behind 10 directors. I'm guessing that's plenty. It's actually
split over two datacenters. In the larger, we've got about 200k connections
currently, so in a perfectly-balanced world, each director would have 20k
connections on it. I'm guessing that's child's play. Any good rule of thumb
for ratio of 'backend servers::director servers'? In my larger DC, it's
about 5::1.

* Do you use the perl poolmon script or something else? The perl script was
being weird for me, so I rewrote it in python but it basically does the
exact same things.

* Seen any issues with director? In testing, I managed to wedge things by
having my poolmon script running on all the cluster boxes (I think). I've
since rewritten it to run *only* on the lowest-numbered director. When it
wedged, I had piles (read: hundreds per second) of log entries that said:

Feb 12 06:25:03 director: Warning: director(10.1.20.5:9090/right): Host
10.1.17.3 is being updated before previous update had finished (down -> up)
- setting to state=up vhosts=0
Feb 12 06:25:03 director: Warning: director(10.1.20.5:9090/right): Host
10.1.17.3 is being updated before previous update had finished (up -> down)
- setting to state=down vhosts=0
Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
10.1.17.3 is being updated before previous update had finished (down -> up)
- setting to state=up vhosts=0
Feb 12 06:25:03 director: Warning: director(10.1.20.3:9090/left): Host
10.1.17.3 is being updated before previous update had finished (up -> down)
- setting to state=down vhosts=0

Because it was in testing, I didn't notice it and it was like this for
several days till dovecot was restarted on all the director nodes. I'm not
100% on what happened, but my *guess* is that two boxes tried to update the
status of the same backend server in rapid succession.

* Assuming you're using NFS, do you still see non-trivial amounts of
indexes getting corrupted?

* Again, assuming NFS and assuming at least some corrupted indexes, what's
your guess for success rate % for dovecot recovering them automatically?
And how about success rate % for ones that dovecot wasn't able to do
automatically but you had to use doveadm to repair it? Really what I'm
trying to figure out is 1) how often sysops will need to manually recover
indexes; and 2) how often admins *can't* manually recover indexes?

* if you have unrecoverable indexes (and assuming you have snapshots on
your NFS server), does grabbing the most recent indexes from the snapshots
always work for recovery (obviously, up till the point that the snapshot
was taken)?

* Any gotchas you've seen anywhere in a director-fied stack? I realize
that's a broad question :)

* Does one of your director nodes going down cause any issues? E.g. issues
with the left and right nodes syncing with each other? Or when the director
node comes back up?

* Does a backend node going down cause a storm of reconnects? In the time
between deploying director and getting mailboxes converted to mdbox,
reconnects for us will mean cold local-disk dovecot caches. But hopefully
consistent hashing helps with that?

* Do you have consistent hashing turned on? I can't think of any reason not
to have it turned on, but who knows

* Any other configuration knobs (including sysctl) that you needed to futz
with, vs the default?

I appreciate any feedback!


Re: Implementing secondary quota w/ "Archive" namespace

2016-12-01 Thread Mark Moseley
On Thu, Dec 1, 2016 at 4:37 AM, Timo Sirainen <t...@iki.fi> wrote:

> On 1 Dec 2016, at 2.22, Mark Moseley <moseleym...@gmail.com> wrote:
>
>
> How about this updated patch?
>
>
> Nope, still lets me move messages into the over-quota namespace.
>
> Both these are true in quota_check:
>
> ctx->moving
> quota_move_requires_check
>
> ..
>
> Anything else I can try? I'm not sure how the logic in the quota system
> works, so I'm not sure what to suggest. What's the gist of the patch (i.e.
> what's it trying to do that it wasn't before)?
>
> If I can get a handle on that, I can start littering things with debug
> statements to try to track stuff down.
>
>
> I just messed up the if-check. This one is now committed and should work:
> https://github.com/dovecot/core/commit/2ec4ab6f5a1172e86afc72c0f29f47
> 0d6fd2bd9a.diff
>
>

That looks good. When I apply it, I get:

quota-storage.c: In function ‘quota_save_finish’:
quota-storage.c:337:15: error: ‘struct mail_save_context’ has no member
named ‘copy_src_mail’
quota-storage.c:337:51: error: ‘struct mail_save_context’ has no member
named ‘copy_src_mail’
make[4]: *** [quota-storage.lo] Error 1

But if I then also apply the previous patch you gave, even though it fails in a
number of hunks:

# patch -p1 < ~moseley/diff2
(Stripping trailing CRs from patch.)
patching file src/lib-storage/mail-storage-private.h
(Stripping trailing CRs from patch.)
patching file src/lib-storage/mail-storage.c
Hunk #1 succeeded at 2238 (offset -20 lines).
Hunk #2 succeeded at 2255 (offset -20 lines).
(Stripping trailing CRs from patch.)
patching file src/plugins/quota/quota-storage.c
Hunk #1 FAILED at 185.
Hunk #2 FAILED at 242.
Hunk #3 FAILED at 297.
3 out of 3 hunks FAILED -- saving rejects to file
src/plugins/quota/quota-storage.c.rej

BUT, it then compiles.

I haven't tested it extensively, but with this latest patch, when I try to
move mail to the over-quota Archive mailbox, it correctly fails! Awesome!


Re: Implementing secondary quota w/ "Archive" namespace

2016-11-30 Thread Mark Moseley
On Thu, Nov 24, 2016 at 9:10 PM, Mark Moseley <moseleym...@gmail.com> wrote:

> On Thu, Nov 24, 2016 at 10:52 AM, Timo Sirainen <t...@iki.fi> wrote:
>
>> On 24 Nov 2016, at 9.33, Mark Moseley <moseleym...@gmail.com> wrote:
>> >
>> > On Wed, Nov 23, 2016 at 6:05 PM, Timo Sirainen <t...@iki.fi> wrote:
>> >
>> >> On 23 Nov 2016, at 0.49, Mark Moseley <moseleym...@gmail.com> wrote:
>> >>>
>> >>> If I move messages between namespaces, it appears to ignore the quotas
>> >> I've
>> >>> set on them. A *copy* will trigger the quota error. But a *move* just
>> >>> happily piles on to the overquota namespace. Is that normal?
>> >>
>> >> Probably needs a bit more thinking, but I guess the attached patch
>> would
>> >> help.
>> >>
>> >>
>> > I appreciate the patch! Esp on a Weds night. I applied and rerolled
>> > dovecot, but I can still move messages into the over-quota namespace.
>>
>> How about this updated patch?
>>
>>
> Nope, still lets me move messages into the over-quota namespace.
>
> Both these are true in quota_check:
>
> ctx->moving
> quota_move_requires_check
>
>
>
>
>> > Out of curiosity, in the Quota wiki page, it mentions that 'in theory
>> there
>> > could be e.g. "user quota" and "domain quota" roots'. That's also super
>> > interesting to me. Does anyone have any experience with that? I.e. any
>> > gotchas?
>>
>>
>> There's no automatic quota recalculation for domain quotas, because it
>> would have to somehow sum up all the users' quotas. Also I think that it
>> still does do the automatic quota recalculation if it gets into a situation
>> where it realizes that quotas are wrong, but it'll then just use the single
>> user's quota as the entire domain quota. So maybe it would work if you
>> externally sum up all the users' quotas and update it to the domain quota
>> in cronjob, e.g. once per hour. I guess it would be also nice if the
>> internal quota recalculation could be disabled and maybe execute an
>> external script to do it (similar to quota-warnings).
>>
>>

Anything else I can try? I'm not sure how the logic in the quota system
works, so I'm not sure what to suggest. What's the gist of the patch (i.e.
what's it trying to do that it wasn't before)?

If I can get a handle on that, I can start littering things with debug
statements to try to track stuff down.


Re: Implementing secondary quota w/ "Archive" namespace

2016-11-24 Thread Mark Moseley
On Thu, Nov 24, 2016 at 10:52 AM, Timo Sirainen <t...@iki.fi> wrote:

> On 24 Nov 2016, at 9.33, Mark Moseley <moseleym...@gmail.com> wrote:
> >
> > On Wed, Nov 23, 2016 at 6:05 PM, Timo Sirainen <t...@iki.fi> wrote:
> >
> >> On 23 Nov 2016, at 0.49, Mark Moseley <moseleym...@gmail.com> wrote:
> >>>
> >>> If I move messages between namespaces, it appears to ignore the quotas
> >> I've
> >>> set on them. A *copy* will trigger the quota error. But a *move* just
> >>> happily piles on to the overquota namespace. Is that normal?
> >>
> >> Probably needs a bit more thinking, but I guess the attached patch would
> >> help.
> >>
> >>
> > I appreciate the patch! Esp on a Weds night. I applied and rerolled
> > dovecot, but I can still move messages into the over-quota namespace.
>
> How about this updated patch?
>
>
Nope, still lets me move messages into the over-quota namespace.

Both these are true in quota_check:

ctx->moving
quota_move_requires_check




> > Out of curiosity, in the Quota wiki page, it mentions that 'in theory
> there
> > could be e.g. "user quota" and "domain quota" roots'. That's also super
> > interesting to me. Does anyone have any experience with that? I.e. any
> > gotchas?
>
>
> There's no automatic quota recalculation for domain quotas, because it
> would have to somehow sum up all the users' quotas. Also I think that it
> still does do the automatic quota recalculation if it gets into a situation
> where it realizes that quotas are wrong, but it'll then just use the single
> user's quota as the entire domain quota. So maybe it would work if you
> externally sum up all the users' quotas and update it to the domain quota
> in cronjob, e.g. once per hour. I guess it would be also nice if the
> internal quota recalculation could be disabled and maybe execute an
> external script to do it (similar to quota-warnings).
>
>
>
>
>
>


Re: Implementing secondary quota w/ "Archive" namespace

2016-11-23 Thread Mark Moseley
On Wed, Nov 23, 2016 at 6:05 PM, Timo Sirainen <t...@iki.fi> wrote:

> On 23 Nov 2016, at 0.49, Mark Moseley <moseleym...@gmail.com> wrote:
> >
> > If I move messages between namespaces, it appears to ignore the quotas
> I've
> > set on them. A *copy* will trigger the quota error. But a *move* just
> > happily piles on to the overquota namespace. Is that normal?
>
> Probably needs a bit more thinking, but I guess the attached patch would
> help.
>
>
I appreciate the patch! Esp on a Weds night. I applied and rerolled
dovecot, but I can still move messages into the over-quota namespace.

I threw some i_debug's into quota_roots_equal()  (and one right at the
top), but I don't ever see them in the debug logs. But both "ctx->moving"
and "src_box == NULL" are true, so it never calls quota_roots_equal anyway
in that patched 'if' clause in quota_check. I threw the following into
quota_check and it printed to the debug log for both if's:

if (ctx->moving ) i_debug("quota: quota_check: YES to ctx->moving"
);
if (src_box == NULL) i_debug("quota: quota_check: YES to src_box ==
NULL" );


Out of curiosity, in the Quota wiki page, it mentions that 'in theory there
could be e.g. "user quota" and "domain quota" roots'. That's also super
interesting to me. Does anyone have any experience with that? I.e. any
gotchas?


Re: Implementing secondary quota w/ "Archive" namespace

2016-11-22 Thread Mark Moseley
On Mon, Nov 21, 2016 at 6:20 PM, Fred Turner  wrote:

> Yeah, I gradually figured out it wouldn't work yesterday when delving back
> into this and testing. No separate quotas per namespaces until 2.1 or
> something, I think?
>
> So, got any suggestions on getting it to work with v2.x? I found an old
> thread from 2013 by Andreas (I think?) and he didn't seem to quite be able
> to get it to work. Actually, though, I'd be happy to even be able to apply
> a quota to the primary Inbox namespace and none to the secondary "Archive"
> namespace, but my testing on a 10.10 Server wasn't having much success
> either.
>
> Thanks for the responses and input!
> Fred
>
> > On Nov 21, 2016, at 17:53, Timo Sirainen  wrote:
> >
> >> On 20 Sep 2016, at 21.28, Fred Turner  wrote:
> >>
> >> Mac Pro Server 2012
> >> Mac OS X Server 10.6.8
> >> Dovecot 1.1.20apple0.5
> >
> > That's an old one..
> >
> >> quota = maildir:User quota:ns=
> >>
> >> quota2 = maildir:ns=testArchive/
> >> quota2_rule = *:storage=20G
> >>
> >> The first line is already in the default config, with the exception of
> the added “:ns=“ at the end. The 2nd line in the examples I saw had a
> middle component w/ the quota name, but when I tried that, like so:
> >>
> >> quota2 = maildir:Archive quota:ns=testArchive/
> >>
> >> my server fails and shows this in the logs:
> >>
> >>> Fatal: IMAP(*): Quota root test backend maildir: Unknown parameter:
> ns=testArchive/
> >>
> >>
> >> Any idea why it doesn’t like that? Also, do I need to add a quota_rule
> for the primary quota? It does not have one normally in the Mac OS X Server
> config…
> >
> > You're trying to use Dovecot v2.x configuration in Dovecot v1.x. Sorry,
> won't work without upgrade.
>


So I've been playing with this and I mostly have things working. It's
2.2.26.0, btw. In all the below, both namespaces are working and I can
copy/move messages back and forth between them.

One thing that I've not figured out yet (though I'm sure I'm just missing
something scouring the docs):

If I move messages between namespaces, it appears to ignore the quotas I've
set on them. A *copy* will trigger the quota error. But a *move* just
happily piles on to the overquota namespace. Is that normal?

E.g., here's the maildirsize from the 'archive' namespace (with quotas set
absurdly low for testing) and I just moved some messages into it from INBOX:

2S,10C
32252 31
2809 1

and it'll just keep tacking on. As you can see it's over on bytes and # of
messages. But it will successfully block a copy. This behavior of ignoring
the quota for moves goes in both directions, from INBOX to 'archive' and
vice versa.

And note that the values above are what I set, so it *is* seeing the quota
just fine (and like I said, when I copy a message, it gets appropriately
blocked due to quota).

Is this the normal behavior for message moves?

Oh, and it's definitely a move:

  A0004 UID MOVE 180 Archive.archive1..
* OK [COPYUID 1268932143 180 53] Moved UIDs...* 69 EXPUNGE..A0004 OK Move
completed (0.042 + 0.000 + 0.041 secs)...




BTW, since I spent a good deal of time before I figured this out, if you're
using SQL prefetch, the syntax for overrding the location in passdb
password_query becomes (with the example ns of 'archive'):

userdb_namespace/archive/location

instead of

namespace/archive/location


I couldn't for the life of me figure out why dovecot was
ignoring 'namespace/archive/location'. Writing this email helped me figure
it out, as usual :)
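
And for anyone else reproducing this, a quick sanity check that both quota
roots and the namespace override actually took effect (the user here is just
an example):

# each quota root should show up with its own usage and limit
doveadm quota get -u testuser@example.com

# and the archive namespace should report its own counts/sizes
doveadm mailbox status -u testuser@example.com "messages vsize" 'Archive.*'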


=

doveconf -n:

# 2.2.26 (54d6540): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.16 (fed8554)
# OS: Linux 3.14.77 x86_64 Ubuntu 12.04.5 LTS
auth_cache_negative_ttl = 1 mins
auth_cache_size = 10 M
auth_cache_ttl = 10 mins
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login
base_dir = /var/run/dovecot/
debug_log_path = /var/log/dovecot/debug.log
default_client_limit = 3005
default_internal_user = doveauth
default_process_limit = 1500
deliver_log_format = M=%m, F=%f, S="%s" B="%p/%w" => %$
disable_plaintext_auth = no
first_valid_uid = 199
imap_capability = +UNSELECT
last_valid_uid = 201
listen = *
log_path = /var/log/dovecot/mail.log
mail_debug = yes
mail_location = maildir:~/Maildir
mail_nfs_storage = yes
mail_privileged_group = mail
mail_uid = 200
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags
copy include variables body enotify environment mailbox date index ihave
duplicate mime foreverypart extracttext
namespace {
  hidden = no
  inbox = yes
  list = yes
  location =
  prefix = INBOX.
  separator = .
  subscriptions = yes
  type = private
}
namespace archive {
  inbox = no
  list = children
  location = maildir:~/Archive
  prefix = Archive.
  separator = .
  subscriptions = yes
  type = private
}
passdb {
  args = 

Re: Implementing secondary quota w/ "Archive" namespace

2016-11-21 Thread Mark Moseley
On Sun, Nov 20, 2016 at 3:28 PM, Fred Turner  wrote:

> Hey Everybody—
>
> Posted this to the list a couple of months ago, but didn’t get any
> responses. Is there a better place to ask this question about quota &
> namespace configuration? Seems like a lot of the discussion here is a
> little deeper/lower-level than my configuration question, like debugging
> and development…
>
> Thx,
> Fred
>
>
> > On Sep 20, 2016, at 02:28 PM, Fred Turner  wrote:
> >
> > Hello folks—
> >
> > My first post, so please be gentle… :-)
> >
> > I have a client email server using SSDs for primary user mailboxes, but
> since the number of users keeps growing and they all seem to be very
> reluctant to delete anything, I’ve implemented an “Archive” namespace that
> stores its mailboxes on a larger HD RAID. The idea is that, as the users
> approach their quota, they move messages to the Archive mailboxes to
> alleviate space in their primary Inbox namespace. This secondary storage
> part is working well, but I’m having trouble w/ getting the quotas to work
> right. Here are the basics of the setup:
> >
> > Mac Pro Server 2012
> > Mac OS X Server 10.6.8
> > Dovecot 1.1.20apple0.5
> >
> > Here is how I’ve configured my namespaces (during testing):
> >
> > namespace private {
> > separator = /
> > prefix =
> > inbox = yes
> > }
> >
> > namespace private {
> > separator = /
> > prefix = testArchive/
> > location = maildir:/Shared Items/MailArchive/%u
> > subscriptions = yes
> > }
> >
> > My quota research has led me to try this:
> >
> > quota = maildir:User quota:ns=
> >
> > quota2 = maildir:ns=testArchive/
> > quota2_rule = *:storage=20G
> >
> > The first line is already in the default config, with the exception of
> the added “:ns=“ at the end. The 2nd line in the examples I saw had a
> middle component w/ the quota name, but when I tried that, like so:
> >
> > quota2 = maildir:Archive quota:ns=testArchive/
> >
> > my server fails and shows this in the logs:
> >
> >> Fatal: IMAP(*): Quota root test backend maildir: Unknown parameter:
> ns=testArchive/
> >
> >
> > Any idea why it doesn’t like that? Also, do I need to add a quota_rule
> for the primary quota? It does not have one normally in the Mac OS X Server
> config…
> >
> > Thus far in my testing, I’ve been able to get the 2 quotas to show up in
> Roundcube and Mac Mail.app. It’s a little messy…the first shows up as “User
> quota”, the 2nd as “ns=testArchive/“, presumably because I cannot leave the
> description field in there.
> >
> > Unfortunately, both quotas show the same amount of space in use. If I
> drop the primary quota to a mere 4MB for testing, and if I have 5.2MB of
> messages in a testArchive folder, the space used for “User quota” shows as
> 5.2MB (>100%), as does the “ns=testArchive/“ quota (which is 20GB). In
> actuality, the Inbox namespace is really only using a few KB— the 5.2MB is
> in the testArchive namespace. This means that I cannot move messages
> between either set of namespaces, and new messages are not delivered. So,
> the quota trouble here is negating the whole point of having the Archive
> namespace...
> >
> > Is there a way to get Dovecot to “see” the 2 quotas as unique/discrete?
> It seems like I’m close to accomplishing what I want, but just can’t quite
> get it to cooperate. And that “Unknown parameter” error is bewildering. Any
> ideas?
> >
> > Thx,
> > Fred
> >
> > P.S. I can add my Dovecot config to the thread upon request…didn’t want
> to make this initial message even longer.
>


I beat my head against basically the same wall a few years back (and
similarly felt like I was almost in reach but could never quite get it
working), so I'm highly interested in the same topic. But I'd love to hear
from someone smarter than me if this is even possible. I don't mind beating
my head against a wall as long as it's not for nothing.

Can anyone verify if this is even possible? Timo?


Re: How to understand NFS lookup (requests) spike?

2016-02-17 Thread Mark Moseley
On Wed, Feb 17, 2016 at 12:49 AM, Alessio Cecchi  wrote:

> Hi, I'm are running a classic Dovecot setup:
>
> About ten thousand connected users
> mailbox in Maildir format shared via NFS (NetApp)
> Director for POP/IMAP
> Delivery via Dovecot LDA
>
> All works fine but sometimes I see a spike on the load of POP/IMAP servers
> and high disk usage (close to 100%) on NFS NetApp.
>
> When this happens on NFS stats (of POP/IMAP) I can see an high volume of
> "lookup, remove, rename" requests.
>
> Example (avg is during normal load, max is request number during the
> spike):
>
> Lookup avg 100 max 700
> Remove avg 50 max 300
> Rename avg 50 max 300
> Getattr avg 200 max 250
> Total NFS avg 600 max 1800
>
> I think that some users are doing some kinds of "intensive" operations on
> their mailbox but what and who?
>
> I am currently using "iotop" to monitor the activity of individual users
> but I can't figure out who is causing the high number of I/O requests.
>
>
>
One suggestion is that I'd walk those directories and see if someone has a
mailbox with 100k files in it.
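
Something quick and dirty along these lines usually flushes out the
offenders (the path is just an example, point it at your maildir root):

# count the files in every maildir cur/ and print the biggest ones
find /var/vmail -type d -name cur 2>/dev/null | while read -r d; do
    printf '%s %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn | head -20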


Re: [Dovecot] Dovecot stones

2012-04-02 Thread Mark Moseley
On Sat, Mar 31, 2012 at 2:28 PM, Timo Sirainen t...@iki.fi wrote:
 For the last few days I've been thinking about my company and what it really 
 should do, and as much as the current plan seems reasonable, I think in good 
 conscience I really can't help but to bring up an alternative plan:

...

I'm slightly concerned that there's been no mention of what license
these stones are going to be released under. GPL2? GPL3? Apache? I'm
just hoping these aren't some sort of open core stones that will
only work for basic features but that I'll end up needing to buy
Enterprise-grade stones  to cover large clusters.


Re: [Dovecot] Dovecot v2.2 plans

2012-02-15 Thread Mark Moseley
On Mon, Feb 13, 2012 at 3:47 AM, Timo Sirainen t...@iki.fi wrote:
 Here's a list of things I've been thinking about implementing for Dovecot 
 v2.2. Probably not all of them will make it, but I'm at least interested in 
 working on these if I have time.

 Previously I've mostly been working on things that different companies were 
 paying me to work on. This is the first time I have my own company, but the 
 prioritization still works pretty much the same way:

  - 1. priority: If your company is highly interested in getting something 
 implemented, we can do it as a project via my company. This guarantees that 
 you'll get the feature implemented in a way that integrates well into your 
 system.
  - 2. priority: Companies who have bought Dovecot support contract can let me 
 know what they're interested in getting implemented. It's not a guarantee 
 that it gets implemented, but it does affect my priorities. :)
  - 3. priority: Things other people want to get implemented.

 There are also a lot of other things I have to spend my time on, which are 
 before the 2. priority above. I guess we'll see how things work out.


Not to beat a dead horse, but the ability to use remote directors
might be interesting. It'd make moving into a director setup probably
a bit more easy. Then any server could proxy to the backend servers,
but without losing the advantage of director-based locality. If a box
sees one of its own IPs in the director_servers list, then it knows
it's part of the ring. If it doesn't, then it could contact a randomly
selected director IP.


Re: [Dovecot] IMAP-proxy or not with sogo webmail and dovecot backend

2012-02-13 Thread Mark Moseley
On Mon, Feb 13, 2012 at 5:54 AM, Jan-Frode Myklebust janfr...@tanso.net wrote:
 We've been collecting some stats to see what kind of benefits
 UP/SquirrelMail's IMAP Proxy brings for our SOGo webmail users. Dovecot is
 running in High-performance mode http://wiki2.dovecot.org/LoginProcess
 with authentication caching http://wiki2.dovecot.org/Authentication/Caching

 During the weekend two servers (webmail3 and webmail4) has been running
 with local imapproxy and two servers without (webmail1 and webmail2). Each
 server has served about 1 million http requests, over 3 days.

 server          avg. response time      # requests
 
 webmail1.example.net   0.370411        1092386
 webmail2.example.net   0.374227        1045141
 webmail3.example.net   0.378097        1043919  imapproxy
 webmail4.example.net   0.378593        1028653  imapproxy


 ONLY requests that took more than 5 seconds to process:

 server          avg. response time      # requests
 
 webmail1.example.net   26.048          1125
 webmail2.example.net   26.2997         1080
 webmail3.example.net   28.5596         808      imapproxy
 webmail4.example.net   27.1004         964      imapproxy

 ONLY requests that took more than 10 seconds to process:

 server          avg. response time      # requests
 
 webmail1.example.net   49.1407         516
 webmail2.example.net   53.0139         459
 webmail3.example.net   59.7906         333      imapproxy
 webmail4.example.net   58.167          384      imapproxy

 The responstimes are not very fast, but they do seem to support
 the claim that an imapproxy isn't needed for dovecot.

Out of curiosity, are you running dovecot locally on those webmail
servers as well, or is it talking to remote dovecot servers? I ask
because I'm looking at moving our webmail from an on-box setup to a
remote pool to support director and was going to look into whether
running imapproxyd would help there. We don't bother with it in the
local setup, since dovecot is so fast, but remote (but still on a LAN)
might be different. Though imapproxyd seems to make (wait for it...)
squirrelmail unhappy (complains about IMAP errors, when sniffing shows
none), though I've not bothered to debug it yet.


Re: [Dovecot] Slightly more intelligent way of handling issues in sdbox?

2012-02-07 Thread Mark Moseley
On Tue, Feb 7, 2012 at 4:08 AM, Mark Zealey mark.zea...@webfusion.com wrote:
 06-02-2012 22:47, Timo Sirainen wrote:

 On 3.2.2012, at 16.16, Mark Zealey wrote:

 I was doing some testing on sdbox yesterday. Basically I did the
 following procedure:

 1) Create new sdbox; deliver 2 messages into it (u.1, u.2)
 2) Create a copy of the index file (no cache file created yet)
 3) deliver another message to the mailbox (u.3)
 4) copy back index file from stage (2)
 5) deliver new mail

 Then the message delivered in stage 3 ie u.3 gets replaced with the
 message delivered in (5) also called u.3.

 http://hg.dovecot.org/dovecot-2.1/rev/a765e0a895a9 fixes this.


 I've not actually tried this patch yet, but looking at it, it is perhaps
 useful for the situation I described below when the index is corrupt. In
 this case I am describing however, the not is NOT corrupt - it is simply an
 older version (ie it only thinks there are the first 2 mails in the
 directory, not the 3rd). This could happen for example when mails are being
 stored on different storage than indexes; say for example you have 2 servers
 with remote NFS stored mails but local indexes that rsync between the
 servers every hour. You manually fail over one server to the other and you
 then have a copy of the correct indexes but only from an hour ago. The mails
 are all there on the shared storage but because the indexes are out of date,
 when a new message comes in it will be automatically overwritten.

 (speaking of which, it would be great if force-resync also rebuilt the
 cache files if there are valid cache files around, rather than just doing
 away with them)

 Well, ideally there shouldn't be so much corruption that this matters..


 That's true, but in our experience we usually get corruption in batches
 rather than a one-off occurrence. Our most common case is something like
 this: Say for example there's an issue with the NFS server (assuming we are
 storing indexes on there as well now) and so we have to killall -9 dovecot
 processes or similar. In that case you get a number of corrupted indexes on
 the server. Rebuilding the indexes generates an IO storm (say via lmtp or a
 pop3 access); then the clients log in via imap and we have to re-read all
 the messages to generate the cache files which is a second IO storm. If the
 caches were rebuilt at least semi-intelligently (ie you could extract from
 the cache files a list of things that had previously been cached) that would
 reduce the effects of rare storage level issues such as this.

 Mark

What about something like: a writer to an index/cache file checks for
the existence of <file name>.1. If it doesn't exist or is over a day
old, and the current index/cache file is not corrupt, take a snapshot
of it as <file name>.1. Then if an index/cache file is corrupt, it can
check for <file name>.1 and use that as the basis for a rebuild, so at
least only a day's worth of email is reverted to its previous state
(instead of all of it), assuming it's been modified in less than a
day. Clearly it'd take up a bit more disk space, though the various
dovecot.* files are pretty modest in size, even for big mailboxes.

Or it might be a decent use case for some sort of journaling, so that
the actual index/cache files don't ever get written to, except during
a consolidation, to roll up journals once they've reached some
threshold. There'd definitely be a performance price to pay though,
not to mention breaking backwards compatibility.

And I'm just throwing stuff out to see if any of it sticks, so don't
mistake this for even remotely well thought-out suggestions :)
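
Just to make the first idea a bit more concrete, here's an admin-side sketch
of the snapshot scheme (this is not something Dovecot does itself, and the
path is only an example):

# refresh a ".1" snapshot of each index/cache file at most once a day,
# so a corrupt file has something recent to fall back on
find /var/mail -type f \( -name 'dovecot.index' -o -name 'dovecot.index.cache' \) |
while read -r f; do
    snap="$f.1"
    if [ ! -e "$snap" ] || [ -n "$(find "$snap" -mtime +0 2>/dev/null)" ]; then
        cp -p "$f" "$snap"
    fi
done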


Re: [Dovecot] moving mail out of alt storage

2012-01-30 Thread Mark Moseley
On Sat, Jan 28, 2012 at 12:44 PM, Timo Sirainen t...@iki.fi wrote:
 On 12.1.2012, at 20.32, Mark Moseley wrote:

 On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote:
 I moved some mail into the alt storage:

 doveadm altmove -u jo...@example.com seen savedbefore 1w

 and now I want to move it back to the regular INBOX, but I can't see how
 I can do that with either 'altmove' or 'mailbox move'.

 Is this sdbox or mdbox? With sdbox you could simply mv the files. Or
 apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9

 This is mdbox, which is why I am not sure how to operate because I am
 used to individual files as is with maildir.

 micah


 I'm curious about this too. Is moving the m.# file out of the ALT
 path's storage/ directory into the non-ALT storage/ directory
 sufficient? Or will that cause odd issues?

 You can manually move m.* files to alt storage and back. Just make sure that 
 the same file isn't being simultaneously modified by Dovecot or you'll 
 corrupt it.


Cool, good to know. Thanks!
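
In case it helps anyone else landing on this thread, the manual move is
roughly the following (paths and the m.42 file name are made up, and nothing
should be touching the mailbox while you do it):

MAIN=/home/user/mdbox/storage       # primary mdbox storage/ directory
ALT=/altstorage/user/mdbox/storage  # ALT path's storage/ directory

mv "$ALT/m.42" "$MAIN/"                       # move the file back to primary storage
doveadm force-resync -u user@example.com '*'  # let Dovecot re-check its indexes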


Re: [Dovecot] MySQL server has gone away

2012-01-28 Thread Mark Moseley
On Sat, Jan 28, 2012 at 12:07 PM, Timo Sirainen t...@iki.fi wrote:
 On 13.1.2012, at 20.29, Mark Moseley wrote:

 If there are multiple hosts, it seems like the most robust thing to do
 would be to exhaust the existing connections and if none of those
 succeed, then start a new connection to one of them. It will probably
 result in much more convoluted logic but it'd probably match better
 what people expect from a retry.

 Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1


Excellent, thanks!


[Dovecot] Director questions

2012-01-23 Thread Mark Moseley
In playing with dovecot director, a couple of things came up, one
related to the other:

1) Is there an effective maximum of directors that shouldn't be
exceeded? That is, even if technically possible, that I shouldn't go
over? Since we're 100% NFS, we've scaled servers horizontally quite a
bit. At this point, we've got servers operating as MTAs, servers doing
IMAP/POP directly, and servers separately doing IMAP/POP as webmail
backends. Works just dandy for our existing setup. But to director-ize
all of them, I'm looking at a director ring of maybe 75-85 servers,
which is a bit unnerving, since I don't know if the ring will be able
to keep up. Is there a scale where it'll bog down?

2) If it is too big, is there any way, that I might be missing, to use
remote directors? It looks as if directors have to live locally on the
same box as the proxy. For my MTAs, where they're not customer-facing,
I'm much less worried about the latency it'd introduce. Likewise with
my webmail servers, the extra latency would probably be trivial
compared to the rest of the request--but then again, might not. But
for direct IMAP, the latency likely be more noticeable. So ideally I'd
be able to make my IMAP servers (well, the frontside of the proxy,
that is) be the director pool, while leaving my MTAs to talk to the
director remotely, and possibly my webmail servers remote too. Is that
a remote possibility?


Re: [Dovecot] Director questions

2012-01-23 Thread Mark Moseley
On Mon, Jan 23, 2012 at 11:37 AM, Timo Sirainen t...@iki.fi wrote:
 On 23.1.2012, at 21.13, Mark Moseley wrote:

 In playing with dovecot director, a couple of things came up, one
 related to the other:

 1) Is there an effective maximum of directors that shouldn't be
 exceeded? That is, even if technically possible, that I shouldn't go
 over?

 There's no definite number, but each director adds some extra traffic to 
 network and sometimes extra latency to lookups. So you should have only as 
 many as you need.

Ok.


 Since we're 100% NFS, we've scaled servers horizontally quite a
 bit. At this point, we've got servers operating as MTAs, servers doing
 IMAP/POP directly, and servers separately doing IMAP/POP as webmail
 backends. Works just dandy for our existing setup. But to director-ize
 all of them, I'm looking at a director ring of maybe 75-85 servers,
 which is a bit unnerving, since I don't know if the ring will be able
 to keep up. Is there a scale where it'll bog down?

 That's definitely too many directors. So far the largest installation I know 
 of has 4 directors. Another one will maybe have 6-10 to handle 2Gbps traffic.

Ok


 2) If it is too big, is there any way, that I might be missing, to use
 remote directors? It looks as if directors have to live locally on the
 same box as the proxy. For my MTAs, where they're not customer-facing,
 I'm much less worried about the latency it'd introduce. Likewise with
 my webmail servers, the extra latency would probably be trivial
 compared to the rest of the request--but then again, might not. But
 for direct IMAP, the latency likely be more noticeable. So ideally I'd
 be able to make my IMAP servers (well, the frontside of the proxy,
 that is) be the director pool, while leaving my MTAs to talk to the
 director remotely, and possibly my webmail servers remote too. Is that
 a remote possibility?

 I guess that could be a possibility, but .. Why do you need so many proxies 
 at all? Couldn't all of your traffic go through just a few dedicated 
 proxy/director servers?

I'm probably conceptualizing it wrongly. In our system, since it's
NFS, we have everything pooled. For a given mailbox, any number of MTA
(Exim) boxes could actually do the delivery, any number of IMAP
servers can do IMAP for that mailbox, and any number of webmail
servers could do IMAP too for that mailbox. So our horizontal scaling,
server-wise, is just adding more servers to pools. This is on the
order of a few million mailboxes, per datacenter. It's less messy than
it probably sounds :)

I was assuming that at any spot where a server touched the actual
mailbox, it would need to instead proxy to a set of backend servers.
Is that accurate or way off? If it is accurate, it sounds like we'd
need to shuffle things up a bit.


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-19 Thread Mark Moseley
On Wed, Jan 18, 2012 at 8:39 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 1/18/2012 7:54 AM, Timo Sirainen wrote:
 On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:

 * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)

 * Postfix will feed new email to Dovecot via LMTP

 * Dovecot servers have been split based on their role

   - Dovecot LDA Servers (running LMTP protocol)

   - Dovecot POP/IMAP servers (running POP/IMAP protocols)

 You're going to run into NFS caching troubles with the above split
 setup. I don't recommend it. You will see error messages about index
 corruption with it, and with dbox it can cause metadata loss.
 http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director

 Would it be possible to fix this NFS mdbox index corruption issue in
 this split scenario by using a dual namespace and disabling indexing on
 the INBOX?  The goal being no index file collisions between LDA and imap
 processes.  Maybe something like:

 namespace {
  separator = /
  prefix = #mbox/
  location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY
  inbox = yes
  hidden = yes
  list = no
 }
 namespace {
  separator = /
  prefix =
  location = mdbox:~/mdbox
 }

 Client access to new mail might be a little slower, but if it eliminates
 the index corruption issue and allows the split architecture, it may be
 a viable option.

 --
 Stan

It could be that I botched my test up somehow, but when I tested
something similar yesterday (pointing the index at another location on
the LDA), it didn't work. I was sending from the LDA server and
confirmed that the messages made it to storage/m.# but without the
real indexes being updated. When I checked the mailbox via IMAP, it
never seemed to register that there was a message there, so I'm
guessing that dovecot never looks at the storage files but just relies
on the indexes to be correct. That sound right, Timo?
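
For what it's worth, the only way I can think of to make those "index-less"
deliveries visible afterwards would be a forced resync, which (as I
understand it) rescans the storage files and re-adds anything the indexes
don't know about; I haven't verified it against this exact case though:

doveadm force-resync -u testuser@example.com '*'
doveadm mailbox status -u testuser@example.com messages INBOX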


Re: [Dovecot] LMTP Logging

2012-01-18 Thread Mark Moseley
On Wed, Jan 18, 2012 at 6:52 AM, Timo Sirainen t...@iki.fi wrote:
 On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote:
 Just had a minor suggestion, with no clue how hard/easy it would be to
 implement:

 The %f flag in deliver_log_format seems to pick up the From: header,
 instead of the MAIL FROM:... arg. It'd be handy to have a %F that
 shows the MAIL FROM arg instead. I'm looking at tracking emails
 through logs from Exim to Dovecot easily. I know Message-ID can be
 used for correlation but it adds some complexity to searching, i.e. I
 can't just grep for the sender (as logged by Exim), unless I assume
 MAIL FROM always == From:

 Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e
 http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6



You're awesome, thanks!


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-18 Thread Mark Moseley
<snip>
 * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)

 * Postfix will feed new email to Dovecot via LMTP

 * Dovecot servers have been split based on their role

  - Dovecot LDA Servers (running LMTP protocol)

  - Dovecot POP/IMAP servers (running POP/IMAP protocols)


 You're going to run into NFS caching troubles with the above split
 setup. I don't recommend it. You will see error messages about index
 corruption with it, and with dbox it can cause metadata loss.
 http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director


 That might be the one thing (unfortunately) which prevents us from going
 with the dbox format.  I understand the same issue can actually occur on
 Dovecot Maildir as well, but because Maildir works without these index
 files, we were willing to just go with it.  I will raise it again, but there
 has been a lot of push back about introducing a single point of failure,
 even though this is a perceived one.
</snip>

I'm in the middle of working on a Maildir-mdbox migration as well,
and likewise, over NFS (all Netapps but moving to Sun), and likewise
with split LDA and IMAP/POP servers (and both of those served out of
pools). I was hoping doing things like setting mail_nfs_index = yes
and mmap_disable = yes and mail_fsync = always/optimized would
mitigate most of the risks of index corruption, as well as probably
turning indexing off on the LDA side of things--i.e. all the
suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not
the case? Is there anything else (beyond moving to a director-based
architecture) that can mitigate the risk of index corruption? In our
case, incoming IMAP/POP are 'stuck' to servers based on IP persistence
for a given amount of time, but incoming LDA is randomly distributed.


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-18 Thread Mark Moseley
On Wed, Jan 18, 2012 at 9:58 AM, Timo Sirainen t...@iki.fi wrote:
 On 18.1.2012, at 19.54, Mark Moseley wrote:

 I'm in the middle of working on a Maildir-mdbox migration as well,
 and likewise, over NFS (all Netapps but moving to Sun), and likewise
 with split LDA and IMAP/POP servers (and both of those served out of
 pools). I was hoping doing things like setting mail_nfs_index = yes
 and mmap_disable = yes and mail_fsync = always/optimized would
 mitigate most of the risks of index corruption,

 They help, but aren't 100% effective and they also make the performance worse.

In testing, it seemed very much like the benefits of reducing IOPS by
up to a couple orders of magnitude outweighed having to use those
settings. Both in scripted testing and just using a mail UI, with the
NFS-ish settings, I didn't notice any lag and doing things like
checking a good-sized mailbox were at least as quick as Maildir. And
I'm hoping that reducing IOPS across the entire set of NFS servers
will compound the benefits quite a bit.


 as well as probably
 turning indexing off on the LDA side of things

 You can't turn off indexing with dbox.

Ah, too bad. I was hoping I could get away with the LDA not updating
the index but just dropping the message into storage/m.# but it'd
still be seen on the IMAP/POP side--but hadn't tested that. Guess
that's not the case.


 --i.e. all the
 suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not
 the case? Is there anything else (beyond moving to a director-based
 architecture) that can mitigate the risk of index corruption? In our
 case, incoming IMAP/POP are 'stuck' to servers based on IP persistence
 for a given amount of time, but incoming LDA is randomly distributed.

 What's the problem with director-based architecture?

Nothing, per se. It's just that migrating to mdbox *and* to a director
architecture is quite a bit more added complexity than simply
migrating to mdbox alone.

Hopefully, I'm not hijacking this thread. This seems pretty pertinent
as well to the OP.
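
If anyone wants to quantify that on their own setup, one rough way is to
diff the client-side NFS op counters around a test run, e.g.:

nfsstat -c > /tmp/nfsstat.before
# ... run the IMAP/LDA test load here ...
nfsstat -c > /tmp/nfsstat.after
diff -u /tmp/nfsstat.before /tmp/nfsstat.after | less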


[Dovecot] LMTP Logging

2012-01-16 Thread Mark Moseley
Just had a minor suggestion, with no clue how hard/easy it would be to
implement:

The %f flag in deliver_log_format seems to pick up the From: header,
instead of the MAIL FROM:... arg. It'd be handy to have a %F that
shows the MAIL FROM arg instead. I'm looking at tracking emails
through logs from Exim to Dovecot easily. I know Message-ID can be
used for correlation but it adds some complexity to searching, i.e. I
can't just grep for the sender (as logged by Exim), unless I assume
MAIL FROM always == From:


Re: [Dovecot] MySQL server has gone away

2012-01-13 Thread Mark Moseley
On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen t...@iki.fi wrote:
 On 13.1.2012, at 4.00, Mark Moseley wrote:

 I'm running 2.0.17 and I'm still seeing a decent amount of MySQL
 server has gone away errors, despite having multiple hosts defined in
 my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
 the same thing with 2.0.16 on Debian Squeeze 64-bit.

 E.g.:

 Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
 MySQL server has gone away

 Our mail mysql servers are busy enough that wait_timeout is set to a
 whopping 30 seconds. On my regular boxes, I see a good deal of these
 in the logs. I've been doing a lot of mucking with doveadm/dsync
 (working on maildir-mdbox migration finally, yay!) on test boxes
 (same dovecot package & version) and when I get this error, despite
 the log saying it's retrying, it doesn't seem to be. Instead I get:

 dsync(root): Error: user ...: Auth USER lookup failed

 Try with only one host in the connect string? My guess: Both the 
 connections have timed out, and the retrying fails as well (there is only one 
 retry). Although if the retrying lookup fails, there should be an error 
 logged about it also (you don't see one?)

 Also another idea to avoid them in the first place:

 service auth-worker {
  idle_kill = 20
 }


With just one 'connect' host, it seems to reconnect just fine (using
the same tests as above) and I'm not seeing the same error. It worked
every time that I tried, with no complaints of MySQL server has gone
away.

If there are multiple hosts, it seems like the most robust thing to do
would be to exhaust the existing connections and if none of those
succeed, then start a new connection to one of them. It will probably
result in much more convoluted logic but it'd probably match better
what people expect from a retry.

Alternatively, since in all my tests, the mysql server has closed the
connection prior to this, is the auth worker not recognizing its
connection is already half-closed (in which case, it probably
shouldn't even consider it a legitimate connection and just
automatically reconnect, i.e. try #1, not the retry, which would
happen after another failure).

I'll give the idle_kill a try too. I kind of like the idea of
idle_kill for auth processes anyway, just to free up some connections
on the mysql server.
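
The underlying race is just an auth-worker connection outliving the
server-side timeout, so whatever idle_kill value you pick presumably wants
to stay below MySQL's wait_timeout. Something like this shows both sides
(host and credentials are examples):

# how long the MySQL side lets an idle connection live
mysql -h sqlhost -u dovecot -p -e "SHOW VARIABLES LIKE 'wait_timeout'"

# and then keep the auth workers from idling longer than that, e.g.:
#   service auth-worker {
#     idle_kill = 20
#   }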


Re: [Dovecot] MySQL server has gone away

2012-01-13 Thread Mark Moseley
On Fri, Jan 13, 2012 at 11:38 AM, Robert Schetterer
rob...@schetterer.org wrote:
 Am 13.01.2012 19:29, schrieb Mark Moseley:
 On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen t...@iki.fi wrote:
 On 13.1.2012, at 4.00, Mark Moseley wrote:

 I'm running 2.0.17 and I'm still seeing a decent amount of MySQL
 server has gone away errors, despite having multiple hosts defined in
 my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
 the same thing with 2.0.16 on Debian Squeeze 64-bit.

 E.g.:

 Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
 MySQL server has gone away

 Our mail mysql servers are busy enough that wait_timeout is set to a
 whopping 30 seconds. On my regular boxes, I see a good deal of these
 in the logs. I've been doing a lot of mucking with doveadm/dsync
 (working on maildir-mdbox migration finally, yay!) on test boxes
  (same dovecot package & version) and when I get this error, despite
 the log saying it's retrying, it doesn't seem to be. Instead I get:

 dsync(root): Error: user ...: Auth USER lookup failed

 Try with only one host in the connect string? My guess: Both the 
 connections have timed out, and the retrying fails as well (there is only 
 one retry). Although if the retrying lookup fails, there should be an error 
 logged about it also (you don't see one?)

 Also another idea to avoid them in the first place:

 service auth-worker {
  idle_kill = 20
 }


 With just one 'connect' host, it seems to reconnect just fine (using
 the same tests as above) and I'm not seeing the same error. It worked
 every time that I tried, with no complaints of MySQL server has gone
 away.

 If there are multiple hosts, it seems like the most robust thing to do
 would be to exhaust the existing connections and if none of those
 succeed, then start a new connection to one of them. It will probably
 result in much more convoluted logic but it'd probably match better
 what people expect from a retry.

 Alternatively, since in all my tests, the mysql server has closed the
 connection prior to this, is the auth worker not recognizing its
 connection is already half-closed (in which case, it probably
 shouldn't even consider it a legitimate connection and just
 automatically reconnect, i.e. try #1, not the retry, which would
 happen after another failure).

 I'll give the idle_kill a try too. I kind of like the idea of
 idle_kill for auth processes anyway, just to free up some connections
 on the mysql server.

 by the way , if you use sql for auth have you tried auth caching ?

 http://wiki.dovecot.org/Authentication/Caching

 i.e.

 # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that
 # bsdauth, PAM and vpopmail require cache_key to be set for caching to
 be used.

 auth_cache_size = 10M

 # Time to live for cached data. After TTL expires the cached record is no
 # longer used, *except* if the main database lookup returns internal
 failure.
 # We also try to handle password changes automatically: If user's previous
 # authentication was successful, but this one wasn't, the cache isn't used.
 # For now this works only with plaintext authentication.

 auth_cache_ttl = 1 hour

 # TTL for negative hits (user not found, password mismatch).
 # 0 disables caching them completely.

 auth_cache_negative_ttl = 0


Yup, we have caching turned on for our production boxes. On this
particular box, I'd just shut off caching so that I could work on a
script for converting from maildir to mdbox and run it repeatedly on the
same mailbox. I got tired of restarting dovecot between each test :)
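
For the archives: newer Dovecot releases (2.1 and later, if I remember
right) can drop the auth cache on the fly, which would have saved me the
restarts:

doveadm auth cache flush                       # flush everything
doveadm auth cache flush testuser@example.com  # or just one user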


Re: [Dovecot] MySQL server has gone away

2012-01-13 Thread Mark Moseley
On Fri, Jan 13, 2012 at 2:46 PM, Paul B. Henson hen...@acm.org wrote:
 On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote:

 Also another idea to avoid them in the first place:

 service auth-worker {
   idle_kill = 20
 }

 Ah, set the auth-worker timeout to less than the mysql timeout to
 prevent a stale mysql connection from ever being used. I'll try that,
 thanks.

I gave that a try. Sometimes it seems to kill off the auth-worker but
not till after a minute or so (with idle_kill = 20). Other times, the
worker stays around for more like 5 minutes (I gave up watching),
despite being idle -- and I'm the only person connecting to it, so
it's definitely idle. Does auth-worker perhaps only wake up every so
often to check its idle status?

To test, I kicked off a dsync, then grabbed a netstat:

tcp        0      0 10.1.15.129:40070   10.1.52.47:3306     ESTABLISHED 29146/auth worker [
tcp        0      0 10.1.15.129:33369   10.1.52.48:3306     ESTABLISHED 29146/auth worker [
tcp        0      0 10.1.15.129:54083   10.1.52.49:3306     ESTABLISHED 29146/auth worker [

then kicked off this loop:

# while true; do date; ps p 29146 |tail -n1; sleep 1; done

Fri Jan 13 18:05:14 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
Fri Jan 13 18:05:15 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]

  More lines of the loop ...

Fri Jan 13 18:05:35 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
18:05:36.252976 IP 10.1.52.48.3306 > 10.1.15.129.33369: F 77:77(0) ack
92 win 91 <nop,nop,timestamp 1850213473 320254609>
18:05:36.288549 IP 10.1.15.129.33369 > 10.1.52.48.3306: . ack 78 win
913 <nop,nop,timestamp 320257515 1850213473>
Fri Jan 13 18:05:36 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
18:05:37.196204 IP 10.1.52.49.3306 > 10.1.15.129.54083: F 806:806(0)
ack 1126 win 123 <nop,nop,timestamp 1534230122 320254609>
18:05:37.228594 IP 10.1.15.129.54083 > 10.1.52.49.3306: . ack 807 win
1004 <nop,nop,timestamp 320257609 1534230122>
18:05:37.411955 IP 10.1.52.47.3306 > 10.1.15.129.40070: F 806:806(0)
ack 1126 win 123 <nop,nop,timestamp 774321777 320254650>
18:05:37.448573 IP 10.1.15.129.40070 > 10.1.52.47.3306: . ack 807 win
1004 <nop,nop,timestamp 320257631 774321777>
Fri Jan 13 18:05:37 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]

... more lines of the loop ...

Fri Jan 13 18:10:13 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
Fri Jan 13 18:10:14 EST 2012
29146 ?S  0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
^C

at which point I bailed out. Looking again a couple of minutes later,
it was gone. Nothing else was going on and the logs don't show any
activity between 18:05:07 and 18:10:44.


Re: [Dovecot] moving mail out of alt storage

2012-01-12 Thread Mark Moseley
On Thu, Sep 15, 2011 at 10:14 AM, Micah Anderson mi...@riseup.net wrote:
 Timo Sirainen t...@iki.fi writes:

 On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote:
 I moved some mail into the alt storage:

 doveadm altmove -u jo...@example.com seen savedbefore 1w

 and now I want to move it back to the regular INBOX, but I can't see how
 I can do that with either 'altmove' or 'mailbox move'.

 Is this sdbox or mdbox? With sdbox you could simply mv the files. Or
 apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9

 This is mdbox, which is why I am not sure how to operate because I am
 used to individual files as is with maildir.

 micah


I'm curious about this too. Is moving the m.# file out of the ALT
path's storage/ directory into the non-ALT storage/ directory
sufficient? Or will that cause odd issues?


[Dovecot] MySQL server has gone away

2012-01-12 Thread Mark Moseley
I'm running 2.0.17 and I'm still seeing a decent amount of MySQL
server has gone away errors, despite having multiple hosts defined in
my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
the same thing with 2.0.16 on Debian Squeeze 64-bit.

E.g.:

Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
MySQL server has gone away

Our mail mysql servers are busy enough that wait_timeout is set to a
whopping 30 seconds. On my regular boxes, I see a good deal of these
in the logs. I've been doing a lot of mucking with doveadm/dsync
(working on maildir-mdbox migration finally, yay!) on test boxes
(same dovecot package & version) and when I get this error, despite
the log saying it's retrying, it doesn't seem to be. Instead I get:

dsync(root): Error: user ...: Auth USER lookup failed
dsync(root): Fatal: User lookup failed: Internal error occurred. Refer
to server log for more information.

Watching tcpdump at the same time, it looks like it's going through
some of the mysql servers, but all of them have by now disconnected
and are in CLOSE_WAIT.

Here's an (edited) example after doing a dsync that completes without
errors, with tcpdump running in the background:

# sleep 30; netstat -ant | grep 3306; dsync -C^ -u mail...@test.com
backup mdbox:~/mdbox

tcp        1      0 10.1.15.129:57436   10.1.52.48:3306     CLOSE_WAIT
tcp        1      0 10.1.15.129:49917   10.1.52.49:3306     CLOSE_WAIT
tcp        1      0 10.1.15.129:35904   10.1.52.47:3306     CLOSE_WAIT

20:49:59.725005 IP 10.1.15.129.35904 > 10.1.52.47.3306: F 1126:1126(0)
ack 807 win 1004 <nop,nop,timestamp 312603858 77259>
20:49:59.725459 IP 10.1.52.47.3306 > 10.1.15.129.35904: . ack 1127 win
123 <nop,nop,timestamp 77998 312603858>
20:49:59.725568 IP 10.1.15.129.57436 > 10.1.52.48.3306: F 1126:1126(0)
ack 807 win 1004 <nop,nop,timestamp 312603858 1842560856>
20:49:59.725779 IP 10.1.52.48.3306 > 10.1.15.129.57436: . ack 1127 win
123 <nop,nop,timestamp 1842561225 312603858>

dsync(root): Error: user mail...@test.com: Auth USER lookup failed
dsync(root): Fatal: User lookup failed: Internal error occurred. Refer
to server log for more information.


10.1.15.129 in this case is the dovecot server, and the 10.1.52.0/24
boxes are mysql servers. That's the same pattern I've seen almost
every time. Just a FIN packet to two of the servers (ack'd by the
mysql server) and then it fails.

Is the retry mechanism supposed to transparently start a new
connection, or is this how it works? In connecting remotely to these
same servers (which aren't getting production traffic, so I'm the only
person connecting to them), I get seemingly random disconnects via
IMAP, always coinciding with a MySQL server has gone away error in
the logs.

This is non-production, so I'm happy to turn on whatever debugging
would be useful.

Here's doveconf -n from the box the tcpdump was on. This box is just
configured for lmtp (but have seen the same thing on one configured
for IMAP/POP as well), so it's pretty small, config-wise:

# 2.0.17: /etc/dovecot/dovecot/dovecot.conf
# OS: Linux 3.0.9-nx i686 Debian 5.0.9
auth_cache_negative_ttl = 0
auth_cache_ttl = 0
auth_debug = yes
auth_failure_delay = 0
base_dir = /var/run/dovecot/
debug_log_path = /var/log/dovecot/debug.log
default_client_limit = 3005
default_internal_user = doveauth
default_process_limit = 1500
deliver_log_format = M=%m, F=%f, S=%s => %$
disable_plaintext_auth = no
first_valid_uid = 199
last_valid_uid = 201
lda_mailbox_autocreate = yes
listen = *
log_path = /var/log/dovecot/mail.log
mail_debug = yes
mail_fsync = always
mail_location = maildir:~/Maildir:INDEX=/var/cache/dovecot/%2Mu/%2.2Mu/%u
mail_nfs_index = yes
mail_nfs_storage = yes
mail_plugins = zlib quota
mail_privileged_group = mail
mail_uid = 200
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
mdbox_rotate_interval = 1 days
mmap_disable = yes
namespace {
  hidden = no
  inbox = yes
  list = yes
  location =
  prefix = INBOX.
  separator = .
  subscriptions = yes
  type = private
}
passdb {
  args = /opt/dovecot/etc/lmtp/sql.conf
  driver = sql
}
plugin {
  info_log_path = /var/log/dovecot/dovecot-deliver.log
  log_path = /var/log/dovecot/dovecot-deliver.log
  quota = maildir:User quota
  quota_rule = *:bytes=25M
  quota_rule2 = INBOX.Trash:bytes=+10%%
  quota_rule3 = *:messages=3000
  sieve = ~/sieve/dovecot.sieve
  sieve_before = /etc/dovecot/scripts/spam.sieve
  sieve_dir = ~/sieve/
  zlib_save = gz
  zlib_save_level = 3
}
protocols = lmtp sieve
service auth-worker {
  unix_listener auth-worker {
mode = 0666
  }
  user = doveauth
}
service auth {
  client_limit = 8000
  unix_listener login/auth {
mode = 0666
  }
  user = doveauth
}
service lmtp {
  executable = lmtp -L
  process_min_avail = 10
  unix_listener 

Re: [Dovecot] Glued-together private namespaces

2011-11-16 Thread Mark Moseley
On Wed, Nov 16, 2011 at 10:34 AM, Timo Sirainen t...@iki.fi wrote:
 On Tue, 2011-11-15 at 16:04 -0800, Mark Moseley wrote:
  The gotcha is that you have two completely independent quotas with
  independent usage/limits for the INBOX and Archive namespaces. If that
  is what you want, it should all be fine.

 Nope, that's totally fine. The idea is to put Archive on cheaper
 (slower) storage and then grant more generous quotas there to make it
 worth their while to use, without slowing down their Inbox. Another
 application would be to put their Spam in another namespace (for
 people who choose to have it put in a separate folder) with a lower
 quota, again to offload it onto cheaper storage, since hardly anyone
 actually looks at it.

 Should be fine then.

 Or is this something that I could be doing more transparently in 2.1 with 
 imapc?

 I don't really see how that could help.

Ah, bummer. I thought maybe 2.1 could proxy to a separate folder or
namespace (but I've also barely had a chance to look at it), so that
certain folders would be grabbed from a proxy. Haven't really thought
that through though :)


Re: [Dovecot] Glued-together private namespaces

2011-11-15 Thread Mark Moseley
On Tue, Nov 15, 2011 at 11:39 AM, Timo Sirainen t...@iki.fi wrote:
 On Mon, 2011-11-14 at 10:23 -0800, Mark Moseley wrote:

  Thanks to a fortuitously unrelated thread (how to disable quota for
  second namespace), I got the quota part figured out and that seems to
  be working: Add a second entry to plugin {}, e.g. quota2 =
  maildir:Archive quota:ns=INBOX.Archives. and add rules for
  userdb_quota2_rule, userdb_quota2_rule2, etc.
 
  My real question now is: Are there any fatal gotchas in this that I'm
  just not thinking of?
 

 Haven't had a chance to try this large-scale yet. Anybody have any
 thoughts on it?

 The gotcha is that you have two completely independent quotas with
 independent usage/limits for the INBOX and Archive namespaces. If that
 is what you want, it should all be fine.

Nope, that's totally fine. The idea is to put Archive on cheaper
(slower) storage and then grant more generous quotas there to make it
worth their while to use, without slowing down their Inbox. Another
application would be to put their Spam in another namespace (for
people who choose to have it put in a separate folder) with a lower
quota, again to offload it onto cheaper storage, since hardly anyone
actually looks at it.

Or is this something that I could be doing more transparently in 2.1 with imapc?


Re: [Dovecot] Glued-together private namespaces

2011-11-14 Thread Mark Moseley
On Mon, Sep 26, 2011 at 10:11 AM, Mark Moseley moseleym...@gmail.com wrote:
 On Fri, Sep 23, 2011 at 3:35 PM, Mark Moseley moseleym...@gmail.com wrote:
 I've been goofing with this all day with 2.0.15 and I'm starting to
 realize that either a) I'm not that smart, b) it's been so long since
 I messed with namespaces that I'm going about it completely wrong, or
 c) it's just not possible. I haven't posted 'doveconf -n' and other
 details, because mainly I'm just looking for 'yes, this is possible'
 or 'no, you're smoking crack' before posting further details. At this
 point, it's all maildir and moving to mdbox, while highly desirable in
 the future, is not possible in the near- to medium-term.

 I'm trying to glue a namespace underneath INBOX:

 namespace INBOX {
        type = private
        separator = .
        prefix = INBOX.    # Yes, this used to be on Courier
        inbox = yes
        list = yes
        hidden = no
        subscriptions = yes
        location = maildir:~/Maildir
 }
 namespace archive {
        type = private
        separator = .
        prefix = INBOX.Archives.
        inbox = no
        list = children
        subscriptions = yes
        location = maildir:~/Maildir-Archive
 }


 I've tried putting namespace archive's 'prefix' as just Archives,
 but Tbird doesn't seem to see this namespace, regardless of how much I
 futz with the imap settings in tbird.

 With the above setup, it actually seems to work correctly (provided
 ~/Maildir-Archive exists), though I'm sure a big gotcha is waiting in
 the wings. I can move messages around, create subfolders, subscribe to
 folders in ~/Maildir-Archive). The only thing I can't seem to get
 working is quotas. With my password_query like:

 password_query = ...
 CONCAT( '*:bytes=', 1M ) AS 'userdb_quota_rule', \
 CONCAT( '*:messages=10' ) AS 'userdb_quota_rule2', \
 CONCAT( 'INBOX.Archives:bytes=+4900M' ) AS 'userdb_quota_rule3', \
 CONCAT( 'INBOX.Archives:messages=+3900' ) AS 'userdb_quota_rule4'
 ...

 only the default quota seems to be in place for any subfolder of
 INBOX.Archives and for INBOX.Archives itself, i.e. *:bytes still
 applies to INBOX.Archives. The debug log shows that:

 Debug: Quota root: name=User quota backend=maildir args=
 Debug: Quota rule: root=User quota mailbox=* bytes=1048576 messages=0
 Debug: Quota rule: root=User quota mailbox=* bytes=1048576 messages=10
 Debug: Quota rule: root=User quota mailbox=INBOX.Archives
 bytes=+5138022400 messages=0
 Debug: Quota rule: root=User quota mailbox=INBOX.Archives
 bytes=+5138022400 messages=+3900

 These are wildly stupid quotas but they're just there to test. With
 INBOX already at capacity (byte-wise; only set to a meg), copying
 large messages inside INBOX.Archives fails (only copying a 800k
 message but the quota should be 5gig now).

 Again, before I post configs, I'm just curious if what I'm trying to
 do isn't remotely possible, or that I'm approaching this entirely
 wrongly. Thanks!


 Thanks to a fortuitously unrelated thread (how to disable quota for
 second namespace), I got the quota part figured out and that seems to
 be working: Add a second entry to plugin {}, e.g. quota2 =
 maildir:Archive quota:ns=INBOX.Archives. and add rules for
 userdb_quota2_rule, userdb_quota2_rule2, etc.

 My real question now is: Are there any fatal gotchas in this that I'm
 just not thinking of?


Haven't had a chance to try this large-scale yet. Anybody have any
thoughts on it?


Re: [Dovecot] Glued-together private namespaces

2011-09-26 Thread Mark Moseley
On Fri, Sep 23, 2011 at 3:35 PM, Mark Moseley moseleym...@gmail.com wrote:
 I've been goofing with this all day with 2.0.15 and I'm starting to
 realize that either a) I'm not that smart, b) it's been so long since
 I messed with namespaces that I'm going about it completely wrong, or
 c) it's just not possible. I haven't posted 'doveconf -n' and other
 details, because mainly I'm just looking for 'yes, this is possible'
 or 'no, you're smoking crack' before posting further details. At this
 point, it's all maildir and moving to mdbox, while highly desirable in
 the future, is not possible in the near- to medium-term.

 I'm trying to glue a namespace underneath INBOX:

 namespace INBOX {
        type = private
        separator = .
        prefix = INBOX.    # Yes, this used to be on Courier
        inbox = yes
        list = yes
        hidden = no
        subscriptions = yes
        location = maildir:~/Maildir
 }
 namespace archive {
        type = private
        separator = .
        prefix = INBOX.Archives.
        inbox = no
        list = children
        subscriptions = yes
        location = maildir:~/Maildir-Archive
 }


 I've tried putting namespace archive's 'prefix' as just Archives,
 but Tbird doesn't seem to see this namespace, regardless of how much I
 futz with the imap settings in tbird.

 With the above setup, it actually seems to work correctly (provided
 ~/Maildir-Archive exists), though I'm sure a big gotcha is waiting in
 the wings. I can move messages around, create subfolders, subscribe to
 folders in ~/Maildir-Archive). The only thing I can't seem to get
 working is quotas. With my password_query like:

 password_query = ...
 CONCAT( '*:bytes=', 1M ) AS 'userdb_quota_rule', \
 CONCAT( '*:messages=10' ) AS 'userdb_quota_rule2', \
 CONCAT( 'INBOX.Archives:bytes=+4900M' ) AS 'userdb_quota_rule3', \
 CONCAT( 'INBOX.Archives:messages=+3900' ) AS 'userdb_quota_rule4'
 ...

 only the default quota seems to be in place for any subfolder of
 INBOX.Archives and for INBOX.Archives itself, i.e. *:bytes still
  applies to INBOX.Archives. The debug log shows that:

 Debug: Quota root: name=User quota backend=maildir args=
 Debug: Quota rule: root=User quota mailbox=* bytes=1048576 messages=0
 Debug: Quota rule: root=User quota mailbox=* bytes=1048576 messages=10
 Debug: Quota rule: root=User quota mailbox=INBOX.Archives
 bytes=+5138022400 messages=0
 Debug: Quota rule: root=User quota mailbox=INBOX.Archives
 bytes=+5138022400 messages=+3900

 These are wildly stupid quotas but they're just there to test. With
 INBOX already at capacity (byte-wise; only set to a meg), copying
 large messages inside INBOX.Archives fails (only copying a 800k
 message but the quota should be 5gig now).

 Again, before I post configs, I'm just curious if what I'm trying to
 do isn't remotely possible, or that I'm approaching this entirely
 wrongly. Thanks!


Thanks to a fortuitously unrelated thread (how to disable quota for
second namespace), I got the quota part figured out and that seems to
be working: Add a second entry to plugin {}, e.g. quota2 =
maildir:Archive quota:ns=INBOX.Archives. and add rules for
userdb_quota2_rule, userdb_quota2_rule2, etc.

My real question now is: Are there any fatal gotchas in this that I'm
just not thinking of?


[Dovecot] Glued-together private namespaces

2011-09-23 Thread Mark Moseley
I've been goofing with this all day with 2.0.15 and I'm starting to
realize that either a) I'm not that smart, b) it's been so long since
I messed with namespaces that I'm going about it completely wrong, or
c) it's just not possible. I haven't posted 'doveconf -n' and other
details, because mainly I'm just looking for 'yes, this is possible'
or 'no, you're smoking crack' before posting further details. At this
point, it's all maildir and moving to mdbox, while highly desirable in
the future, is not possible in the near- to medium-term.

I'm trying to glue a namespace underneath INBOX:

namespace INBOX {
type = private
separator = .
prefix = INBOX.# Yes, this used to be on Courier
inbox = yes
list = yes
hidden = no
subscriptions = yes
location = maildir:~/Maildir
}
namespace archive {
type = private
separator = .
prefix = INBOX.Archives.
inbox = no
list = children
subscriptions = yes
location = maildir:~/Maildir-Archive
}


I've tried putting namespace archive's 'prefix' as just Archives,
but Tbird doesn't seem to see this namespace, regardless of how much I
futz with the imap settings in tbird.

With the above setup, it actually seems to work correctly (provided
~/Maildir-Archive exists), though I'm sure a big gotcha is waiting in
the wings. I can move messages around, create subfolders, and subscribe
to folders in ~/Maildir-Archive. The only thing I can't seem to get
working is quotas. With my password_query like:

password_query = ...
CONCAT( '*:bytes=', 1M ) AS 'userdb_quota_rule', \
CONCAT( '*:messages=10' ) AS 'userdb_quota_rule2', \
CONCAT( 'INBOX.Archives:bytes=+4900M' ) AS 'userdb_quota_rule3', \
CONCAT( 'INBOX.Archives:messages=+3900' ) AS 'userdb_quota_rule4'
...

only the default quota seems to be in place for any subfolder of
INBOX.Archives and for INBOX.Archives itself, i.e. *:bytes still
applies to INBOX.Archives. The debug log shows that:

Debug: Quota root: name=User quota backend=maildir args=
Debug: Quota rule: root=User quota mailbox=* bytes=1048576 messages=0
Debug: Quota rule: root=User quota mailbox=* bytes=1048576 messages=10
Debug: Quota rule: root=User quota mailbox=INBOX.Archives
bytes=+5138022400 messages=0
Debug: Quota rule: root=User quota mailbox=INBOX.Archives
bytes=+5138022400 messages=+3900

These are wildly stupid quotas, but they're just there for testing.
With INBOX already at capacity (byte-wise; it's only set to a meg),
copying large messages inside INBOX.Archives fails (I'm only copying an
800k message, but the quota should be 5 GB now).

Again, before I post configs, I'm just curious whether what I'm trying
to do isn't remotely possible, or whether I'm approaching this entirely
wrong. Thanks!


Re: [Dovecot] Dovecot 2, Imap service, client_limit

2011-07-19 Thread Mark Moseley
On Tue, Jul 19, 2011 at 3:56 PM, Steve Fatula compconsult...@yahoo.com wrote:
 I see back in November of last year a thread about using client_limit
 in the imap service (not imap-login) that would allow each imap process
 to serve more than one connection. Sounded good, until I tried it!

 When I did, unlike the OP of that thread, I got:

 dovecot: imap(submit.user): Fatal: setuid(503(submit.user) from userdb lookup)
 failed with euid=501(links): Operation not permitted (This binary should
 probably be called with process user set to 503(submit.user) instead of
 501(links))

 So it would appear that this does not work. Still, that thread posted
 test results showing that it did work.

 Sample message within the thread; read it for more:

 http://www.dovecot.org/list/dovecot/2010-November/054893.html

 I'd love to be able to use a single imap process for more than one connection.
 Is this still possible, or not? If so, how?

 Steve



In my case, we use a single shared user for all mailboxes, so there's
no UID issue. The imap process is always running as that one UID, so
it doesn't ever try to setuid to something else.
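
For context, the relevant pieces look something like this (a rough
sketch -- the user name and limit are placeholders, not our real
values):

# every mailbox is owned by a single system user, so an imap process
# never has to setuid to a different UID
mail_uid = vmail
mail_gid = vmail

service imap {
  # one imap process can serve several connections; only safe here
  # because every connection runs as that same shared UID
  client_limit = 5
}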

Timo, is there some way/setting to make client_limit != 1 apply only to
processes running as the same user? I.e., if an imap process (with
client_limit > 1) was running as UID 501 (to use the OP's uids), and
imap-login needed to hand UID 503 to an imap process, it wouldn't send
it to the one running as UID 501, but would instead either create a new
imap proc or, if UID 503 already had an imap proc running, send UID 503
to that one. (I realize that's a mouthful, but hopefully you know what
I mean.)


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-16 Thread Mark Moseley
2010/12/16 Jose Celestino j...@co.sapo.pt:
 On Qui, 2010-12-16 at 12:56 +0100, Ralf Hildebrandt wrote:
 * Cor Bosman c...@xs4all.nl:

  I saw someone also posted a patch to the LKML.

 I guess I missed that one


 http://lkml.org/lkml/2010/12/15/470

Timo, if we apply the above kernel patch, do we still need to patch
dovecot or is it just an either-or thing?

I'm guessing the reason I saw so many fewer context switches when
using a client_limit of more than one is that more connections per
process means fewer children to wake up.


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-10 Thread Mark Moseley
On Thu, Dec 9, 2010 at 4:54 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 Mark Moseley put forth on 12/9/2010 12:18 PM:

 If you at some point upgrade to 2.6.35, I'd be interested to hear if
 the load skyrockets on you. I also get the impression that the load
 average calculation in these recent kernels is 'touchier' than in
 pre-2.6.35.

 This thread may be of value in relation to this issue:

 http://groups.google.com/group/linux.kernel/browse_thread/thread/eb5cb488b7404dd2/0c954e88d2f20e56

 It seems there are some load issues regarding recent Linux kernels, from
 2.6.34 (maybe earlier?) on up.  The commit of the patch to fix it was
 Dec 8th--yesterday.  So it'll be a while before distros get this patch out.

Glad you brought up that thread. I'd seen it before, but not any time
lately (and certainly not since two days ago!). Hopefully that fix will
land in 2.6.37 rather than waiting until 2.6.38 (or I can patch it into
2.6.36.2 myself). I roll my own kernels, so no need to wait on distros.


 However, this still doesn't seem to explain Ralf's issue, where the
 kernel stays the same, but the Dovecot version changes, with 2.0.x
 causing the high load and 1.2.x being normal.  Maybe 2.0.x simply causes
 this bug to manifest itself more loudly?

 This Linux kernel bug doesn't explain the high load reported with 2.0.x
 on FreeBSD either.  But it is obviously somewhat at play for people
 running these kernels versions and 2.0.x.  To what degree it adds to the
 load I cannot say.

Yeah, my comment on the kernel was just in reply to one of Cor's three
debugging tracks, one of which was to try upgrading the kernel. I
figured I should mention the load issue in case he was about to upgrade
to the latest and greatest, since it could make things worse (or at
least make things *look* worse).


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Mark Moseley
On Wed, Dec 8, 2010 at 11:54 PM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
 * Mark Moseley moseleym...@gmail.com:
 On Wed, Dec 8, 2010 at 3:03 PM, Timo Sirainen t...@iki.fi wrote:
  On 8.12.2010, at 22.52, Cor Bosman wrote:
 
  1 server with service_count = 0, and src/imap/main.c patch
 
  By this you mean service_count=0 for both service imap-login and service 
  imap blocks, right?
 
 

 Speaking from my own experience, the system loads on our dovecot boxes
 went up *substantially* when we upgraded kernels from 2.6.32.x and
 2.6.33.x to newer ones (late 2.6.35.x and 2.6.36 -- haven't tried
 2.6.36.1 yet). But I also saw loads on all sorts of other types of
 boxes grow when they moved to 2.6.35.x and 2.6.36, so it's not
 necessarily dovecot-related. Though you've got plenty of versions to
 choose from, from 2.6.27.x on up.

 We're on 2.6.32 and the load only goes up when I change dovecot (not
 when I change the kernel, which I didn't do so far)

If you at some point upgrade to 2.6.35, I'd be interested to hear if
the load skyrockets on you. I also get the impression that the load
average calculation in these recent kernels is 'touchier' than in
pre-2.6.35. Even with similar CPU and I/O utilization, the load
average on a 2.6.35 box is much higher than on earlier kernels, and it
also seems to react more quickly; more jitter, I guess. That's based on
nothing scientific, though.


 Getting 'imap-login' and 'pop3-login' set to service_count=0 and
 'pop3' and 'imap' set to service_count=1000 (as per Timo's suggestion)
 helped keep the boxes from spinning into oblivion. To reduce the
 enormous number of context switches, I've got 'pop3's client_limit set
 to 4. I played around with 'imap's client_limit between 1 and 5 but
 haven't quite found the sweet spot yet. pop3 with client_limit 4 seems
 to work pretty well. That brought context switches down from
 10,000-30,000 to sub-10,000.

 Interesting. Would that spawn a lot of pop3 processes? On the other
 hand, almost nobody is using pop3 here

Upping the client_limit actually results in fewer processes, since a
single process can service up to #client_limit connections. When I
bumped up the client_limit for imap, my context switches plummeted.
Though, as Timo pointed out on another thread the other day when I was
asking about this, when a process blocks on I/O, it blocks all the
connections that process is servicing. (Timo, correct me if I'm wildly
off here -- I didn't even know this existed until a week or two ago.)
So you can end up creating a bottleneck, which is why I've been playing
with finding a sweet spot for imap. I figure enough of a process's imap
connections must be sitting in IDLE at any given moment that setting
client_limit to 4 or 5 isn't too bad. Though it's not impossible that
by putting multiple connections on a single process I'm actually
throttling the system, resulting in fewer context switches (I'd imagine
bottlenecked processes would be blocked on I/O and rack up a lot of
voluntary context switches).
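
For reference, the tuning I described above amounts to service blocks
along these lines (a sketch -- the numbers are just what I happen to be
running, not recommendations):

service imap-login {
  # reuse login processes instead of forking one per connection
  service_count = 0
}
service pop3-login {
  service_count = 0
}
service imap {
  service_count = 1000   # recycle an imap process after 1000 connections
  client_limit = 5       # still hunting for the sweet spot here
}
service pop3 {
  service_count = 1000
  client_limit = 4       # this is what brought context switches under 10,000
}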


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Mark Moseley
On Thu, Dec 9, 2010 at 11:13 AM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
 * Mark Moseley moseleym...@gmail.com:

  We're on 2.6.32 and the load only goes up when I change dovecot (not
  when I change the kernel, which I didn't do so far)

 If you at some point upgrade to 2.6.35, I'd be interested to hear if
 the load skyrockets on you.

 You mean even more? I'm still hoping it would decrease at some point :)
 I updated to 2.6.32-27-generic-pae today. I wonder what happens.

 I also get the impression that the load average calculation in these
 recent kernels is 'touchier' than in pre-2.6.35. Even with similar CPU
 and I/O utilization, the load average on a 2.6.35 box is much higher
 than on earlier kernels, and it also seems to react more quickly; more
 jitter, I guess. That's based on nothing scientific, though.

 Interesting.

Interesting, and a major PITA. If I had more time, I'd go back and
iterate through each kernel leading up to 2.6.35 to see where things
start to go downhill. You'd like to think each kernel version would
make things a little bit faster (or at least no worse). Then again, if
it really is just more jittery and shows higher load averages without
actually working any harder than an older kernel, then it's only my
perception.

 Upping the client_limit actually results in fewer processes, since a
 single process can service up to #client_limit connections. When I
 bumped up the client_limit for imap, my context switches plummeted.

 Which setting are you using now?

At the moment I'm using client_limit=5 for imap, but I keep playing
with it; I have a feeling that's too high. If I had faster CPUs and
more memory on these boxes, it wouldn't be so painful to put it back
to 1.

 Though as Timo pointed out on another thread the other day when I was
 asking about this, when that proc blocks on I/O, it's blocking all the
 connections that the process is servicing. Timo, correct me if I'm
 wildly off here -- I didn't even know this existed before a week or
 two ago. So you can then end up creating a bottleneck, thus why I've
 been playing with finding a sweet spot for imap.

 Blocking on /proc? Never heard that before.

I was just being lazy; I meant 'process' :)  So, if I'm understanding
it correctly: assume you've got client_limit=2, with connection A and
connection B serviced by a single process. If A does a file operation
that blocks, then B is effectively blocked too. So I imagine that with
enough I/O backlog you could create a downward spiral where you can't
service connections as fast as they're coming in, and you top out at
#process_limit (which, btw, I set to 300 for imap with client_limit=5).
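
In config terms, that ceiling is just the product of the two settings
(same numbers as above, purely for illustration):

service imap {
  client_limit  = 5
  process_limit = 300   # hard cap: 300 procs x 5 connections = 1500 imap sessions
}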

 I figure that enough of a process's imap connections must be sitting in
 IDLE at any given moment, so setting client_limit to like 4 or 5 isn't
 too bad. Though it's not impossible that by putting multiple
 connections on a single process, I'm actually throttling the system,
 resulting in fewer context switches (though I'd imagine bottlenecked
 procs would be blocked on I/O and do a lot of voluntary context switches).

