Re: [Dovecot] Locking /var/mail/user issue with postfix and dovecot

2012-10-26 Thread Stan Hoeppner
On 10/25/2012 10:54 PM, Ben Morrow wrote:
 At  7PM -0500 on 25/10/12 you (Stan Hoeppner) wrote:
 On 10/25/2012 4:24 PM, Ben Morrow wrote:
 At  1PM -0500 on 25/10/12 you (Stan Hoeppner) wrote:

 Switch Postfix to use the Dovecot Local Deliver Agent (LDA) in place of
 the Postfix local/virtual delivery agent.  Using Dovecot LDA eliminates
 the file locking issue.  Thus it also increases throughput as lock
 latency is eliminated.

 Nonsense. deliver and imap are still separate processes accessing the
 same mbox, so they still need to use locks. The only difference is that
 since they are both dovecot programs, they will automatically be using
 the *same* locking strategies, and things will Just Work.

 Nonsense implies what I stated was factually incorrect, which is not
 the case.  There's a difference between factual incorrectness and simply
 staying out of the weeds.
 
 What you stated was factually incorrect.
 
 If you want to get into the weeds, and have me call you out for
 nonsense, LDA/deliver is not a separate UNIX process.  The LDA code
 runs within the imap process for the given user.
 
 Nonsense. dovecot-lda runs in its own process, and does not involve the
 imap process in any way. As such it has to do locking.

You apparently know your tools better than I do.  Neither ps nor top
show a 'dovecot-lda' or similarly named process on my systems.  When I
send a test message from gmail through Postfix I only see CPU or memory
activity in an imap process.  When I close the MUA to end the imap
processes and then send a test message I don't see any CPU or memory
activity in any dovecot processes, only Postfix processes, including
local, and spamd.  So is dovecot-lda running as a sub-process or thread
of Postfix' local process?  Or is it part of the 'dovecot' process, and
the message goes through so quickly that top doesn't show any CPU usage
by the 'dovecot' process?

 If I have the following in my dovecot.conf:
...
snipped for readability
...

 I'm not sure what you mean by 'processes of [one's own] program' but

I.e. Dovecot has its own set of processes, Postfix has its processes,
etc.  With one's own processes I'd think it makes more sense to use
IPC and other tricks to accomplish concurrent access to a file rather
than filesystem locking features.

 it's extremely common for a process to have to take locks against
 another copy of itself. All traditional Unix LDAs and MUAs do this; for
 instance, procmail will take locks in part so that if another instance
 of procmail is delivering another mail to the same user at the same time
 the mbox won't end up corrupted.

I guess I've given MDAs w/mbox too much credit, without actually looking
at the guts.  Scalable databases such as Oracle, DB2, etc., are far more
intelligent about this, and can have many thousands of processes reading
and writing the same file concurrently, usually via O_DIRECT, not
buffered IO, so they have complete control over IO.  This is
accomplished with a record lock manager and IPC, preventing more than
one process from accessing one record concurrently, but allowing massive
read/write concurrency to multiple records in a file.  I'd think the
same concurrency optimization could be done with Dovecot.
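As an illustration of the primitive such record-lock managers build on: POSIX fcntl locks can cover a byte range rather than the whole file, so two processes can hold exclusive locks on different "records" of one file at the same time. A minimal POSIX-only Python sketch (illustrative only; nothing Dovecot actually does with mbox):

```python
import fcntl, os, tempfile

# Two processes take exclusive fcntl locks on *different* byte ranges of
# the same file concurrently; only an overlapping range would conflict.
path = tempfile.mkstemp()[1]
fd = os.open(path, os.O_RDWR)
os.write(fd, b"\0" * 200)

fcntl.lockf(fd, fcntl.LOCK_EX, 100, 0)   # parent locks "record" 0 (bytes 0..99)

pid = os.fork()
if pid == 0:
    # Child: fcntl locks are not inherited across fork(), so it opens its
    # own descriptor and takes its own locks.
    cfd = os.open(path, os.O_RDWR)
    fcntl.lockf(cfd, fcntl.LOCK_EX | fcntl.LOCK_NB, 100, 100)  # bytes 100..199: ok
    try:
        fcntl.lockf(cfd, fcntl.LOCK_EX | fcntl.LOCK_NB, 100, 0)  # overlaps parent
        os._exit(1)   # should not happen
    except OSError:
        os._exit(0)   # expected: record 0 is held by the parent
status = os.waitpid(pid, 0)[1]
```

The child's lock on bytes 100..199 succeeds immediately while the parent holds bytes 0..99; only the overlapping range is refused.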

However, as Timo has pointed out, so few people use mbox these days that
he simply hasn't spent much, if any, time optimizing mbox.  Implementing
some kind of lock manager and client code just for mbox IO concurrency
simply wouldn't be worth the time.  Unless he's already done something
similar with mdbox.  If he has, maybe that could be 'ported' to mbox as
well.  But again, it's probably not worth the effort given the number of
mbox users, and the fact that nobody is complaining about mbox
performance.  I'm certainly not.  It works great here.

-- 
Stan



Re: [Dovecot] Public folders and groups

2012-10-26 Thread Jan Phillip Greimann
I don't know AD well, but... can't you simply add the field? In LDAP it
should be possible; whether MS AD allows it, I don't know.


On 25.10.2012 22:49, b m wrote:

 No, AD doesn't have such a field, but I could use some unused field to
 get what I want. Let's say set Attribute1 to group1. The problem is
 how to get that info. I guess I have to edit dovecot-ldap.conf and put
 in user_attrs something like that ,=acl_groups=Attribute1. Any
 suggestions?



Re: [Dovecot] Locking /var/mail/user issue with postfix and dovecot

2012-10-26 Thread Stan Hoeppner
On 10/25/2012 11:16 PM, Ben Morrow wrote:
 At  1AM +0300 on 26/10/12 you (Robert JR) wrote:
 On 2012-10-26 00:15, Ben Morrow wrote:

 As Stan said earlier, this is a Postfix question. The rule for
 
 [Looking back at the thread it wasn't Stan, it was Dennis Guhl. Sorry
 about that.]

I prodded him a second time, might have been off-list, and he finally
posted there.  So call it a team effort. ;)

Wietse has already replied, and in typical fashion, asked for concrete
evidence that Postfix was performing fcntl before dotlock, because he
obviously knows better than anyone that Postfix applies a dotlock first,
which you already explained here.

 dotlocking is that you must create the .lock *before* opening the
 file, in case whoever has it locked will be replacing the file
 altogether; but with fcntl locking you must acquire the lock *after*
 opening the file, since that's the way the syscall works. This means
 that if Postfix is going to use both forms of lock, it has to
 acquire a dotlock before it can look for a fcntl lock.

 In other words: the methods in mailbox_delivery_lock are *not* tried
 in order, because they can't be. Dotlock is always tried first.
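That required ordering can be sketched in a few lines of Python (an illustration of the rule only, not Postfix's or Dovecot's actual code; the helper names are made up):

```python
import fcntl, os

def lock_mbox(path):
    """Dotlock first, then open the mbox, then take the fcntl lock."""
    lockfile = path + ".lock"
    # The dotlock must exist *before* the mbox is opened: O_CREAT|O_EXCL
    # creates the lock file atomically, failing if another process holds it.
    os.close(os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644))
    fd = -1
    try:
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        # The fcntl lock can only be taken on an already-open descriptor.
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd, lockfile
    except OSError:
        # Got the first lock but not the second: undo and let the caller retry.
        if fd != -1:
            os.close(fd)
        os.unlink(lockfile)
        raise

def unlock_mbox(fd, lockfile):
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
    os.unlink(lockfile)
```

The interesting part is the except branch: acquiring the second lock can fail, and then the first has to be released before retrying.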

 You should have compatible locking settings for all your programs
 accessing your mboxes. If Postfix is using dotlock, Dovecot should be
 using dotlock as well. If you don't have any local programs (mail
 clients, for instance) which require dotlocks, you should probably
 change Postfix to just use fcntl locks.

doc stuff snipped

 but if that is the case you might as well configure everything to just
 use fcntl locks, and forget dotlocks altogether.

Yep.  Postfix can use either or both.  And, surprise, recommends using
maildir to avoid mailbox locking entirely.

 Stan's earlier point is fundamentally correct: if you can treat the
 Dovecot mailstore as a black box, with mail going in through the LDA and
 LMTP and mail coming out through POP and IMAP, your life will be much
 easier. Traditional Unix mailbox locking strategies are *completely*
 insane, and if all you are doing is delivering mail from Postfix and
 reading it from Dovecot it would be better to avoid them altogether, and
 switch to dbox if you can. However, if you have any other programs which
 touch the mail spool (local or NFS mail clients, deliveries through
 procmail) this may not be possible.

And since this is a POP only server, users' MUAs should be deleting
after download, so there shouldn't be much mail in these mbox files at
any given time, making migration to maildir or dbox relatively simple.

When using Dovecot LDA you'll eliminate the filesystem level locking
problems with mbox.  However, you may still have read/write contention
within Dovecot, such as in your example of a 20MB download while new
mail arrives, especially if the new message has an xx MB attachment.  I don't
believe Dovecot is going to start appending a new message while it's
still reading out the existing 20MB of emails.  Depending on how long
this takes Dovecot may still issue a 4xx to Postfix, which will put the
new message in the deferred queue.  With maildir or dbox, reading
existing mail and writing new messages occurs concurrently, as each
message is a different file.

-- 
Stan



Re: [Dovecot] Locking /var/mail/user issue with postfix and dovecot

2012-10-26 Thread Ben Morrow
At  1AM -0500 on 26/10/12 you (Stan Hoeppner) wrote:
 On 10/25/2012 10:54 PM, Ben Morrow wrote:
  
  dovecot-lda runs in its own process, and does not involve the
  imap process in any way. As such it has to do locking.
 
 You apparently know your tools better than I do.  Neither ps nor top
 show a 'dovecot-lda' or similarly named process on my systems.  When I
 send a test message from gmail through Postfix I only see CPU or memory
 activity in an imap process.  When I close the MUA to end the imap
 processes and then send a test message I don't see any CPU or memory
 activity in any dovecot processes, only Postfix processes, including
 local, and spamd.  So is dovecot-lda running as a sub-process or thread
 of Postfix' local process?  Or is it part of the 'dovecot' process, and
 the message goes through so quickly that top doesn't show any CPU usage
 by the 'dovecot' process?

Assuming you have 

mailbox_command = /.../dovecot-lda -a ${RECIPIENT}

or something equivalent in your Postfix configuration, dovecot-lda runs
as a subprocess of local(8) under the uid of the delivered-to user.

  If I have the following in my dovecot.conf:
 ...
 snipped for readability
 ...
 
  I'm not sure what you mean by 'processes of [one's own] program' but
 
 I.e. Dovecot has its own set of processes, Postfix has its processes,
 etc.  With one's own processes I'd think it makes more sense to use
 IPC and other tricks to accomplish concurrent access to a file rather
 than filesystem locking features.

Filesystem locking, at least if NFS is not involved, is not that
expensive. Successfully acquiring a flock or fcntl lock takes only a
single syscall which doesn't have to touch the disk, and any form of IPC
is going to need to do that. (Even something like a shared memory region
will need a mutex for synchronisation, and acquiring the mutex has to go
through the kernel.)

Dotlocking *is* expensive, because acquiring a dotlock is a complicated
process requiring lots of syscalls, some of which have to write to disk;
and any scheme involving acquiring several locks on the same file is
going to be more so, especially if you can end up getting the first lock
but finding you can't get the second, so then you have to undo the first
and try again.
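For a sense of that cost, the traditional NFS-safe way of taking a dotlock goes roughly like this (a sketch of the classic algorithm, not any particular MDA's code): five or so syscalls per acquisition, several of which force directory metadata to disk.

```python
import os, socket

def acquire_dotlock(mbox_path):
    """Classic NFS-safe dotlock: create a uniquely named file, link() it to
    .lock, then verify via the link count that the link really succeeded."""
    lockfile = mbox_path + ".lock"
    unique = "%s.%d.%s" % (lockfile, os.getpid(), socket.gethostname())
    fd = os.open(unique, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    os.close(fd)
    try:
        try:
            os.link(unique, lockfile)      # atomic even over NFS
        except OSError:
            pass                           # someone may hold the lock...
        # ...so check the link count instead of trusting link()'s result.
        if os.stat(unique).st_nlink != 2:
            raise OSError("dotlock busy: " + lockfile)
    finally:
        os.unlink(unique)                  # the unique name is no longer needed
    return lockfile
```

Compare that with the single syscall an fcntl lock needs; and a real implementation additionally has to detect and break stale lock files left behind by crashed processes.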

More importantly, the biggest problem with mbox as a mailbox format is
that any access at all has to lock the whole mailbox. If the LDA is
trying to deliver a new message at the same time as an IMAP user is
fetching a completely different message, or if two instances of the LDA
are trying to deliver at the same time, they will be competing for the
same lock even though they don't really need to be. A file-per-message
format like Maildir avoids this, to the point of being mostly lockless,
but that brings its own efficiency problems; the point of dbox is to
find the compromise between these positions that works best.
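The file-per-message idea can be sketched briefly (a simplified illustration of maildir-style delivery, not Dovecot's code): each message is written under a unique name in tmp/ and then atomically rename()d into new/, so concurrent deliveries and readers never need a mailbox-wide lock.

```python
import itertools, os, socket, time

_seq = itertools.count()

def maildir_deliver(maildir, message_bytes):
    """Deliver one message maildir-style: unique file in tmp/, rename to new/."""
    # Timestamp + pid + per-process sequence + hostname make the name unique
    # across every concurrently delivering process.
    name = "%d.P%d_%d.%s" % (time.time(), os.getpid(), next(_seq),
                             socket.gethostname())
    tmp = os.path.join(maildir, "tmp", name)
    with open(tmp, "wb") as f:
        f.write(message_bytes)
        f.flush()
        os.fsync(f.fileno())      # data safely on disk before it becomes visible
    new = os.path.join(maildir, "new", name)
    os.rename(tmp, new)           # atomic: readers see the whole message or nothing
    return new
```

Filename uniqueness replaces locking for writers; the remaining cost is one file (and its metadata) per message, which is exactly the efficiency trade-off mentioned above.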

  it's extremely common for a process to have to take locks against
  another copy of itself. All traditional Unix LDAs and MUAs do this; for
  instance, procmail will take locks in part so that if another instance
  of procmail is delivering another mail to the same user at the same time
  the mbox won't end up corrupted.
 
 I guess I've given MDAs w/mbox too much credit, without actually looking
 at the guts.

I wouldn't look too hard at the details of the various ways there are of
locking and parsing mbox files, or the ways in which they can go wrong.
It's enough to make anyone swear off email for life :).

 Scalable databases such as Oracle, DB2, etc., are far more
 intelligent about this, and can have many thousands of processes reading
 and writing the same file concurrently, usually via O_DIRECT, not
 buffered IO, so they have complete control over IO.  This is
 accomplished with a record lock manager and IPC, preventing more than
 one process from accessing one record concurrently, but allowing massive
 read/write concurrency to multiple records in a file.  I'd think the
 same concurrency optimization could be done with Dovecot.
 
 However, as Timo has pointed out, so few people use mbox these days that
 he simply hasn't spent much, if any, time optimizing mbox.  Implementing
 some kind of lock manager and client code just for mbox IO concurrency
 simply wouldn't be worth the time.  Unless he's already done something
 similar with mdbox.  If he has, maybe that could be 'ported' to mbox as
 well.  But again, it's probably not worth the effort given the number of
 mbox users, and the fact that nobody is complaining about mbox
 performance.  I'm certainly not.  It works great here.

The only reason for using mbox is for compatibility with other systems
which use mbox, which means you have to do the locking the same way as
they do (assuming you can work out what that is). If you're going to
change the locking rules you might as well change the file format at the
same time, both to remove the insanity and to make it actually suitable
for use as an IMAP mailstore. That's 

Re: [Dovecot] Small issue with submission host

2012-10-26 Thread Raphael Ordinas

Hi,

Here's the doveconf -n output:

   # doveconf -n
   # 2.0.14: /usr/local/etc/dovecot/dovecot.conf
   # OS: FreeBSD 8.1-RELEASE-p5 amd64
   auth_mechanisms = plain login
   auth_username_format = %Lu
   auth_worker_max_count = 90
   default_process_limit = 1024
   first_valid_gid = 1500
   first_valid_uid = 1500
   hostname = mailhost.mydomain.tld
   last_valid_gid = 1500
   last_valid_uid = 1500
   lda_mailbox_autocreate = yes
   lda_mailbox_autosubscribe = yes
   listen = *
   mail_gid = 1500
   mail_location = maildir:~/Maildir
   mail_plugins = acl quota mail_log notify
   mail_privileged_group = mail
   mail_uid = 1500
   managesieve_notify_capability = mailto
   managesieve_sieve_capability = fileinto reject envelope
   encoded-character vacation subaddress comparator-i;ascii-numeric
   relational regex imap4flags copy include variables body enotify
   environment mailbox date
   passdb {
  args = /usr/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
   }
   plugin {
  acl = vfile:/usr/local/etc/dovecot-acls:cache_secs=300
  autocreate = Sent
  autocreate1 = Trash
  autocreate2 = Drafts
  autocreate3 = Spam
  autocreate4 = Faux-positif
  autosubscribe = Sent
  autosubscribe1 = Trash
  autosubscribe2 = Drafts
  autosubscribe3 = Spam
  autosubscribe4 = Faux-positif
  autosubscribe5 = INBOX
  mail_log_events = delete undelete expunge copy mailbox_delete
   mailbox_rename
  mail_log_fields = uid box msgid size
  quota = maildir:User quota
  quota_rule = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95
  quota_warning2 = storage=80%% quota-warning 80
  sieve = ~/.dovecot.sieve
  sieve_before = /usr/local/lib/dovecot/sieve/backup-all.sieve
  sieve_dir = ~/sieve
   }
   postmaster_address = postmas...@mydomain.tld
   protocols = imap lmtp sieve
   quota_full_tempfail = yes
   service anvil {
  client_limit = 3500
   }
   service auth-worker {
  user = $default_internal_user
   }
   service auth {
  client_limit = 5500
  unix_listener auth-master {
group = vmail
mode = 0660
user = vmail
  }
  user = doveauth
   }
   service imap-login {
  inet_listener imap {
port = 143
  }
  inet_listener imaps {
port = 993
ssl = yes
  }
   }
   service lmtp {
  inet_listener lmtp {
address = 172.0.0.1
port = 2525
  }
   }
   service managesieve-login {
  inet_listener sieve_deprecated {
port = 2000
  }
  process_limit = 1024
   }
   service managesieve {
  process_limit = 1024
   }
   service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
user = vmail
  }
  user = dovecot
   }
   shutdown_clients = no
   ssl = required
   ssl_ca = /etc/ssl/certs/dovecot.pem
   ssl_cert = /etc/ssl/certs/dovecot.pem
   ssl_key = /etc/ssl/private/dovecot.pem
   submission_host = smtp.mydomain.tld
   userdb {
  args = /usr/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
   }
   verbose_proctitle = yes
   protocol lmtp {
  mail_plugins = acl quota mail_log notify sieve
   }
   protocol lda {
  mail_plugins = acl quota mail_log notify
   }
   protocol imap {
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
  mail_max_userip_connections = 20
  mail_plugins = acl quota mail_log notify imap_quota imap_acl
   autocreate
   }
   protocol sieve {
  managesieve_implementation_string = Dovecot Pigeonhole
  ssl = required
   }

Regards,

Raphael

On 25/10/2012 16:08, Thomas Leuxner wrote:

On Thu, Oct 25, 2012 at 03:09:47PM +0200, Raphael Ordinas wrote:

When sending mail to the MTA (in the case of sieve filter forwarding,
for example), Dovecot sends a RCPT TO command just after the EHLO; the
MAIL FROM command is missing.
Therefore, my MTA shows me a warning like this: improper command
pipelining after EHLO.

Works for me with latest and greatest although I'm not using the
'submission_host' option but pure LMTP Unix socket:

[...]
service lmtp {
   unix_listener /var/spool/postfix/private/dovecot-lmtp {
 group = postfix
 mode = 0660
 user = postfix
   }
}

Best to show your 'doveconf -n' for more thoughts.

Regards
Thomas


Re: [Dovecot] Public folders and groups

2012-10-26 Thread Ben Morrow
At  1PM -0700 on 25/10/12 b m wrote:
  From: Jan Phillip Greimann j...@softjury.de
 On 25.10.2012 00:13, b m wrote:

  Currently I have dovecot working with Active Directory
  authentication and public folders with acl. In acl I have the users
  I want to access the public folders. It'll be easier for me to use
  one group instead of 50 users but I can't get it to work. From where
  does dovecot get the group attribute for a user? Can it read the
  groups that a user belongs from AD?

 ACL groups support works by returning a comma-separated acl_groups
 extra field from userdb, which contains all the groups the user
 belongs to.
 
 It seems to be possible; I had an acl_groups field in my MySQL
 database for this. I'm sure there is something like that in an AD too.

 No, AD doesn't have such a field, but I could use some unused field to
 get what I want. Let's say set Attribute1 to group1. The problem
 is how to get that info. I guess I have to edit dovecot-ldap.conf and
 put in user_attrs something like that ,=acl_groups=Attribute1. Any
 suggestions?

That's the wrong way around. Assuming you created an 'imapGroups'
attribute containing a comma-separated list of IMAP groups, you would
want to add 'imapGroups=acl_groups' to user_attrs.
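In dovecot-ldap.conf.ext that would look something like the line below ('imapGroups' being the hypothetical attribute you would create in AD; adjust the rest of user_attrs to whatever your setup already maps):

```
user_attrs = homeDirectory=home, imapGroups=acl_groups
```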

Alternatively, if you don't want to duplicate the information in the
LDAP directory, you can use post-login scripting to set up the groups
list (see http://wiki2.dovecot.org/PostLoginScripting). If you have your
system set up with nss_ldap or winbind so that AD users show up as
system users with their proper groups, the example on the wiki using the
'groups' command will work. Otherwise, you can pull the information
directly from LDAP, something like

#!/bin/sh

do_ldap () {
    /usr/local/bin/ldapsearch -h PDC \
        "(&(objectClass=$1)($2))" "$3" \
        | sed -ne "s/^$3: //p"
}

user_dn=$(do_ldap User "sAMAccountName=$USER" dn)
ACL_GROUPS=$(do_ldap Group "member=$user_dn" cn | paste -sd, -)

export ACL_GROUPS
export USERDB_KEYS="$USERDB_KEYS acl_groups"
exec "$@"

Obviously you will need to adjust the path and connection parameters for
ldapsearch to suit your environment; also, I don't use AD, so you may
need to adjust the LDAP search. (If you prefer it might be easier to do
this in Perl or Python or something rather than shell.)

Ben



Re: [Dovecot] Small issue with submission host

2012-10-26 Thread Thomas Leuxner
On Fri, Oct 26, 2012 at 10:51:52AM +0200, Raphael Ordinas wrote:

service lmtp {
   inet_listener lmtp {
 address = 172.0.0.1
 port = 2525
   }
}

Right, so you are using network sockets with LMTP. Probably does not
answer the question why it is not working with the 'submission_host',
but is there a reason why the redirects are not reinjected this way?

submission_host = smtp.mydomain.tld

Regards
Thomas




Re: [Dovecot] Shared folders not shown if INBOX.shared.%.% is used with dovecot 2.1.10

2012-10-26 Thread Christoph Bußenius

Hi,

On 22.10.2012 16:33, Christoph Bußenius wrote:

. list  INBOX.shared.%.%

Dovecot 2.1.10 does not list any folders in response to this command.


I hope this helps: I bisected this bug and found it was introduced with 
this changeset:


http://hg.dovecot.org/dovecot-2.1/rev/a41f64348d0d

changeset:   14453:a41f64348d0d
user:Timo Sirainen t...@iki.fi
date:Fri Apr 20 15:18:14 2012 +0300
files:   src/lib-storage/list/mailbox-list-fs-iter.c
description:
layout=fs: Don't assume '/' hierarchy separator when finding mailbox roots.

Cheers,
Christoph

--
Christoph Bußenius
Rechnerbetriebsgruppe der Fakultäten Informatik und Mathematik
Technische Universität München
+49 89-289-18519  Raum 00.05.040  Boltzmannstr. 3  Garching


Re: [Dovecot] Small issue with submission host

2012-10-26 Thread Thomas Leuxner
On Fri, Oct 26, 2012 at 11:00:12AM +0200, Thomas Leuxner wrote:
submission_host = smtp.mydomain.tld

On second thought, above probably overrides this:

# doveconf -a | grep sendmail
sendmail_path = /usr/sbin/sendmail

...which may be the culprit.

Regards
Thomas




Re: [Dovecot] Small issue with submission host

2012-10-26 Thread Raphael Ordinas

Actually, the LMTP inet listener is only used for delivery purposes.
I separated the MTA and the MDA onto distinct hosts.
Incoming mails are received by the MTA, which performs some checks
(anti-virus, spam, and aliases) and transports them to the MDA via LMTP.


Maybe I misunderstood something, but I don't see why LMTP would be
involved in a sieve forwarding process (or things like non-delivery mail
returns).

According to the comments in the 15-lda.conf file:

   # Binary to use for sending mails.
   #sendmail_path = /usr/sbin/sendmail

   # If non-empty, send mails via this SMTP host[:port] instead of
   sendmail.
   submission_host = smtp.mydomain.tld

If you don't use the 'submission_host' option, Dovecot will forward
mail with the '/usr/sbin/sendmail' binary, which uses the forwarders you
tell it to use. Am I right?


Regards,

Raphael

On 26/10/2012 11:00, Thomas Leuxner wrote:

On Fri, Oct 26, 2012 at 10:51:52AM +0200, Raphael Ordinas wrote:


service lmtp {
   inet_listener lmtp {
 address = 172.0.0.1
 port = 2525
   }
}

Right, so you are using network sockets with LMTP. Probably does not
answer the question why it is not working with the 'submission_host',
but is there a reason why the redirects are not reinjected this way?


submission_host = smtp.mydomain.tld

Regards
Thomas


Re: [Dovecot] Shared folders not shown if INBOX.shared.%.% is used with dovecot 2.1.10

2012-10-26 Thread Timo Sirainen
On 26.10.2012, at 12.17, Christoph Bußenius wrote:

 On 22.10.2012 16:33, Christoph Bußenius wrote:
 . list  INBOX.shared.%.%
 
 Dovecot 2.1.10 does not list any folders in response to this command.
 
 I hope this helps: I bisected this bug and found it was introduced with this 
 changeset:
 
 http://hg.dovecot.org/dovecot-2.1/rev/a41f64348d0d

I couldn't reproduce this exactly and I don't see how a41f64348d0d makes any 
difference .. but I did find another way to reproduce at least a similar bug. 
Maybe this fixes your problem too? 
http://hg.dovecot.org/dovecot-2.1/rev/22875bcaa952



[Dovecot] Dovecot stops to work - anvil problem

2012-10-26 Thread FABIO FERRARI
Hi all,

we have a problem with anvil: it seems that under high load Dovecot
stops working. Sometimes a dovecot reload is sufficient, but sometimes
we have to restart it.

These are the lines related to anvil in the dovecot.log:

[root@secchia ~]# grep anvil /var/log/dovecot.log | more
Oct 26 11:13:55 anvil: Error: net_accept() failed: Too many open files
Oct 26 11:14:32 imap-login: Error: net_connect_unix(anvil) failed:
Resource temporarily unavailable
Oct 26 11:14:32 imap-login: Fatal: Couldn't connect to anvil
Oct 26 11:14:33 pop3-login: Error: net_connect_unix(anvil) failed:
Resource temporarily unavailable
Oct 26 11:14:33 pop3-login: Fatal: Couldn't connect to anvil
[...] (many lines like these)
Oct 26 12:01:10 pop3-login: Fatal: Couldn't connect to anvil
Oct 26 12:01:18 auth: Error: read(anvil-auth-penalty) failed: Connection
reset by peer
Oct 26 12:01:18 auth: Error: read(anvil-auth-penalty) failed: Connection
reset by peer
Oct 26 12:01:18 auth: Error: net_connect_unix(anvil-auth-penalty) failed:
Connection refused
Oct 26 12:01:18 auth: Error: net_connect_unix(anvil-auth-penalty) failed:
Connection refused
Oct 26 12:01:18 auth: Error: read(anvil-auth-penalty) failed: Connection
reset by peer
Oct 26 12:01:18 auth: Error: net_connect_unix(anvil-auth-penalty) failed:
Connection refused

And this is the output of the doveconf -n:

[root@secchia ~]# doveconf -n
# 2.0.1: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-308.11.1.el5 x86_64 Red Hat Enterprise Linux Server
release 5.8 (Tikanga) xfs
auth_cache_size = 1024
auth_cache_ttl = 21600 s
auth_debug = yes
auth_debug_passwords = yes
auth_master_user_separator = *
auth_mechanisms = plain login
auth_socket_path = /var/run/dovecot/auth-userdb
auth_verbose = yes
base_dir = /var/run/dovecot/
disable_plaintext_auth = no
hostname = mail.unimore.it
info_log_path = /var/log/dovecot.log
lda_mailbox_autocreate = yes
log_path = /var/log/dovecot.log
mail_debug = yes
mail_location = maildir:/cl/mail/vhosts/sms.unimo.it/%Ln/Maildir
mail_plugins = $mail_plugins quota
mailbox_idle_check_interval = 60 s
mbox_write_locks = fcntl
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /usr/local/etc/dovecot.masterusers
  driver = passwd-file
  master = yes
}
passdb {
  args = dovecot
  driver = pam
}
plugin {
  quota = maildir:User quota
  quota_exceeded_message = Quota exceeded (mailbox is full)
  quota_rule = *:storage=200MB
  quota_rule2 = *:messages=10
  quota_rule3 = INBOX.Trash:storage=+100M
  quota_warning = storage=90%% quota-warning 90 %u
  quota_warning2 = storage=85%% quota-warning 85 %u
  quota_warning3 = messages=95%% quota-warning 95 %u
  quota_warning4 = messages=80%% quota-warning 80 %u
  setting_name = quota
}
postmaster_address = postmas...@unimore.it
quota_full_tempfail = yes
service anvil {
  client_limit = 19
  process_limit = 19
}
service auth {
  client_limit = 14500
  unix_listener auth-userdb {
mode = 0600
user = vmail
  }
}
service imap-login {
  inet_listener imap {
port = 143
  }
  inet_listener imaps {
port = 993
  }
  process_limit = 5000
}
service imap {
  process_limit = 5000
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  inet_listener pop3s {
port = 995
  }
}
service pop3 {
  process_limit = 1024
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
group = vmail
user = vmail
  }
  user = dovecot
}
ssl_ca = /etc/pki/tls/certs/ca_unimore_tcs.pem
ssl_cert = /etc/pki/tls/certs/cert-852-mail.unimore.it-cluster.pem
ssl_key = /etc/pki/tls/certs/mailcluster.key
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
verbose_ssl = yes
protocol lda {
  mail_plugins = $mail_plugins quota
}
protocol imap {
  mail_plugins = $mail_plugins imap_quota
}
protocol pop3 {
  mail_plugins = $mail_plugins quota
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = %08Xu%08Xv
}
[root@secchia ~]#

And these are the limit settings in the OS:
*   softnofile  131072
*   hardnofile  131072

Has anyone had the same problem?

thanks in advance

Fabio Ferrari



Re: [Dovecot] Shared folders not shown if INBOX.shared.%.% is used with dovecot 2.1.10

2012-10-26 Thread Christoph Bußenius

Hello Timo,

On 26.10.2012 12:07, Timo Sirainen wrote:

but I did find another way to reproduce at least a similar bug. Maybe this 
fixes your problem too? http://hg.dovecot.org/dovecot-2.1/rev/22875bcaa952


That does fix the problem, thank you!

Cheers,
Christoph

--
Christoph Bußenius
Rechnerbetriebsgruppe der Fakultäten Informatik und Mathematik
Technische Universität München
+49 89-289-18519  Raum 00.05.040  Boltzmannstr. 3  Garching


[Dovecot] dovecot-lda delivery to Maildir/cur as 'seen'?

2012-10-26 Thread Dale Gallagher
Hi

I've added a server-side feature where authenticated customers sending
through our SMTP server have their outbound mail copied to their Sent
folder (like Gmail does). The delivery script called by qmail calls
dovecot-lda to deliver it to the user's Sent folder.

The problem now is that the Sent folder shows the mail as unread,
which MUAs flag (and notify, in the case of some). I've searched the
docs and mailing list, but can't find an option to tell dovecot-lda to
mark the mail being delivered, as seen/read. If I've missed something,
please let me know. If not, then I think it might be a good idea to
add a feature to dovecot-lda permitting one to specify delivery to the
./cur subfolder of a Maildir, instead of ./new.

Thanks


[Dovecot] dovecot lda - Permission denied

2012-10-26 Thread tony . blue . mailinglist
Hallo,

Please excuse my bad English; I am not a native speaker.

I switched from Cyrus to Dovecot (altogether: fetchmail - procmail - exim4 - 
dovecot).

But I get (I think from /usr/lib/dovecot/deliver) the following error message 
in my syslog:

...
Oct 25 23:37:13 gustav dovecot: lda: Error: userdb lookup: 
connect(/var/run/dovecot/auth-userdb) failed: Permission denied (euid=501(andy) 
egid=100(users) missing +w perm: /var/run/dovecot/auth-userdb, dir owned by 0:0 
mode=0755)
...
Oct 25 23:37:14 gustav dovecot: lda: Error: userdb lookup: 
connect(/var/run/dovecot/auth-userdb) failed: Permission denied (euid=500(tony) 
egid=100(users) missing +w perm: /var/run/dovecot/auth-userdb, dir owned by 0:0 
mode=0755)
...

Dovecot is configured with !include auth-passwdfile.conf.ext. For each user 
there is an entry in /etc/dovecot/users.

Usually the permissions are set to 600. I tried 755, but I get the same 
error message.

...
service auth {

  unix_listener auth-userdb {
mode = 0755
user = mailstore
group = mailstore
  }
...

If I run 'ls -la /var/run/dovecot/auth-userdb' I get:

srwxr-xr-x 1 mailstore mailstore 0 Okt 25 23:36 /var/run/dovecot/auth-userdb

How can I solve this problem?

Tony 


Re: [Dovecot] Rebuilding indexes fails on inconsistent mdbox

2012-10-26 Thread Charles Marcus

On 2012-10-24 11:48 PM, Stan Hoeppner s...@hardwarefreak.com wrote:

Changing the process priority would not help.  Indexing a large mailbox
is an IO bound, not a compute bound, operation.  With Linux, changing
from the CFQ to deadline scheduler may help some with low
responsiveness.  But the only real solution for such a case where iowait
is bringing the system to its knees is to acquire storage with far
greater IOPS and concurrent IO capability.  I.e. a server.


Ok, I get it, thanks for elaborating Stan...

--

Best regards,

Charles



Re: [Dovecot] dovecot-lda delivery to Maildir/cur as 'seen'?

2012-10-26 Thread Dennis Guhl
On Fri, Oct 26, 2012 at 01:27:00PM +0200, Dale Gallagher wrote:
 Hi

[..]

 The problem now is that the Sent folder shows the mail as unread,
 which MUAs flag (and notify, in the case of some). I've searched the

Use Sieve [1] with Imap4flags (RFC 5232)  to mark the email as read.
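Along those lines, a minimal Sieve script (assuming the Pigeonhole plugin; whether you hook it into the user's active script or a sieve_before rule, and how you restrict it to the Sent copies, depends on your setup) could look like:

```
require ["imap4flags"];

# Flag the message being delivered as already read.
setflag "\\Seen";
```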

Dennis

[1] http://wiki2.dovecot.org/Pigeonhole/Sieve

[..]


Re: [Dovecot] Rebuilding indexes fails on inconsistent mdbox

2012-10-26 Thread Milan Holzäpfel
On Wed, 24 Oct 2012 13:43:19 +0200
Robert Schetterer r...@sys4.de wrote:

 On 24.10.2012 13:28, Milan Holzäpfel wrote:
  The whole mdbox is 6.6 GiB large and I guess that it contains about
  300k-600k messages. It's an archive of public mailing lists, so I could
  give access to the files. 
  
  Can anybody say something about this? May the mdbox be repaired? 
 
 perhaps this helps
 
 http://wiki2.dovecot.org/Tools/Doveadm/ForceResync
 
 however upgrading to dovecot latest might be a good idea

I tried this command, but all it will do is the rebuilding indexes
thing that Dovecot's deliver and imapd will also do. (As I mentioned,
this fails.) I haven't tried a more recent version of Dovecot so far. 

Regards,
Milan Holzäpfel



-- 
Milan Holzäpfel lis...@mjh.name


Re: [Dovecot] Rebuilding indexes fails on inconsistent mdbox

2012-10-26 Thread Milan Holzäpfel
On Wed, 24 Oct 2012 09:01:24 -0500
Stan Hoeppner s...@hardwarefreak.com wrote:

 On 10/24/2012 6:28 AM, Milan Holzäpfel wrote:
 
  I have a problem with an inconsistent mdbox: 
 ...
  four hours after the problem initially appeared, I did a hard reset of
  the system because it was unresponsive.
 ...
  Can anybody say something about this? May the mdbox be repaired? 
 
 If the box is truly unresponsive, i.e. hard locked, then the corrupted
 indexes are only a symptom of the underlying problem, which is unrelated
 to Dovecot, UNLESS, the lack of responsiveness was due to massive disk
 access, which will occur when rebuilding indexes on a 6.6GB mailbox.
 You need to know the difference so we have accurate information to
 troubleshoot with.

Thanks for your suggestion. I wasn't looking for a solution for the
unresponsiveness, but I failed to make that clear. 

I was not patient enough to debug the unresponsiveness issue. The box
was not hard locked, but any command took very long, if it completed at
all. I think that it could be massive swapping, but I wouldn't expect
Dovecot to be the cause. 

After the reboot, Dovecot would happily re-execute the failing index
rebuild on each new incoming message, which suggests that Dovecot
wasn't the cause for the unresponsiveness. 

 If there's a kernel or hardware problem, you should see related
 errors in dmesg.  Please share those.

The kernel had messages like

INFO: task cron:2799 blocked for more than 120 seconds.

in the dmesg. But again, I didn't mean to ask for a solution to this
problem. 

Regards,
Milan Holzäpfel


-- 
Milan Holzäpfel lis...@mjh.name


Re: [Dovecot] Rebuilding indexes fails on inconsistent mdbox

2012-10-26 Thread Milan Holzäpfel
On Wed, 24 Oct 2012 13:28:11 +0200
Milan Holzäpfel lis...@mjh.name wrote:

 I have a problem with an inconsistent mdbox: 
 [...]
 The problem appeared out of nowhere. [...]

That's just wrong. Two minutes before the corruption occurred for
the first time, the machine was booted after a power-off without a prior
shutdown. I didn't notice this until now; sorry about that. 

The mailbox is on XFS. As far as I remember, XFS is known for leaving
NULL bytes at the end of files after a system reset. At least, I found
72 bytes of NULL in a plain text log file on XFS after such an event.
Do you think this may be the source of the index corruption? 

Do you have any other suggestions for recovering the mailbox? 

Regards,
Milan Holzäpfel



-- 
Milan Holzäpfel lis...@mjh.name


Re: [Dovecot] Changing password for users

2012-10-26 Thread Mike John

On 2012-10-26 01:17, Mike John wrote:

Hello, I am using dovecot (2.0.9) with virtual users, using passdb
{ args = /etc/dovecot/dovecotpasswd driver = passwd-file }. How can I
make my virtual users change their passwords using a web interface? My
users already use squirrelmail to access their mail. Is there a program
to add to squirrelmail that adds this function for the clients? Or
should I use a separate website for password changing, and what
program/tool can help me with this? Any ideas are greatly appreciated.
Mike.

Mike,


I don't know about forcing users to change their passwords, however with
Squirrelmail there are several password change plugins available that
use poppassd to actually change the password. Of course poppassd will
probably need to be modified to go against your password database; in my
case it simply uses PAM. The version I use is 1.8.5. Oh, you probably
want to restrict access to the port from the local host only, since
passwords are transmitted in clear text.

Jeff

I know about poppassd, but it works only for /etc/passwd and
/etc/shadow; my dovecot virtual users' password files are in a different
location and I do not know how to modify poppassd. Any idea how I can do
that? And is there another way other than poppassd?

I have googled everywhere and cannot find how to modify poppassd to
change virtual users' passwords at /etc/dovecot/passwords. Is there any
other way? I am sure that someone on this mailing list has virtual users
and uses a modified poppassd or another utility so that their clients
can change their password.


[Dovecot] public mailbox not showing up in web client

2012-10-26 Thread David Mehler
Hello,

I'm trying to set up a public mailbox where users can receive
notifications out of. I'm not getting any errors from Dovecot 2.1, but
nothing is showing up in my user's web clients. In each
/home/vmail/public/mailbox folder right now I just have one called
testbox I have a dovecot-acl file with:

user=testuser1 lr
user=user1 lr

etc.

I'd appreciate any suggestions.

Thanks.
Dave.

# 2.1.10: /etc/dovecot/dovecot.conf
# OS: Linux
dict {
  quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}
first_valid_gid = 5000
first_valid_uid = 5000
hostname = xxx
last_valid_gid = 5000
last_valid_uid = 5000
log_path = /var/log/dovecot.error
mail_gid = vmail
mail_home = /home/vmail/%d/%n/home
mail_location = maildir:/home/vmail/%d/%n:LAYOUT=fs
mail_plugins =  acl quota zlib
mail_uid = vmail
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
namespace {
  list = yes
  location = maildir:/home/vmail/public:LAYOUT=fs
  prefix = Public/
  separator = /
  subscriptions = yes
  type = public
}
namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  location =
  prefix =
  separator = /
  subscriptions = yes
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile
  autocreate = Spam
  autosubscribe = Spam
  quota = dict:User quota::proxy::quota
  quota_rule = *:storage=1G
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
}
postmaster_address = postmaster@xxx
protocols = imap
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = postfix
  }
  unix_listener auth-userdb {
mode = 0600
user = vmail
  }
}
service dict {
  unix_listener dict {
mode = 0600
user = vmail
  }
}
service imap-login {
  inet_listener imap {
address = 127.0.0.1 ::1
  }
  inet_listener imaps {
address = xxx xxx
ssl = yes
  }
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  user = vmail
}
ssl_cert = /etc/ssl/certs/server.crt
ssl_key = /etc/ssl/private/server.key
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocol lda {
  mail_plugins =  acl quota zlib
}
protocol imap {
  mail_plugins =  acl quota zlib autocreate imap_acl imap_quota imap_zlib
}


Re: [Dovecot] Changing password for users

2012-10-26 Thread Tom Hendrikx
On 26-10-12 20:47, Mike John wrote:
 On 2012-10-26 01:17, Mike John wrote:
 
 Hello, I am using dovecot (2.0.9) with virtual users, using passdb
 { args = /etc/dovecot/dovecotpasswd driver = passwd-file }. How can I
 make my virtual users change their passwords using a web interface? My
 users already use squirrelmail to access their mail. Is there a program
 to add to squirrelmail that adds this function for the clients? Or
 should I use a separate website for password changing, and what
 program/tool can help me with this? Any ideas are greatly appreciated.
 Mike.

 Mike,

 I don't know about forcing users to change their passwords, however
 with Squirrelmail there are several password change plugins available
 that use poppassd to actually change the password. Of course poppassd
 will probably need to be modified to go against your password database;
 in my case it simply uses PAM. The version I use is 1.8.5. Oh, you
 probably want to restrict access to the port from the local host only,
 since passwords are transmitted in clear text.

 Jeff

 I know about poppassd, but it works only for /etc/passwd and
 /etc/shadow; my dovecot virtual users' password files are in a
 different location and I do not know how to modify poppassd. Any idea
 how I can do that? And is there another way other than poppassd?

 I have googled everywhere and cannot find how to modify poppassd to
 change virtual users' passwords at /etc/dovecot/passwords. Is there any
 other way? I am sure that someone on this mailing list has virtual
 users and uses a modified poppassd or another utility so that their
 clients can change their password.

Using a database for managing virtual users seems overkill, until you
run into issues like this.

I have a postgres backend for 20ish users, and I can plug in everything
I want. Postfixadmin works great, and there are many password plugins
for squirrelmail/roundcube/etc. that work with such a database.

Disclaimer: I tried the file-based approach too, but kept building
kludges for things that were a lot simpler with a database. In the end,
I joined the dark side.

--
Tom


Re: [Dovecot] Changing password for users

2012-10-26 Thread Joseph Tam



From: Mike John m...@alaadin.org


I know about poppassd , but it works only for /etc/passwd ,
/etc/shadow, but my dovecot virtual users password files
are in different location and i do not know how to modify poppassd,
any idea how can i do that?


I downloaded and examined it; it's just a wrapper for /usr/bin/passwd,
and there doesn't seem an easy way to modify it to use something other
than the system password file.

Maybe replace /usr/bin/passwd with htpasswd?


and is there another way other than poppassd?


Write your own PHP script -- it couldn't be more than a few dozen lines
of code for a working skeleton.  Or Google php change password htpasswd.

Joseph Tam jtam.h...@gmail.com


Re: [Dovecot] Changing password for users

2012-10-26 Thread Ben Morrow
At  3PM -0700 on 26/10/12 you (Joseph Tam) wrote:
 
  From: Mike John m...@alaadin.org
 
  I know about poppassd , but it works only for /etc/passwd ,
  /etc/shadow, but my dovecot virtual users password files
  are in different location and i do not know how to modify poppassd,
  any idea how can i do that?
 
 I downloaded and examined it; it's just a wrapper for /usr/bin/passwd,
 and there doesn't seem an easy way to modify it to use something other
 than the system password file.
 
 Maybe replace /usr/bin/passwd with htpasswd?

Try pam_pwdfile with poppwd or some other poppassd that supports PAM.
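A sketch of the stack Ben suggests, assuming Linux-PAM with the third-party pam_pwdfile module; the service name and file path are hypothetical, pam_pwdfile covers the auth side, and the chosen poppassd variant would need its own password-changing support:

```
# /etc/pam.d/poppassd  (hypothetical PAM service file)
auth     required  pam_pwdfile.so  pwdfile=/etc/dovecot/passwords
account  required  pam_permit.so
```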

  and is there another way other than poppassd?
 
 Write your own PHP script -- it couldn't be more than a few dozen lines
 of code for a working skeleton.  Or Google php change password htpasswd.

It's not as simple as you seem to think. Quite apart from getting the
password-changing itself right (have you considered what happens when
two users change their passwords at the same time? when Dovecot tries to
read the password file at the same time as you are changing it? when the
system crashes when you are halfway through rewriting the password
file?), you really shouldn't be running PHP as a user with write access
to a password file (even a virtual password file) in any case.
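The races Ben lists are exactly what the classic lock-plus-rename pattern exists to avoid; a sketch against a scratch file (the real path, and the PHP privilege question, are separate issues):

```shell
# Sketch: update one user's entry in a passwd-file safely.
# A scratch file stands in for /etc/dovecot/passwords.
PWFILE=$(mktemp)
printf 'alice:{PLAIN}old\nbob:{PLAIN}pw\n' > "$PWFILE"

(
  flock -x 9                    # serialise concurrent password changes
  sed 's/^alice:.*/alice:{PLAIN}new/' "$PWFILE" > "$PWFILE.tmp"
  mv "$PWFILE.tmp" "$PWFILE"    # rename() is atomic: a reader such as
                                # Dovecot sees either the old or the new
                                # file, never a half-written one
) 9>"$PWFILE.lock"

cat "$PWFILE"
```

The lock serialises writers; readers are protected by the atomic rename alone, so a crash mid-rewrite leaves at worst a stale temp file, not a truncated password file.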

Ben



Re: [Dovecot] Changing password for users

2012-10-26 Thread /dev/rob0
On Fri, Oct 26, 2012 at 11:04:13PM +0200, Tom Hendrikx wrote:
 Using a database for managing virtual users seems overkill,
 until you run into issues like this.
 
 I have a postgres backend for 20ish users, and I can plug in 
 everything I want. Postfixadmin works great, and there are many 
 password plugins for squirrelmail/roundcube/etc that work with
 such a database.
 
 Disclaimer: I tried the file-based approach too, but kept
 building kludges for things that were a lot simpler with a
 database. In the end, I joined the dark side.

SQLite gives me the best of both worlds: file-based stability with 
SQL flexibility and easy backups. There is no Postfixadmin-type 
solution out there yet, but if you're fine with sqlite3(1) in the 
console, you won't miss it.
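A minimal sketch of the sqlite3(1) workflow rob0 describes; the schema and column names are assumptions for illustration, not what Dovecot's SQL backend requires:

```shell
# Create a throwaway user database and change one virtual user's
# password, all from the console.
DB=$(mktemp)
if command -v sqlite3 >/dev/null 2>&1; then
  sqlite3 "$DB" "
    CREATE TABLE users (userid TEXT PRIMARY KEY, password TEXT);
    INSERT INTO users VALUES ('alice@example.com', '{PLAIN}secret');
    UPDATE users SET password = '{PLAIN}changed'
      WHERE userid = 'alice@example.com';"
  result=$(sqlite3 "$DB" \
    "SELECT password FROM users WHERE userid = 'alice@example.com';")
else
  result=skipped   # sqlite3 not installed on this machine
fi
echo "$result"
```

Dovecot's sql passdb/userdb would then point at such a file via dovecot-sql.conf.ext, and a webmail password plugin can issue the same UPDATE.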
-- 
  http://rob0.nodns4.us/ -- system administration and consulting
  Offlist GMX mail is seen only if /dev/rob0 is in the Subject:


Re: [Dovecot] Locking /var/mail/user issue with postfix and dovecot

2012-10-26 Thread Stan Hoeppner
You are a well of accessible knowledge, Ben.  (How have I missed your
posts in the past?)

On 10/26/2012 3:11 AM, Ben Morrow wrote:

 Assuming you have 
 
 mailbox_command = /.../dovecot-lda -a ${RECIPIENT}

I'm set up for system users so it's simpler, but yes.

 or something equivalent in your Postfix configuration, dovecot-lda runs
 as a subprocess of local(8) under the uid of the delivered-to user.

Of course that makes sense given Postfix is doing the calling.  I would
have assumed this but my feeble use of tools wasn't showing anything.

 Filesystem locking, at least if NFS is not involved, is not that
 expensive. Successfully acquiring a flock or fcntl lock takes only a
 single syscall which doesn't have to touch the disk, and any form of IPC
 is going to need to do that. (Even something like a shared memory region
 will need a mutex for synchronisation, and acquiring the mutex has to go
 through the kernel.)

Thanks for this.  I was under the impression flock/fcntl were more
expensive than they are.  Probably because all I'd read about them was
in relation to NFS (which I don't use, but I read a lot, like many do).

 Dotlocking *is* expensive, because acquiring a dotlock is a complicated
 process requiring lots of syscalls, some of which have to write to disk;
 and any scheme involving acquiring several locks on the same file is
 going to be more so, especially if you can end up getting the first lock
 but finding you can't get the second, so then you have to undo the first
 and try again.

Yeah, I knew dotlocks were the worst due to disk writes, but didn't know
the other details.

 More importantly, the biggest problem with mbox as a mailbox format is
 that any access at all has to lock the whole mailbox. If the LDA is
 trying to deliver a new message at the same time as an IMAP user is
 fetching a completely different message, or if two instances of the LDA
 are trying to deliver at the same time, they will be competing for the
 same lock even though they don't really need to be. A file-per-message
 format like Maildir avoids this, to the point of being mostly lockless,
 but that brings its own efficiency problems; the point of dbox is to
 find the compromise between these positions that works best.
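The whole-mailbox contention Ben describes is easy to see with flock(1); a scratch file stands in for the mbox, and the "imap"/"deliver" labels are just illustrative:

```shell
MBOX=$(mktemp)            # stand-in for a user's mbox file
exec 9>"$MBOX"
flock -x 9                # "imap" takes an exclusive whole-file lock

# "deliver" now tries a non-blocking lock on the same file and fails,
# even though it would only touch a completely different message.
while_held=$(flock -n -x "$MBOX" true && echo acquired || echo blocked)

flock -u 9                # "imap" releases the lock
after_release=$(flock -n -x "$MBOX" true && echo acquired || echo blocked)

echo "while held: $while_held, after release: $after_release"
```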

mbox locking hasn't been a problem here, as I split the INBOX from the
user mailboxes containing IMAP folders (mbox files).  We make heavy use of
sieve scripts to sort on delivery, so there's not much concurrent access
to any one mbox file.

The efficiency issue is why I chose mbox over maildir.  Users here keep
a lot of (list) mail and run FTS (full-text search) often.  The load on
the spindles with maildir is simply too great and would bog down all
users.  The IOPS benefit of mbox in this scenario outweighs any locking
issues.

 I wouldn't look too hard at the details of the various ways there are of
 locking and parsing mbox files, or the ways in which they can go wrong.
 It's enough to make anyone swear off email for life :).

Heheh.

 The only reason for using mbox is for compatibility with other systems
 which use mbox, 

Not necessarily true.  See above.  I'm sure I'm not the only one using
mbox for this reason.  Dovecot is my only app hitting these mbox files.

 which means you have to do the locking the same way as
 they do (assuming you can work out what that is). If you're going to
 change the locking rules you might as well change the file format at the
 same time, both to remove the insanity and to make it actually suitable
 for use as an IMAP mailstore. That's what Timo did with dbox, so if
 you've got your systems to the point where nothing but Dovecot touches
 the mail files you should seriously consider switching.

If/when I do switch mailbox formats it'll be to mdbox so FTS doesn't
drop a big hammer on the spindles.

Thanks for the informative discussion Ben.

-- 
Stan



Re: [Dovecot] Rebuilding indexes fails on inconsistent mdbox

2012-10-26 Thread Stan Hoeppner
On 10/26/2012 1:29 PM, Milan Holzäpfel wrote:
 On Wed, 24 Oct 2012 09:01:24 -0500
 Stan Hoeppner s...@hardwarefreak.com wrote:
 
 On 10/24/2012 6:28 AM, Milan Holzäpfel wrote:

 I have a problem with an inconsistent mdbox: 
 ...
 four hours after the problem initially appeared, I did a hard reset of
 the system because it was unresponsive.
 ...
 Can anybody say something about this? May the mdbox be repaired? 

 If the box is truly unresponsive, i.e. hard locked, then the corrupted
 indexes are only a symptom of the underlying problem, which is unrelated
 to Dovecot, UNLESS, the lack of responsiveness was due to massive disk
 access, which will occur when rebuilding indexes on a 6.6GB mailbox.
 You need to know the difference so we have accurate information to
 troubleshoot with.
 
 Thanks for your suggestion. I wasn't looking for a solution for the
 unresponsiveness, but I failed to make that clear. 

It's likely all related.  If you have already, or will continue to, hard
reset the box, you will lose inflight data in the buffer cache, which
may very likely corrupt your mdbox files and/or indexes.  I'm a bit
shocked you'd hard reset a *slow* responding server.  Especially one
that appears to be unresponsive due to massive disk IO.  That's a recipe
for disaster...

 I was not patient enough to debug the unresponsiveness issue. The box
 was not hard locked, but any command took very long, if it completed
 at all. I think that it could be massive swapping, but I wouldn't
 expect Dovecot to be the cause. 

This leads me to believe your filesystem root, swap partition, and
Dovecot mailbox storage are all on the same disk, or small RAID set.  Is
this correct?

 After the reboot, Dovecot would happily re-execute the failing index
 rebuild on each new incoming message, which suggests that Dovecot
 wasn't the cause for the unresponsiveness. 

This operation is a tiny IO pattern compared to the 6.6GB re-indexing
operation you mentioned before, so you can't simply assume that Dovecot
wasn't the cause of the unresponsiveness.  In fact, Dovecot likely
instigated the problem, though it isn't the underlying cause.  I'll
take a stab at that below.

 If there's a kernel or hardware problem, you should see related
 errors in dmesg.  Please share those.
 
 The kernel had messages like
 
 INFO: task cron:2799 blocked for more than 120 seconds.

Now we're getting some meat on this plate.

 in the dmesg. But again, I didn't mean to ask for a solution to this
 problem.

"blocked for more than 120 seconds" is a kernel warning message, not an
error message.  We see this quite often on the XFS list.  Rarely, this
is related to a kernel bug.  Most often the cause of this warning is
saturated IO.  In this case it appears cron blocked for 120s because it
couldn't read /var/cron/crontabs/[user].

The most likely cause of this is that so many IO requests are piled up
in the queue that it took more than 2 minutes for the hardware (disks)
to complete them before servicing the cron process' IO requests.
Dovecot re-indexing a 6.6GB mailbox, with other IO occurring
concurrently, could easily cause this situation if you don't have
sufficient spindle IOPS.  I.e. this IO pattern will bring a single SATA
disk or mirror pair to its knees.
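A quick way to confirm that scenario, assuming sysstat's iostat is available on the box:

```shell
# %util near 100 with large await/avgqu-sz values means the disk itself
# is the bottleneck; a long request queue there is exactly what produces
# the 120-second hung-task warnings above.
if command -v iostat >/dev/null 2>&1; then
  iostat -x 1 3
  status=ok
else
  status="iostat (sysstat) not installed"
fi
echo "$status"
```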

If you currently have everything on a single SATA disk or mirror pair,
the solution for eliminating the bogging down of the system, and likely
the Dovecot issues related to it, is to simply separate your root
filesystem, swap, and Dovecot data files onto different physical
devices.  For instance, moving the root filesystem and swap to a small
SSD will prevent the OS unresponsiveness, even if Dovecot is bogged down
with IO to the SATA disk.

With spinning rust storage, separation of root filesystem, swap, and
application data to different storage IO domains is system
administration 101 kind of stuff.  If you're using SSD this isn't (as)
critical as it's pretty hard to saturate the IO limits of an SSD.

-- 
Stan



Re: [Dovecot] Rebuilding indexes fails on inconsistent mdbox

2012-10-26 Thread Stan Hoeppner
On 10/26/2012 1:30 PM, Milan Holzäpfel wrote:
 On Wed, 24 Oct 2012 13:28:11 +0200
 Milan Holzäpfel lis...@mjh.name wrote:
 
 I have a problem with an inconsistent mdbox: 
 [...]
 The problem appeared out of nowhere. [...]
 
 That's just wrong. Two minutes before the corruption occurred for
 the first time, the machine was booted after a power-off without a
 prior shutdown. I didn't notice this until now; sorry about that. 

Ahh, more critical information.  Better late than never I guess.

 The mailbox is on XFS. As far as I remember, XFS is known for leaving
 NULL bytes at the end of files after a system reset. At least, I found
 72 bytes of NULL in a plain text log file on XFS after such an event.
 Do you think this may be the source of the index corruption? 

Very possibly.

 Do you have any other suggestions for recovering the mailbox? 

Other than restoring from a backup, I do not.  Others might.  But I will
offer this suggestion:  Never run a server without a properly
functioning UPS and shutdown scripts.

The system in question isn't a laptop is it?  I'm trying to ascertain
how many server 'rules' you're breaking before making any more
assumptions or giving any more advice.

-- 
Stan