I recently upgraded from 2.2.10 to 2.2.13, and also upgraded Pigeonhole. Ever
since, I'm seeing some empty emails appear in my inbox.
Return-Path: cric...@stats2.xs4all.net
Delivered-To: cor
Received: from imapdirector1.xs4all.net ([194.109.26.173])
by userimap9.xs4all.nl (Dovecot) with
On Oct 4, 2012, at 3:55 AM, Kelsey Cummings k...@corp.sonic.net wrote:
On 10/2/2012 2:39 PM, Cor Bosman wrote:
Anyone else with NFS mailspools seeing this?
Yes, it is like 1999 all over again. I haven't had a chance to track them
down or set up a cron job to rm them all. All of the ones
Hi all, we've started receiving complaints from users who seemingly use more
quota than they actually do. We noticed that these users have (in some cases
many) .nfs files in their mailspool. Some of our admins checked their own dirs
and noticed them there as well. This could of course be
On Oct 3, 2012, at 12:35 AM, Timo Sirainen t...@iki.fi wrote:
On 3.10.2012, at 0.45, Timo Sirainen wrote:
On 3.10.2012, at 0.39, Cor Bosman wrote:
With NFS these files are created when a file gets unlinked, but another
process still has it open. It disappears as soon as the other
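A minimal sketch of how such orphaned silly-rename files could be located for the cron cleanup mentioned above. The mailspool path and the one-day age threshold are assumptions for illustration, not from the thread:

```shell
#!/bin/sh
# Sketch: list orphaned NFS silly-rename files (.nfsXXXX) under a maildir
# tree. MAILSPOOL is a placeholder path; adjust for your site.
MAILSPOOL="${MAILSPOOL:-/var/spool/mail}"

# -mtime +1 skips files modified within the last day, so we don't race a
# process that may still legitimately hold the unlinked file open.
find "$MAILSPOOL" -name '.nfs*' -type f -mtime +1 -print
```

Piping the output to `xargs rm` would implement the actual cleanup, but only on a mount where you are sure no long-lived process still holds the files open.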
I'm also considering implementing an SMTP submission server, which works
only as a proxy to the real SMTP server. The benefits of it would mainly
be:
What would be really cool is if you also kept statistics on certain metrics,
like how many emails a specific sender has sent. If this is done
What would be really cool is if you also kept statistics on certain metrics,
like how many emails a specific sender has sent. If this is done right, it
could become a centralised spam sender back-off system over multiple smtp
servers. Maybe something for the future. We now pay for a
Hey all, I'm experimenting with the Dovecot stats service, and graphing the
result. My initial results are kind of interesting. Check out this graph showing
connected sessions and users:
http://grab.by/dReu
At first I thought maybe one of our 35 imap servers was having issues sending
data, but all
On May 29, 2012, at 2:21 PM, Timo Sirainen wrote:
On 29.5.2012, at 13.23, Cor Bosman wrote:
Hey all, I'm experimenting with the Dovecot stats service, and graphing the
result. My initial results are kind of interesting. Check out this graph
showing connected sessions and users:
http
On 29.5.2012, at 21.03, Cor Bosman wrote:
Yes, I am getting a list of sessions/users every 5 minutes through cron. I'm
already using doveadm stats dump session/user connected
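A sketch of what such a cron entry could look like; the output path, user, and 5-minute interval are assumptions for illustration:

```
# /etc/cron.d/dovecot-stats (sketch): dump connected-session stats every
# 5 minutes for later graphing. Path and user are placeholders.
*/5 * * * * root doveadm stats dump session > /var/tmp/dovecot-stats.latest
```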
Actually that's not really correct behavior either, since it ignores all the
connections that happened during
Alternatively, does anyone have any experience with other redundant storage
options? I'm thinking of things like MooseFS, DRBD, etc.?
You seem to be interested in multi-site clustering/failover solutions,
not simply redundant storage. These two are clustering software
solutions but DRBD is
Mail is always a random IO workload, unless your mailbox count is 1,
whether accessing indexes or mail files. Regarding the other two
questions, you'll likely need to take your own measurements.
Wait, maybe there is a misunderstanding. I mean the IO inside one
index file, not across the
Hey all, we're in the process of checking out alternatives to our index
storage. We're currently storing indexes on a NetApp Metrocluster which works
fine, but is very expensive. We're planning a few different setups and doing
some actual performance tests on them.
Does anyone know some of
Hi Javier,
Indexes are very random, mostly read, some writes if using
dovecot-lda (e.g. dbox). The average size is rather small, maybe 5 KB in
our setup. Bandwidth is rather low, 20-30 MB/sec
Even without LDA/LMTP dovecot-imap needs to write, right? It would
need to update the index every
My guess is that I need to recompile and reinstall Dovecot Pigeonhole
(dovecot-2.1-pigeonhole-0.3.0) as well as Dovecot itself, but I cannot find any
documentation on this.
Is my guess correct?
Is there anything else that is needed to upgrade from 2.1.1 to 2.1.5?
Your guess is correct, always
Well, normally you shouldn't be over quota I guess.. :) Anyway,
http://hg.dovecot.org/dovecot-2.1/rev/ec8564741aa8
http://hg.dovecot.org/dovecot-2.1/rev/dd3798681283
This indeed fixed the problem. Thank you,
Cor
Hey all, has anyone ever tried turning on the trash plugin when the
directory is already over quota? I see some messages being deleted, but it
seems it just deletes enough to fit the new email, not enough to go below
quota.
Well, normally you shouldn't be over quota I guess.. :)
http://hg.dovecot.org/dovecot-2.1/rev/4c8f79d1f9f1 should fix it with dict
quota.
Thank you, this fixed it with dict quota.
Cor
Hey all, has anyone ever tried turning on the trash plugin when the directory
is already over quota? I see some messages being deleted, but it seems it
just deletes enough to fit the new email, not enough to go below quota. Example:
. getquotaroot Spam
* QUOTAROOT Spam User quota Spam Quota
*
On Apr 22, 2012, at 10:03 AM, Jos Chrispijn wrote:
Can someone tell me how I can upgrade from Dovecot 1.x to 2.x best?
thanks for your reply,
Jos Chrispijn
Have you read this? http://wiki2.dovecot.org/Upgrading/2.0
Cor
Everything concerning sieve should be in the home dir.
Why? It can be anywhere you want, as long as it doesn't conflict with the names
of your mailstore.
Cor
It looks like my quota isn't being calculated properly after I started
setting quota on a specific folder. The quota in that folder always
starts out at 0, and only new email is being added to the quota. If I
remove the maildirsize file and recalculate, it still starts at 0. Once
On Apr 21, 2012, at 12:32 PM, Timo Sirainen wrote:
On 21.4.2012, at 11.01, Cor Bosman wrote:
prefix = Spam/
.
quota2 = dict:Spam Quota::ns=spam:file:%h/spam-quota
.
Apr 21 10:00:11 lmtp1 dovecot: imap(cor): Error: quota: Unknown namespace:
spam
Oh. It would make more sense to have ns
It looks like my quota isn't being calculated properly after I started setting
quota on a specific folder. The quota in that folder always starts out at
0, and only new email is being added to the quota. If I remove the maildirsize
file and recalculate, it still starts at 0. Once email
Emails arrive with 2 Return-Path headers when using the LMTP director. Is this
something configurable in the director, or is this a bug?
Return-Path: xxx...@xs4all.net
Delivered-To: +Spam
Received: from lmtpdirector1.xs4all.net ([194.109.26.176])
by lmtp2.xs4all.net (Dovecot) with LMTP id
The trash plugin docs say:
Normally if a message can't be saved/copied because it would bring the user over
quota, the save/copy fails with a Quota exceeded error. The trash plugin can be
used to avoid such situations by making Dovecot automatically expunge the oldest
messages from configured mailboxes
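For reference, a minimal trash-plugin configuration could look roughly like this; the file path and the priority values are assumptions for illustration, not taken from the thread:

```
# dovecot.conf (sketch)
mail_plugins = $mail_plugins trash
plugin {
  trash = /etc/dovecot/trash.conf
}

# /etc/dovecot/trash.conf: one "<priority> <mailbox>" per line;
# lower-priority mailboxes are expunged from first.
1 Spam
2 Trash
```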
Hey all, is it possible to return the location of a namespace from the userdb
lookup? The code is a bit unclear about it. There seems to be a part of the
docs saying:
If you want to override settings inside sections, you can separate the section
name and key with '/'. For example:
namespace
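Following that doc snippet, a userdb extra field overriding a namespace's location might look something like this; the namespace name and maildir path are hypothetical:

```
# Userdb extra field (sketch): override the 'spam' namespace's location
# for this user. 'spam' and the path are placeholders.
namespace/spam/location=maildir:%h/spam-maildir
```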
Here we have approx. 200K users with 4000 concurrent connections
(90% POP3 users). All servers run in a VMware virtual environment, on Supermicro
servers with NetApp Metrocluster storage (NFS storage with a 10G
Ethernet network). POP3 sessions take between 40 and 300 milliseconds at
Hey all, I'm getting the following error:
Apr 14 14:29:44 lmtpdirector1 dovecot: auth: Error: passdb(scorpio,127.0.0.1):
Auth client doesn't have permissions to do a PASS lookup:
/var/run/dovecot/auth-userdb mode=0666, but not owned by UID 112(dovecot)
Apr 14 14:29:44 lmtpdirector1 dovecot:
Of course the moment I post I seem to have figured it out..
service auth {
unix_listener auth-userdb {
mode = 0777
}
}
Is this safe if your servers are secure?
Cor
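Rather than opening the socket to everyone with mode 0777, a tighter variant is usually possible by restricting the listener to the user the connecting service runs as. A sketch; the `dovecot` user here is an assumption based on the UID in the error above:

```
# Sketch: restrict the auth-userdb socket instead of using mode 0777.
service auth {
  unix_listener auth-userdb {
    mode = 0660
    user = dovecot   # assumed: the UID the connecting service runs as
  }
}
```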
Apr 14 14:29:44 lmtpdirector1 dovecot: auth: Error:
passdb(scorpio,127.0.0.1): Auth client doesn't have permissions to do a
PASS lookup: /var/run/dovecot/auth-userdb mode=0666, but not owned by UID
112(dovecot)
Apr 14 14:29:44 lmtpdirector1 dovecot: lmtp(18298): Error: user scorpio:
On Wed, Apr 04, 2012 at 02:02:02PM +0200, Patrick Westenberg wrote:
Timo Sirainen schrieb:
Another director. They're meant to connect to each others and do LB/HA.
But what about my MTAs? How can I tell my two Postfix servers that there
are two directors and that they should/can use the other one
This also is not the kernel list, and since updating to a kernel released in
the 21st century Cor's issue has gone away, this thread is now rather
entirely pointless on the Dovecot list. So I'll end my participation in
Actually, it hasn't. For the last few days we've been trying to pinpoint the
I installed a newer kernel on these boxes, and it's fixed. Seems to be
a problem with the stock debian squeeze kernel. Not a dovecot issue, but
others with a stable squeeze box might see similar problems so good
to have it in the archive :)
regards,
Cor
Hey all, I upgraded some servers today from Debian Lenny to Debian Squeeze, and
after the upgrade I started getting dovecot crashes. I was on 2.0.13 but got
these there as well, and hoped upgrading to 2.0.16 would help. It didn't.
Anyone have an idea?
Cor
Dec 18 23:32:21 userimap1 kernel:
Hey all, has anyone ever successfully implemented some form of OTP system with
Dovecot? I'm looking at setting up an OATH HOTP/TOTP-based OTP for our services,
but the webmail service (which uses Dovecot) is a difficult one. Any info on
implementations would be appreciated,
Regards,
Cor
We use a setup as seen on http://grab.by/agCb for about 30,000 simultaneous(!)
imap connections.
We have 2 Foundry loadbalancers. They check the health of the directors. We
have 3 directors, and each one runs Brandon's poolmon script
(https://github.com/brandond/poolmon). This script removes
Hi all, I've been playing with squat indexes. Up to about 300,000 emails in a
single mailbox this was working flawlessly. The search index file is about
500MB at that point. I've now added some more emails, and at 450,000 or so emails
I'm seeing a serious problem with squat index creation. It
On May 20, 2011, at 12:46 AM, Timo Sirainen wrote:
On 19.5.2011, at 18.40, Cor Bosman wrote:
Hey all, I'm experimenting with squat for a small project but am not having
much luck. Debugging tells me squat is being loaded, but the index.search
files are not appearing after TEXT/BODY
Hey all, I'm experimenting with squat for a small project but am not having much
luck. Debugging tells me squat is being loaded, but the index.search files are
not appearing after TEXT/BODY commands. The squat plugin was added to the config
as well.
Anyone have an idea?
Cor
Hi all, is anyone having problems with restarting the director? Every time I
bring down one of the director servers, reboot it, or just restart it for
whatever reason, I'm seeing all kinds of problems. Dovecot generally
gives me this error:
Jan 20 22:49:55 imapdirector3 dovecot: director:
This discussion has been in the context of _storing_ user email. The
assumption
is that an OP is smart/talented enough to get his spam filters/appliances
killing 99% before it reaches intermediate storage or mailboxes. Thus, in the
context of this discussion, the average size of a spam
Btw, our average mail size last we checked was 30KB. That's a pretty good average
as we're an ISP with a very wide user base. I think a 4KB average is not a
normal mail load.
Cor
On Dec 28, 2010, at 3:55 AM, Henrique Fernandes wrote:
Can I ask how you are storing your mail? Like NFS, GFS, OCFS2, etc.,
and which type, like mbox, maildir, sdbox, etc.
In my system we are not using the director; we use IPVS, but we're having lots
of IO wait problems!
We're using NFS/maildir.
When I use my iPhone to read my emails through IMAP,
I can see ALL folders that are in my email home directory,
not only those listed in the .subscriptions file...
This seems like a real security problem.
Anyone have the same problem?
This is not something dovecot can change. Apple
Hey all, just wondering who here is running the director service in a larger
environment. I just switched our whole production setup to the director and am
quite pleased with the result. We're doing a peak of about 25000 to 3
concurrent sessions on 3 servers. But I've shut 1 server down a
Oops, now I finally understand why Mail.app kept asking for my password for
each mail I sent: it helpfully decided to start signing mails with the only
client cert I had, without asking me.. Forget about those signatures in the
last two mails :)
Heh, is that the key you used to get into
It looks like Timo's patch fixes the problem. Context switches are now back to
normal, and the load graph is smooth and lower. I saw someone also posted a
patch to the LKML.
Cor
Timo, sorry to get back to this, but I'm still confused :) It seems every time I
want to add a director, or otherwise need to restart one or work on its config,
I'm having a hard time making things work again without service disruption (or
with minimal service disruption).
Let's say I have the following
For those interested, a graph showing the difference before and after the patch:
context switches: http://grab.by/7W9u
load: http://grab.by/7W9x
Cor
Correct. Only Linux is affected.
Anyway, Postfix's design has been like this forever, so no one would have
ever noticed context switch counts increasing. But changing this might make
them notice that context switches are dropping.
I'm a little surprised we haven't seen more reports on
Hi Timo, I'm getting some director errors.
Dec 13 10:11:25 imapdirector1 dovecot: director: Error: User hash 3608614717 is
being redirected to two hosts: 194.109.26.171 and 194.109.26.141
Dec 13 10:11:25 imapdirector1 dovecot: director: Error: User hash 755231546 is
being redirected to two
Would it be ok to set director_user_expire to a very high value? Say 1 day?
This would pretty much lock a user to a specific server, right? What if I set
it to 1 year? :)
What happens when the server a user is locked to disappears (I remove it from
the pool)? Then a day later, I add the
Is there a specific order in which one has to add servers to
director_mail_servers? Let's say I have 20 loadbalanced director servers (I
don't, but for argument's sake). I want to add 5 servers to
director_mail_servers. If I do this by restarting each director server one by
one with an updated
Using gcc:
gcc version 4.4.5 (Debian 4.4.5-8)
We run gcc version 4.3.2
I'm not using any configure options, except for a few that Timo had me try
during debugging.
Cor
Are the process pids also logged in the messages, so it's clear which
messages belong to which imap process? If not, add %p to mail_log_prefix.
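As a sketch, adding %p to the prefix could look something like this; the base value shown is an assumption about the default, adjust to your existing setting:

```
# Sketch: include the process PID (%p) in mail log lines so entries can
# be matched to a specific imap process.
mail_log_prefix = "%s(%u)<%p>: "
```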
Done. It wasn't logging this; now it is.
As I said previously, I'm no longer running the imap server=0 patch because
it caused these errors:
Great. If the logs aren't too huge I could look at the raw ones, or you could
try to write a script yourself to parse them. I'm basically interested in
things like:
1. How large are the first volcs entries for processes? (= Is the initial
process setup cost high?)
2. Find the last
On Dec 9, 2010, at 10:41 AM, Timo Sirainen wrote:
On 9.12.2010, at 9.13, Cor Bosman wrote:
If you want to have a quick look already, I'm mailing you the locations of 2
files, one with service_count=0 and one with service_count=1.
I see that about half the commands that do hundreds
Some preliminary findings...
Changing the kernel seems to have a positive effect on the load. I changed from
2.6.27.46 to 2.6.27.54 (sorry, I'm bound by locally available kernels due to a
kernel patch we created to fix some NFS problems in the Linux kernel. The patch
should be available in the stock
I upgraded most of our servers from 1.2.x to 2.0.8 and am noticing a really big
increase in server load. This is across the board, not on any specific server.
Check this screenshot of a load graph: http://grab.by/7N8V
Is there anything I should be looking at that could cause such a drastic load
Here's the doveconf -n output: http://wa.ter.net/download/doveconf-n.txt
Cor
It could be that you are both running a different kernel from the standard
Lenny kernel 2.6.26 (this could be a clue...).
It would be interesting to hear from people that aren't seeing a big load
increase. My initial guess was some kind of NFS problem, but since Ralf isn't
doing
On Dec 8, 2010, at 5:11 PM, David Ford wrote:
what's your task switch HZ compiled at? CONFIG_HZ_1000? you would probably
be better at 300 or 250. have you tried tickless? is your kernel compiled
for precisely your cpu type and smp/hyper options set correctly? what about
Timo and I found excessive numbers of context switches, a factor of 20-30.
But it's unclear why the IMAP process would do/cause this.
I'm seeing this as well... http://grab.by/7Nni
Cor
I see you're using userdb passwd. Do your users have unique UIDs? If they
do, maybe it has to do with that..
Yes, we have about 1 million unique UIDs in the passwd file (actually NIS). I
did upgrade 4 machines to the latest kernel, but it's hard to tell if that
changed much as our
So you have tons of imap-login processes. Did you have that in v1.2 too? Try
setting service_count=0 (and same for pop3-login)
http://wiki2.dovecot.org/LoginProcess
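For context, that setting goes in the login service blocks; a sketch in Dovecot 2.x config syntax:

```
# Sketch: keep login processes alive across connections instead of
# forking a new process per connection (the service_count=1 behavior).
service imap-login {
  service_count = 0
}
service pop3-login {
  service_count = 0
}
```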
Yes, we have tons of imap-login processes. I'll set service_count=0 for imap
(we don't do POP with Dovecot) on a few servers.
Oh, and I don't know if we did in 1.2. I think so, but I can't be positive. I
tried making the config the same. I still have the config around if you want
to see it.
Cor
login_process_per_connection = yes
So seems like we did have that set in 1.2.
Cor
We're running on bare metal, no VM involved.
Cor
Hope I'm doing what you want :) It's getting kinda confusing.
Right now I have running:
3 servers with just a new kernel, no other changes
2 servers with service_count = 0, no other changes
1 server with service_count = 0, and src/imap/main.c patch
Is that ok?
I won't be seeing much impact until
1 server with service_count = 0, and src/imap/main.c patch
By this you mean service_count=0 for both service imap-login and service imap
blocks, right?
Yes, on both imap-login and imap,
The 2 servers without the patch only have it on imap-login.
Cor
1 server with service_count = 0, and src/imap/main.c patch
Is that ok?
Looks good!
I had to revert this patch because it causes permission errors on our
filesystem. Directories are being created for user X with the euid of user Y
(which fails, so at least I didn't get corruption,
lcs values for v1.2 and v2.0. Wonder if you get different values?
If you don't mind some huge logs, you could also try the attached patch that
logs the voluntary context switch count for every executed IMAP command.
Maybe some command shows up that generates them much more than others.
Just for thoroughness I've started 2 servers with the logging patch, one with
service_count=0 and one with service_count=1.
On 2010-09-29 5:52 PM, Timo Sirainen wrote:
Anyway, I can't give you any guarantees. I only know that it's not 100%
safe, but some people with POP3-only/mostly setups have been happy
enough anyway.
Timo, can you at least clarify this - my understanding is that the
problem here is not
Hi Noel,
I do take exception to being told this issue can be fixed, but NFS users
are not worth it, which is essentially what he told us. I dare say if
I guess he told you this in private? The way I understand it, the
index-over-NFS problem cannot be fixed 100%. I know for a fact Timo
tried very
Oops, not CRC32, but a 32bit hash based on MD5 of the username. Still,
it's 32bit so with lots of users there will be a lot of collisions
(which shouldn't be a problem, except doveadm director map will list
users who haven't really logged in).
Isn't it possible to keep a copy of the username
For those who are worried about the md5 hash sending most users to one
server and thus loadbalancing badly: this doesn't seem to be the case. I just
pointed one of our test webmail environments to a director cluster
(2 director servers behind a Foundry, pointing to 2 real imap servers), and
I noticed 2 minor things with the director.
1) If I start 2 directors, but in one of them I don't add the other to
director_servers, they both start spamming the logfiles really, really fast. It
seems there is no backoff mechanism in the protocol between directors.
2)
user
Hi,
We have only used Dovecot recently, but many people I speak to have used it for
very many years. If the director was really needed, why has it only come about
now and not 5 or more years ago? Have all these mail networks run broken for so
many years? I don't think so.
It might complement some situations, but is not suitable or advantageous for
Director doesn't know what users are logged in, only CRC32 of their
username. So doveadm first gets a list of all users, adds them to hash
table, then gets the list of CRC32s of users from director and maps them
to usernames via the hash table. unknown means that the user wasn't
found from
Hi Noel, if you don't need the director, then that's great, right? Why does
anyone need to justify anything? Just don't use it, end of discussion. Those
of us who do have a need for it can use it anyway. Even without your
agreement? Is that such a big problem?
we have a sum total of 2 imap
Hi,
Postfix has this issue as well. So does qmail. So does Exim. It has nothing
to do with the software being used. It is a problem in the NFS protocol.
Just to be clear: of course these programs won't have this issue when used
with dovecot-imap, because obviously they won't be updating any
Hi all, does anyone know if it's safe to mix a 1.2 environment with 2.0 servers?
I'm planning on adding some 2.0 servers for test purposes, but now I'm wondering
if that's going to mess up index files or other files for users selecting the
test server and then switching back to our normal servers.
Hi,
If you don't mind random Dovecot errors about index corruption I guess you're
fine with how it works now. I guess your mails are delivered to maildirs by
qmail? If you ever switch to Dovecot LDA you'll probably start getting more
errors. And if you ever plan to switch to dbox format
We might be a slightly larger install than you (60k users, mail on FAS 3170
Metrocluster), but we have noticed corruption issues and the director is
definitely going to see use in our shop. We still use Sendmail+procmail for
delivery, so no issue there... but we've got hordes of IMAP users
Noel, I think you just don't quite understand the problem the director is
solving.
The issue is that NFS is not lock-safe over multiple servers. We have 35
imap servers accessing a central NFS cluster. (we have over a million
mailboxes) We offer IMAP to end user clients, and through webmail. This
Hi all, I just investigated a user complaint and found a read-only
dovecot-uidlist. Since Dovecot couldn't write it, the process failed. Users
cannot reach this file, so how this became read-only is beyond me. Must be
something in Dovecot. Maybe an older bug?
Doing a search now for more
On Jan 26, 2010, at 12:29 PM, Timo Sirainen wrote:
On Tue, 2010-01-26 at 10:24 -0400, Cor Bosman wrote:
Hi all, I just investigated a user complaint and found a read-only
dovecot-uidlist. Since Dovecot couldn't write it, the process failed. Users
cannot reach this file, so how this became
On Jan 22, 2010, at 1:19 PM, Timo Sirainen wrote:
On Fri, 2010-01-22 at 19:16 +0200, Timo Sirainen wrote:
2) Long term solution will be for Dovecot to not use NFS server for
inter-process communication, but instead connect to other Dovecot
servers directly via network.
Actually not NFS
Wow, that's almost the exact same setup we use, except we have 10 IMAP/POP
and a clustered pair of FAS920's with 10K drives which are getting replaced
in a few weeks. We also have a pair of clustered 3050's, but they're not
running dovecot (yet).
Pretty much the same as us as well. 35
You guys must serve a pretty heavy load. What's your peak connection count
across all those machines? How's the load? We recently went through a
hardware replacement cycle, and were targeting 25% utilization at peak
load so we can lose one of our sites (half of our machines are in each
We're seeing this same coredump with a few dozen customers as well. Also
NetApp/NFS.
Cor
In both 1.2.6 and 1.2.7 (probably also before that, but I don't have logs) I'm
seeing quite a few of these:
Nov 17 12:45:12 userimap10.xs4all.nl dovecot: IMAP(xxx):
/var/spool/mail/dovecot-control/c/c0/xxx/INBOX/.INBOX/dovecot-uidlist:
Duplicate file entry at line 2650:
I'm getting a panic from a specific user:
Nov 11 13:53:29 userimap24.xs4all.nl dovecot: IMAP(xx): Panic:
file maildir-uidlist.c: line 1242
(maildir_uidlist_records_drop_expunges): assertion failed: (recs[i]->uid > rec->uid)
Nov 11 13:53:29 userimap24.xs4all.nl dovecot: IMAP(xx): Raw
I'm seeing multiple users with this same panic now. Probably a few an
hour (out of tens of thousands of users, so nothing overly dramatic),
and a new panic as well:
Nov 11 14:34:59 userimap20.xs4all.nl dovecot: IMAP(yy): Panic:
file maildir-uidlist.c: line 403
Congratulations, sounds like a great place to be,
Cor
The other guy who also had a problem was using 2.6.27. If the 3rd guy
also replies that he's using 2.6.27 then that's pretty clearly the
problem. Might be worth asking about in Linux kernel mailing list.
We've had something similar happen on 2.6.24. Over time processes would
take more and more
Very cool to see Apple contribute to dovecot!
Cor
Maybe Outlook is buggy, but your mentioned Thunderbird has its own
issue:
When I want to use German names for a user, I set up his mail account
with a folder named Papierkorb as the trash folder (the German
translation). This can be used with many email clients without
problems. But a German
If there were no error messages logged with 1.1.5, there's nothing I
can think of that could explain it.
I'm not sure when this started, but I'm seeing very high CPU use in dovecot.
I recently swapped our systems to Linux from FreeBSD, and I've also moved
up a few versions (now on 1.1.6).