[Dovecot] lmtp: Error: mmap failed with file ... dovecot.index.cache: Cannot allocate memory
Hello,

I have a mailbox with an ungodly number of small messages in it. When new messages are delivered lmtp kicks up an error like the one below. I tried raising the vsz_limit for lmtp but that didn't seem to help. Any ideas (besides deleting 400k messages)? Thanks!

dovecot: lmtp(6068, u...@example.com): Error: mmap() failed with file /home/vmail/domains/example.com/user/Maildir/dovecot.index.cache: Cannot allocate memory

# ls -lah dovecot.index.cache
-rw--- 1 vmail vmail 128M Jul 29 20:16 dovecot.index.cache

# doveadm mailbox status -u u...@example.com all inbox
INBOX messages=434118 recent=59848 uidnext=434119 uidvalidity=1293568548 unseen=432625 highestmodseq=14023 vsize=1329486283 guid=f8d48232244a1a4dfe2ecb0ad7e0

# doveconf -n
...
default_vsz_limit = 256 M
service lmtp {
  inet_listener lmtp {
    port = 24
  }
  vsz_limit = 256 M
}
Re: [Dovecot] lmtp: Error: mmap failed with file ... dovecot.index.cache: Cannot allocate memory
On Fri Aug 10 10:10:21 2012, Timo Sirainen wrote:
> On 10.8.2012, at 20.05, David Jonas wrote:
>> I have a mailbox with an ungodly number of small messages in it. When new messages are delivered lmtp kicks up an error like the one below. I tried raising the vsz_limit for lmtp but that didn't seem to help.
>
> That should help.
>
>> # ls -lah dovecot.index.cache
>> -rw--- 1 vmail vmail 128M Jul 29 20:16 dovecot.index.cache
>>
>> default_vsz_limit = 256 M
>> service lmtp {
>>   inet_listener lmtp {
>>     port = 24
>>   }
>>   vsz_limit = 256 M
>> }
>
> Did you try higher than 256M? There might be some memory temporarily wasted which causes it to go over.

I haven't tried higher. I'll make the change now though. Is there a way to test it besides delivering a new message?
Re: [Dovecot] lmtp: Error: mmap failed with file ... dovecot.index.cache: Cannot allocate memory
On 8/10/12 10:10 AM, Timo Sirainen wrote:
> On 10.8.2012, at 20.05, David Jonas wrote:
>> I have a mailbox with an ungodly number of small messages in it. When new messages are delivered lmtp kicks up an error like the one below. I tried raising the vsz_limit for lmtp but that didn't seem to help. [...]
>
> That should help.
>
> Did you try higher than 256M? There might be some memory temporarily wasted which causes it to go over.

service lmtp {
  vsz_limit = 320 M
}

Yep. That seems to cover it. Delivered a message with lmtp and no error in the logs. The dovecot.index.cache timestamp was updated too. Thanks! Guess I should have tried something absurdly high before emailing, just to see if that was it.
Re: [Dovecot] Dovecot for POP3S proxying
On Wed May 2 06:41:00 2012, Gilles Albusac wrote:
> I would like to configure Dovecot for POP3S proxying all users from the Internet to the internal Exchange Mail Server.

Unless I'm missing something with your request, you don't need dovecot. Any ssl proxy can do that for you, such as stunnel (http://www.stunnel.org/). We use the hardware ssl termination on our load balancers for pop3s, imaps, and smtps.
[Dovecot] dovecot sasl with postfix: SASL LOGIN authentication failed: Connection lost to authentication server
When using dovecot (2.1.5) sasl with postfix (2.8.4) behind an nginx smtp proxy I am seeing a ton of errors of the form:

postfix/smtpd[7731]: warning: unknown[192.168.0.6]: SASL LOGIN authentication failed: Connection lost to authentication server

Nothing is printed by dovecot in the logs regarding the error. It seems that dovecot just hung up on postfix. (Side note: no, we can't use XCLIENT in nginx/postfix. But perhaps soon.)

After much digging I thought I solved it with:

login_trusted_networks = 172.20.20.0/24
mail_max_userip_connections = 0

This seems safe enough because dovecot is only providing sasl to postfix, no connections to the outside world. But the error is still happening.

# doveadm penalty
IP            penalty  last_penalty         last_update
172.20.20.61  1        2012-04-30 19:15:56  19:15:56

strace on the anvil process shows a lot of GETs and INCs:

18:54:06 read(14, "PENALTY-GET\t172.20.20.61\n", 397) = 25 0.16
18:54:06 write(14, "1 1335837245\n", 13) = 13 0.29

A two minute survey showed this penalty distribution:

0: 60%
1: 15%
2: 18%
3: 8%

Finally I just disabled penalties with the info from http://www.dovecot.org/list/dovecot/2011-December/062631.html and that seemed to do it. Is there a better way? This took me a long time to run down so I tried to make this message detailed enough that others with similar problems will stumble upon it.
Re: [Dovecot] Maildir migration and uids
On 12/29/11 5:35 AM, Timo Sirainen wrote:
> On 22.12.2011, at 3.52, David Jonas wrote:
>> I'm in the process of migrating a large number of maildirs to a 3rd party dovecot server (from a dovecot server). Tests have shown that using imap to sync the accounts doesn't preserve the uidl for pop3 access.
>>
>> My current attempt is to convert the maildir to mbox and add an X-UIDL header in the process. Run a second dovecot that serves the converted mbox. But dovecot's docs say, "None of these headers are sent to IMAP/POP3 clients when they read the mail."
>
> That's rather complex.

Thanks, Timo. Unfortunately I don't have shell access at the new dovecot servers. They have a migration tool that doesn't keep the uids intact when I sync via imap. Looks like I'm going to have to sync twice, once with POP3 (which maintains uids) and once with imap skipping the inbox. Ugh.

>> Is there any way to sync these maildirs to the new server and maintain the uids?
>
> What Dovecot versions? dsync could do this easily. You could simply install the dsync binary even if you're using Dovecot v1.x.

Good idea with dsync though, I had forgotten about that. Perhaps they'll do something custom for me.

> You could also log in with POP3 and get the UIDL list and write a script to add them to dovecot-uidlist.
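Timo's last suggestion (pull the UIDL list over POP3 and feed it into dovecot-uidlist) could be sketched roughly like this. This is only an illustrative sketch, not a tested migration tool: the host and credentials are placeholders, using a "P" field to carry a POP3 UIDL override in a version-3 dovecot-uidlist is my assumption, and matching each UID to its maildir filename is left out entirely:

```python
# Sketch: fetch the POP3 UIDL listing and format dovecot-uidlist v3
# entries from it. Hypothetical illustration only -- host/credentials
# are placeholders and the UID -> maildir-filename mapping must be
# supplied by the caller.
import poplib

def fetch_uidls(host, user, password):
    """Return [(msg_number, uidl), ...] from a POP3 server."""
    conn = poplib.POP3(host)
    conn.user(user)
    conn.pass_(password)
    # conn.uidl() returns (response, [b"1 uidl1", b"2 uidl2", ...], octets)
    entries = [line.decode().split(" ", 1) for line in conn.uidl()[1]]
    conn.quit()
    return [(int(num), uidl) for num, uidl in entries]

def uidlist_lines(uidls, uidvalidity, filenames):
    """Format (uid, uidl) pairs as dovecot-uidlist-style entries.

    The 'P' field is assumed to carry the POP3 UIDL override;
    'filenames' maps each UID to its maildir base filename.
    """
    next_uid = max(u for u, _ in uidls) + 1 if uidls else 1
    lines = ["3 V%d N%d" % (uidvalidity, next_uid)]
    for uid, uidl in uidls:
        lines.append("%d P%s :%s" % (uid, uidl, filenames[uid]))
    return lines
```

The formatting half can be exercised offline; only `fetch_uidls` needs a live server.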
[Dovecot] Dovecot imap proxy to nginx, incompatible
It appears that using dovecot to proxy to the nginx imap proxy doesn't work. From tcpdump and browsing the source it appears dovecot sends

C CAPABILITY\r\nL LOGIN user pass\r\n

and nginx only responds to the CAPABILITY command. Is this a problem with dovecot sending the two commands without waiting for the first to complete, or is it nginx failing to handle the pipelined commands correctly? A quick test with a perl script confirms:

#!/usr/bin/perl -w
$|++;
use IO::Socket;
use strict;

my ($host, $user, $pass) = @ARGV;
my $s = new IO::Socket::INET(Proto => 'tcp', PeerAddr => $host, PeerPort => 143);
die "Could not create socket $!\n" unless $s;

while (<$s>) { print $_; last if /OK/; }
print $s "C CAPABILITY\r\nL LOGIN $user $pass\r\n";
while (<$s>) { print $_; last if /OK/; }
print $s "Q logout\r\n";
while (<$s>) { print $_; last if /OK/; }
close($s);

## Output:
* CAPABILITY IMAP4rev1 SASL-IR SORT THREAD=REFERENCES MULTIAPPEND UNSELECT LITERAL+ IDLE CHILDREN NAMESPACE LOGIN-REFERRALS UIDPLUS LIST-EXTENDED I18NLEVEL=1 QUOTA AUTH=PLAIN
C OK completed
* BYE
Q OK completed
[Dovecot] Maildir migration and uids
I'm in the process of migrating a large number of maildirs to a 3rd party dovecot server (from a dovecot server). Tests have shown that using imap to sync the accounts doesn't preserve the uidl for pop3 access.

My current attempt is to convert the maildir to mbox and add an X-UIDL header in the process, then run a second dovecot that serves the converted mbox. But dovecot's docs say, "None of these headers are sent to IMAP/POP3 clients when they read the mail."

Is there any way to sync these maildirs to the new server and maintain the uids? The real goal is to keep customers' email clients happy when they are pointed at the new server. Am I just wishing?
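For what it's worth, the maildir-to-mbox conversion step described above could be sketched with Python's mailbox module. This is a minimal sketch under assumptions: the paths are placeholders, and using the maildir base filename as the X-UIDL value is just one possible choice, not something the post specifies:

```python
# Sketch: copy a Maildir into an mbox, stamping each message with an
# X-UIDL header. Paths and the UIDL choice (the maildir base filename)
# are illustrative assumptions. Single-writer access is assumed; no
# mbox locking is done here.
import mailbox

def convert(maildir_path, mbox_path):
    src = mailbox.Maildir(maildir_path, factory=None)
    dst = mailbox.mbox(mbox_path)
    for key in src.keys():
        msg = src[key]
        del msg["X-UIDL"]   # drop any pre-existing header (no-op if absent)
        # The maildir base filename (the key, without the ':2,' flags)
        # is a convenient stable identifier to reuse as the UIDL.
        msg["X-UIDL"] = key
        dst.add(msg)
    dst.flush()
    dst.close()
```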
[Dovecot] Make entire mailbox read-only (ACLs?)
Hello,

Is there a way of locking a mailbox, effectively making it read-only to IMAP clients? I've read through http://wiki2.dovecot.org/ACL. I created a dovecot-acl file with the content "owner lr" in .INBOX, which seems to keep me from copying messages into the folder, but not out. I have the plugin configured correctly, it seems, since the MYRIGHTS command shows the correct value:

30 MYRIGHTS INBOX
* MYRIGHTS INBOX lr

Am I barking up the wrong tree? Is there an easier way to lock a mailbox? Post-login scripting perhaps? Thanks!
Re: [Dovecot] dsync: Invalid mailbox first_recent_uid
On 6/3/11 7:25 AM, Timo Sirainen wrote:
> On Thu, 2011-05-26 at 23:25 -0700, David Jonas wrote:
>> dsync-local(djo...@vitalwerks.com): Error: Invalid mailbox input from worker server: Invalid mailbox first_recent_uid
>>
>> The local uid is 8989 and the remote uid is 89. I added first_valid_uid = 89 to the local conf but to no avail. Local version is 2.0.12, remote is 2.0.1.
>
> That's actually the problem. They talk slightly different protocols.. I guess I should have added a version number to the protocol. Although even then you would have only gotten a protocol version mismatch error.

I had no idea that would be an issue. A protocol version mismatch error actually would have helped me figure it out without bugging you on the list. I upgraded the remote server to 2.0.12 and had no problems with the sync.

> You could simply copy v2.0.12's dsync to the remote server and it should work fine, as long as you're not using any plugins.

Thanks!
[Dovecot] dsync: Invalid mailbox first_recent_uid
For the life of me I can't get dsync to work. Please help! The remote server runs dovecot out of /usr/local/dovecot2. Everything makes sense until this line:

dsync-local(djo...@vitalwerks.com): Error: Invalid mailbox input from worker server: Invalid mailbox first_recent_uid

The local uid is 8989 and the remote uid is 89. I added first_valid_uid = 89 to the local conf but to no avail. Local version is 2.0.12, remote is 2.0.1.

# dsync -Dv -u djo...@vitalwerks.com mirror \
    ssh vmail@192.168.15.54 \
    /usr/local/dovecot2/bin/dsync -Dv -u djo...@vitalwerks.com
dsync(vmail): Debug: Effective uid=8989, gid=8989, home=/home/vmail/domains/vitalwerks.com/djonas
dsync(vmail): Debug: Quota root: name=user backend=dict args=vitalwerks.com-djonas:proxy::quota
dsync(vmail): Debug: dict quota: user=vitalwerks.com-djonas, uri=proxy::quota, noenforcing=0
dsync(vmail): Debug: Namespace : type=private, prefix=, sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:~/Maildir:INBOX=~/Maildir
dsync(vmail): Debug: maildir++: root=/home/vmail/domains/vitalwerks.com/djonas/Maildir, index=, control=, inbox=/home/vmail/domains/vitalwerks.com/djonas/Maildir
dsync-local(djo...@vitalwerks.com): Debug: Namespace : Using permissions from /home/vmail/domains/vitalwerks.com/djonas/Maildir: mode=0770 gid=-1
dsync-local(djo...@vitalwerks.com): Error: Invalid mailbox input from worker server: Invalid mailbox first_recent_uid
dsync-remote(djo...@vitalwerks.com): Error: read() from proxy client failed: EOF
Re: [Dovecot] SSD drives are really fast running Dovecot
On 1/12/11 11:46 PM, Stan Hoeppner wrote:
> David Jonas put forth on 1/12/2011 6:37 PM:
>> I've been considering getting a pair of SSDs in raid1 for just the dovecot indexes. The hope would be to minimize the impact of pop3 users hammering the server. Proposed design is something like 2 drives (ssd or platter) for OS and logs, 2 ssds for indexes (soft raid1), 12 sata or sas drives in RAID5 or 6 (hw raid, probably 3ware) for maildirs. The indexes and mailboxes would be mirrored with drbd. Seems like the best of both worlds -- fast and lots of storage.
>
> Let me get this straight. You're moving indexes to locally attached SSD for greater performance, and yet you're going to mirror the indexes and store data between two such cluster hosts over a low bandwidth, high latency GigE network connection? If this is a relatively low volume environment this might work. But if the volume is high enough that you're considering SSD for performance, I'd say using DRBD here might not be a great idea.

First, thanks for taking the time to respond! I appreciate the good information.

Currently running DRBD for high availability over directly attached bonded GigE with jumbo frames. Works quite well, though indexes and maildirs are on the same partition. The reason for mirroring the indexes is just for HA failover. I can only imagine the hit of rebuilding indexes for every connection after failover.

>> Anyone have any improvements on the design? Suggestions?
>
> Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN storage unit that supports mixed SSD and spinning storage, such as the Nexsan SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php
>
> Get the single FC controller model, two Qlogic 4Gbit FC PCIe HBAs, one for each cluster server. Attach the two servers to the two FC ports on the SATABoy controller. Unmask each LUN to both servers; this enables the cluster filesystem. Depending on the space requirements of your indexes, put 2 or 4 SSDs in a RAID0 stripe.
>
> RAID1 simply DECREASES the overall life of SSDs. SSDs don't have the failure modes of mechanical drives, thus RAID'ing them is not necessary. You don't duplex your internal PCIe RAID cards, do you? Same failure modes as SSDs.

Interesting. I hadn't thought about it that way. We haven't had an SSD fail yet so I have no experience there. And I've been curious to try GFS2.

> Occupy the remaining 10 or 12 disk bays with 500GB SATA drives. Configure them as RAID10. RAID5/6 aren't suitable to substantial random write workloads such as mail and database. Additionally, rebuild times for parity RAID schemes (5/6) are up in the many hours, or even days, category, and degraded performance of 5/6 is horrible. RAID10 rebuild times are a couple of hours, and RAID10 suffers zero performance loss when a drive is down. Additionally, RAID10 can lose HALF the drives in the array as long as no two are both drives in a mirror pair. Thus, with a RAID10 of 10 disks, you could potentially lose 5 drives with no loss in performance. The probability of this is rare, but it demonstrates the point.
>
> With a 10 disk RAID10 of 7.2k SATA drives, you'll have ~800 random read/write IOPS performance. That may seem low, but that's an actual filesystem figure. The physical IOPS figure is double that, 1600. Since you'll have your indexes on 4 SSDs, and the indexes are where the bulk of IMAP IOPS take place (flags), you'll have over 50,000 random read/write IOPS.

RAID10 is our normal go-to, but giving up half the storage in this case seemed unnecessary. I was looking at SAS drives and it was getting pricey. I'll work SATA into my considerations.

> Having both SSD and spinning drives in the same SAN controller eliminates the high latency low bandwidth link you were going to use with drbd. It also eliminates buying twice as many SSDs, PCIe RAID cards, and disks, one set for each cluster server.
>
> Total cost may end up being similar between the drbd and SAN based solutions, but you have significant advantages with the SAN solution beyond those already mentioned, such as using an inexpensive FC switch and attaching a D2D or tape backup host, installing the cluster filesystem software on it, and directly backing up the IMAP store while the cluster is online and running, or snapshotting it after doing a freeze at the VFS layer.

As long as the SATABoy is reliable I can see it. Probably would be easier to sell to the higher-ups too. They won't feel like they're buying everything twice.
Re: [Dovecot] SSD drives are really fast running Dovecot
On 1/12/11 9:53 AM, Marc Perkel wrote:
> I just replaced my drives for Dovecot using Maildir format with a pair of Solid State Drives (SSD) in a raid 0 configuration. It's really really fast. Kind of expensive but it's like getting 20x the speed for 20x the price. I think the big gain is in the 0 seek time.

I've been considering getting a pair of SSDs in raid1 for just the dovecot indexes. The hope would be to minimize the impact of pop3 users hammering the server. Proposed design is something like 2 drives (ssd or platter) for OS and logs, 2 ssds for indexes (soft raid1), 12 sata or sas drives in RAID5 or 6 (hw raid, probably 3ware) for maildirs. The indexes and mailboxes would be mirrored with drbd. Seems like the best of both worlds -- fast and lots of storage.

Does anyone run a configuration like this? How does it work for you? Anyone have any improvements on the design? Suggestions?
[Dovecot] MD5 to CRAM-MD5 password conversion?
We have a plethora of accounts for which we would like to enable CRAM-MD5 but their passwords are stored as MD5 hashes. Is there anything we can do? Can we take a linux MD5 hashed password (e.g. $1$fac330ee$wd6Tll...) and convert it to dovecot's CRAM-MD5 format (e.g. {CRAM-MD5}b3f297...)? Thanks!
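For background on why this conversion is the sticking point: a CRAM-MD5 client response is an HMAC-MD5 keyed by the plaintext password (RFC 2195), so whatever credential the server stores must be derivable from the plaintext; a crypt()-style $1$ MD5 hash is a one-way digest of the password and can't be turned into it. A minimal sketch of the response calculation, just to illustrate what the server must be able to recompute:

```python
# Sketch of the CRAM-MD5 response calculation (RFC 2195): the shared
# secret -- the plaintext password -- keys an HMAC-MD5 over the server's
# challenge. The server's stored credential must let it redo this HMAC,
# which a crypt()-style $1$ MD5 hash does not.
import hashlib
import hmac

def cram_md5_response(user, password, challenge):
    digest = hmac.new(password.encode(), challenge.encode(), hashlib.md5)
    return "%s %s" % (user, digest.hexdigest())
```

Running this against the worked example in RFC 2195 reproduces the digest given there.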
Re: [Dovecot] Dovecot 2.0.1 Quota dict timeout
On 9/8/10 8:05 AM, Timo Sirainen wrote:
> On Tue, 2010-09-07 at 18:07 -0700, David Jonas wrote:
>>> Well, see if this helps: http://hg.dovecot.org/dovecot-2.0/rev/902f008f17cf
>>
>> The patch didn't seem to make a difference. I'm still seeing the error. If you have any ideas on debugging I'm open to trying them. Dovecot 2.0.1 running on CentOS release 4.8 (Final) i386. MySQL AB's devel/client/shared rpms, 4.1.22. I can insert some logging probes (or whatever you like) if you give me pointers on how and where.
>
> Try the attached patch, what does it log with it?

Hm, nothing extra gets logged. I'm not sure this will help, but I inserted a few more probes wherever sql_not_connected_result comes up. One fired, src/lib-sql/driver-mysql.c around line 294:

Sep 9 12:17:23 dovecot: dict: Warning: In driver_mysql_query_s, driver_mysql_do_query returned 0, query=BEGIN
Sep 9 12:17:23 dovecot: dict: Error: sql dict: commit failed: Not connected to database
Sep 9 12:17:23 dovecot: lda(u...@example.com): Error: dict quota: Quota update failed, it's now desynced
Re: [Dovecot] Dovecot 2.0.1 Quota dict timeout
On 9/9/10 12:23 PM, David Jonas wrote:
> On 9/8/10 8:05 AM, Timo Sirainen wrote:
>> Try the attached patch, what does it log with it?
>
> Hm, nothing extra gets logged. I'm not sure this will help, but I inserted a few more probes wherever sql_not_connected_result comes up. One fired, src/lib-sql/driver-mysql.c around line 294:
>
> Sep 9 12:17:23 dovecot: dict: Warning: In driver_mysql_query_s, driver_mysql_do_query returned 0, query=BEGIN
> Sep 9 12:17:23 dovecot: dict: Error: sql dict: commit failed: Not connected to database
> Sep 9 12:17:23 dovecot: lda(u...@example.com): Error: dict quota: Quota update failed, it's now desynced

A little more info:

Sep 9 19:31:30 dovecot: dict: Warning: mysql_query failed, error=Lost connection to MySQL server during query, query=BEGIN

Generated from this probe in src/lib-sql/driver-mysql.c near line 219, in driver_mysql_do_query():

if (mysql_query(db->mysql, query) == 0)
        return 1; /* success */
i_warning("mysql_query failed, error=%s, query=%s",
          mysql_error(db->mysql), query);
Re: [Dovecot] Dovecot 2.0.1 Quota dict timeout
On 9/7/10 11:56 AM, Timo Sirainen wrote:
> On Sat, 2010-09-04 at 12:17 -0700, David Jonas wrote:
>> Sep 4 11:27:47 dovecot: dict: Error: sql dict: commit failed: Not connected to database
>
> Hmm. dict process thinks that all of its SQL connections are in use. Although why that happens is slightly strange, because unless you changed the defaults one process can handle only a single client connection at a time, and normally one client wouldn't be sending multiple requests simultaneously.

I didn't change any defaults regarding dict.

service dict {
  unix_listener dict {
    group = vmail
    mode = 0600
    user = vmail
  }
}

imap-login and pop3-login have service_count = 0, but the protocol directives only have mail_plugins set, e.g.:

protocol imap {
  mail_plugins = $mail_plugins imap_quota
}

> There is anyway a potential problem with an asynchronous SQL query not being finished when a synchronous SQL query is started. Although that's a problem only with PostgreSQL, not MySQL. Anyway, should be fixed some day..
>
> Well, see if this helps: http://hg.dovecot.org/dovecot-2.0/rev/902f008f17cf

The patch didn't seem to make a difference. I'm still seeing the error. If you have any ideas on debugging I'm open to trying them. Dovecot 2.0.1 running on CentOS release 4.8 (Final) i386. MySQL AB's devel/client/shared rpms, 4.1.22. I can insert some logging probes (or whatever you like) if you give me pointers on how and where.
[Dovecot] Dovecot 2.0.1 Quota dict timeout
Since an upgrade to 2.0.1 from 1.2.x we're seeing this on (mostly) large mailboxes (Maildir):

Sep 4 11:27:25 dovecot: dict: mysql: Connected to 192.168.dd.dd (accounts)
...
Sep 4 11:27:46 kelly-a dovecot: lda(u...@example.com): msgid=1...@xxx: saved mail to INBOX
Sep 4 11:27:47 dovecot: dict: Error: sql dict: commit failed: Not connected to database
Sep 4 11:27:47 dovecot: imap(u...@example.com): Error: dict quota: Quota update failed, it's now desynced

The dict is MySQL on another server, same switch. It does appear that the quota gets out of sync. From the entries I checked, an email is often being delivered shortly before the error (as above). But I also see it happen with the lda:

Sep 4 08:27:11 dovecot: pop3(u...@example.com): Disconnected: Logged out...
Sep 4 08:42:48 dovecot: lda(u...@example.com): msgid=1...@aaa: saved mail to INBOX
Sep 4 08:43:26 dovecot: lda(u...@example.com): msgid=1...@bbb: saved mail to INBOX
Sep 4 08:43:26 dovecot: dict: Error: sql dict: commit failed: Not connected to database
Sep 4 08:43:26 dovecot: lda(u...@example.com): Error: dict quota: Quota update failed, it's now desynced
Sep 4 08:46:14 dovecot: pop3-login: Login: user=u...@example.com,...

This is happening on two different dovecot servers. The only thing they share is the MySQL server, but it has a load of 0.5 at peak. Is there something that can be tuned to keep this from happening?

Possibly relevant entries from `doveconf -n` (any tuning regarding dict can be assumed to be at defaults):

maildir_very_dirty_syncs = yes
mail_plugins = quota
plugin {
  quota = dict:user:%LTd-%LTn:proxy::quota
}

Thanks!
David
[Dovecot] auth socket goes away on reload
We're using the SQLite backend for authentication of Postfix SASL. When the db is replaced we HUP dovecot to close and reopen its connection. During this time it appears the socket file is removed and Postfix rejects the authentication attempt. From the logs:

Jun 3 00:23:02 xxx dovecot: dovecot: SIGHUP received - reloading configuration
Jun 3 00:23:02 xxx postfix/smtpd[14746]: warning: SASL: Connect to private/auth failed: Connection refused
Jun 3 00:23:02 xxx postfix/smtpd[14746]: warning: unknown[dd.dd.dd.dd]: SASL LOGIN authentication failed:
Jun 3 00:23:02 xxx postfix/smtpd[14746]: NOQUEUE: reject: RCPT from unknown[dd.dd.dd.dd]: 554 5.7.1 u...@example.com: Relay access denied; from=us...@example.com to=u...@example.com proto=ESMTP helo=localhost.localdomain
Jun 3 00:23:02 xxx postfix/smtpd[14930]: warning: unknown[dd.dd.dd.dd]: SASL LOGIN authentication failed: Connection lost to authentication server
Jun 3 00:23:02 xxx postfix/smtpd[14930]: lost connection after AUTH from unknown[dd.dd.dd.dd]

Is there an obvious way around this? I know I could somehow merge the changes into the running sqlite db, but that undermines the simplicity of the design I have. Maybe a patch to reopen the db if it's replaced? Or perhaps I should just switch to a different db format -- that's probably the quickest/easiest solution. Any other ideas? There are about 20k entries to deal with.

Thanks,
David
Re: [Dovecot] auth socket goes away on reload
On 6/3/10 9:45 AM, Jerrale Gayle wrote:
> On 6/3/2010 12:35 PM, David Jonas wrote:
>> We're using the SQLite backend for authentication of Postfix SASL. When the db is replaced we HUP dovecot to close and reopen its connection. During this time it appears the socket file is removed and Postfix rejects the authentication attempt. [...]
>
> the HUP reload method is only available on dovecot 2.0. Make sure you're using that version to do that; otherwise, killall dovecot and then dovecot.

I should have specified that this is dovecot 1.2.7. The sighup seems to work just fine except for the effect I've outlined.
Re: [Dovecot] auth socket goes away on reload
On 6/3/10 10:16 AM, William Blunn wrote:
> On 03/06/2010 17:35, David Jonas wrote:
>> We're using the SQLite backend for authentication of Postfix SASL. When the db is replaced we HUP dovecot to close and reopen its connection. During this time it appears the socket file is removed and Postfix rejects the authentication attempt. [...] Is there an obvious way around this? I know I could somehow merge the changes into the running sqlite db but that undermines the simplicity of the design I have. Maybe a patch to reopen the db if it's replaced? Or perhaps I should just switch to a different db format -- that's probably the quickest/easiest solution. Any other ideas? There are about 20k entries to deal with.
>
> It sounds like your updates arrive in the shape of entire-table updates. That is no problem. You can easily apply entire-table updates to the database without having to re-create the SQLite database file, and without having to tell Dovecot. Just create a new table (with a different name) inside the SQLite database file, with the new content, then snap it into place using a pair of table renames inside a transaction; then delete the old table.
>
> That way you don't need to re-create the database file or HUP Dovecot, and Dovecot will only ever see the old data or the new data.

That sounds reasonable, not sure why I didn't think of it! Thanks.
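The rename-pair idea above can be illustrated with a small sqlite3 sketch (the table and column names here are made up for the example, not taken from the actual database). Readers see either the old table or the new one, never a half-loaded state:

```python
# Sketch of the swap-by-rename approach: load fresh rows into a new
# table, then swap it for the live one inside a single transaction.
# Table/column names are illustrative.
import sqlite3

def swap_in_new_accounts(conn, rows):
    conn.isolation_level = None  # autocommit; manage the transaction by hand
    cur = conn.cursor()
    # Build the replacement table outside the critical section.
    cur.execute("CREATE TABLE accounts_new (user TEXT, password TEXT)")
    cur.executemany("INSERT INTO accounts_new VALUES (?, ?)", rows)
    # The pair of renames is atomic with respect to other readers.
    cur.execute("BEGIN")
    cur.execute("ALTER TABLE accounts RENAME TO accounts_old")
    cur.execute("ALTER TABLE accounts_new RENAME TO accounts")
    cur.execute("COMMIT")
    cur.execute("DROP TABLE accounts_old")
```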
Re: [Dovecot] auth socket goes away on reload
On 6/3/10 11:17 AM, William Blunn wrote:
> On 03/06/2010 19:08, David Jonas wrote:
>> On 6/3/10 10:16 AM, William Blunn wrote:
>>> It sounds like your updates arrive in the shape of entire-table updates. [...] Just create a new table (with a different name) inside the SQLite database file, with the new content, then snap it into place using a pair of table renames inside a transaction; then delete the old table.
>>
>> That sounds reasonable, not sure why I didn't think of it! Thanks.
>
> If you are going to keep the SQLite database around, you might want to look at vacuuming it periodically, using either VACUUM or the auto_vacuum PRAGMA, depending on what fits your context best.
>
> http://sqlite.org/lang_vacuum.html
> http://sqlite.org/pragma.html#pragma_auto_vacuum
>
> Bill

Here is what I came up with. Seems to work pretty well:

#!/bin/sh
TMP=/home/config/tmp
DATA=/home/config/data

rsync -t -e ssh x...@xxx:mail/accounts.sqlite $TMP

echo "ATTACH '$TMP/accounts.sqlite' AS db2;
BEGIN TRANSACTION;
DROP TABLE IF EXISTS mytemp;
CREATE TABLE mytemp (
  'user' varchar(128) COLLATE NOCASE,
  'domain' varchar(128) COLLATE NOCASE,
  'password' varchar(64) default NULL
);
INSERT INTO mytemp (user,domain,password)
  SELECT user,domain,password FROM db2.accounts;
DROP TABLE accounts;
ALTER TABLE mytemp RENAME TO accounts;
COMMIT TRANSACTION;" | sqlite3 $DATA/accounts.sqlite

No vacuuming needed really. It doubles the size of the db but only once. I didn't see a speed improvement on the consecutive runs but I also didn't see any additional latency. Looks good so far. Thanks for your help!
Re: [Dovecot] quick question
On 01/22/2010 10:15 AM, Brandon Davidson wrote:
> We've thought about enabling IP-based session affinity on the load balancer, but this would concentrate the load of our webmail clients, as well as not really solving the problem for users that leave clients open on multiple systems.

Webmail and IMAP servers are on the same network for us so we don't have to go through the BigIP for this; we just use local round-robin DNS to avoid any sort of clumping. Imapproxy or dovecot proxy local to the webmail server would get around that too.

> I've done a small bit of looking at nginx's imap proxy support, but it's not really set up to do what we want, and would require moving the IMAP virtual server off our load balancers and on to something significantly less supportable. Having the dovecot processes 'talk amongst themselves' to synchronize things, or go into proxy mode automatically, would be fantastic.

Though we aren't using NFS, we do have a BigIP directing IMAP and POP3 traffic to multiple dovecot stores. We use mysql authentication and the proxy_maybe option to keep users on the correct box. My tests using an external proxy box didn't significantly reduce the load on the stores compared to proxy_maybe. And you don't have to manage another box/config.

Since you only need to keep users on the _same_ box and not the _correct_ box, if you're using mysql authentication you could hash the username or domain to a particular IP address:

SELECT CONCAT('192.168.1.', ORD(UPPER(SUBSTRING('%d', 1, 1)))) AS host, 'Y' AS proxy_maybe, ...

Just assign IP addresses 192.168.1.48-90 to your dovecot servers. Shift the range by adding or subtracting to the ORD. A mysql function would likely work just as well. If a server goes down, move its IP. You could probably make pairs with heartbeat or some monitoring software to do it automatically.

-David
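The affinity trick above can be sketched outside SQL as well; this mirrors the CONCAT/ORD expression exactly (the 192.168.1.x addressing is just the example's convention, not a recommendation):

```python
# Sketch of the same affinity idea: map the first character of the
# login domain to a backend IP, exactly as the SQL CONCAT/ORD example.
def backend_host(domain):
    """Return the proxy_maybe host for a login domain.

    Uppercasing first means 'A'-'Z' map to 65-90 and '0'-'9' to 48-57,
    so every plausible first character lands in 192.168.1.48-90.
    """
    return "192.168.1.%d" % ord(domain[0].upper())
```

The same domain always hashes to the same backend, which is all proxy_maybe needs.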
Re: [Dovecot] [OT] preferred clients
On 11/21/2009 06:15 PM, Thomas wrote: Close TB. Delete your .msf to recreate indexes. Start TB again and let it re-index (it will take a while). Then everything should be fine. If not do a bug report. I submitted a bug report before I saw your post (Tested in TB3.0rc1, BuildID=20091121181041): https://bugzilla.mozilla.org/show_bug.cgi?id=530551 This seems like a bug to me as multiple people are experiencing it and it doesn't go away unless you convince TB to reindex the folder.
Re: [Dovecot] [OT] preferred clients
On 11/20/09, Nov 20, 11:13 AM, Charles Marcus wrote: On 11/20/2009, Charles Sprickman (sp...@bway.net) wrote: It is very good compared to TB2. However, the exact problem the OP described still exists in 3.0b4. I've had the folks I set up with TB3 bugging the hell out of me about the "all of a sudden a bunch of messages are marked new" issue. This has to be some weird dovecot config problem... we just haven't ever seen anything even close to that here. Do you use mbox or maildir (we've always used maildir)? I have this problem quite frequently. It only seems to happen on folders that have a ton of messages (i.e. the dovecot mailing list). I've checked in a webmail IMAP client and they don't appear as unseen, so I'm guessing it is a TB problem. Not very thorough, I know, but it certainly points to TB. Using maildir DAS, dovecot 1.2.5, TB 3b4 And I thought it was just me :)
Re: [Dovecot] Aborted: Worker is buggy
On 8/24/09 , Aug 24, 11:10 AM, Timo Sirainen wrote: On Thu, 2009-08-20 at 08:03 -0700, David Jonas wrote: Aug 20 05:34:38 kelly dovecot: auth(default): BUG: Worker sent reply with id 53, expected 54 http://hg.dovecot.org/dovecot-1.2/rev/0827941c0e7c fixes it, I think. Applied to 1.2.4. It may be a long time until it happens again since I've moved the slow databases, but I'll try to break the old version and the patched version and post the results. Thanks, David
[Dovecot] Aborted: Worker is buggy
I upgraded from 1.1 to 1.2 a couple months ago and now we are seeing this: Aug 20 05:34:38 kelly dovecot: auth(default): auth workers: Auth request was queued for 7 seconds, 10 left in queue ... Aug 20 05:34:38 kelly dovecot: auth(default): BUG: Worker sent reply with id 53, expected 54 Aug 20 05:34:38 kelly dovecot: auth(default): worker-server(x...@example.com,dd.dd.dd.dd): Aborted: Worker is buggy Aug 20 05:34:38 kelly dovecot: auth(default): passdb(x...@example.com,dd.dd.dd.dd): Fallbacking to expired data from cache When I attempt to restart dovecot there is one dovecot-auth process that won't die and keeps the server from starting. A simple 'killall dovecot-auth' takes care of it and the server is able to start. I'm not using anything exotic, dovecot config below. The most curious fact is that we have identical servers running the same version of dovecot (1.2.3, happened under 1.2.1 as well) talking to the same MySQL database and both develop the problem at the same time. The database is under considerable stress at the time this occurs as the policyd tables are being cleaned. But this happens twice per hour and only sometimes does dovecot have trouble. I'm working to move the policyd database elsewhere, but of course this shouldn't happen in any case. Any help would be much appreciated. Thanks! 
David # 1.2.3: /etc/dovecot/dovecot.conf # OS: Linux 2.6.9-42.0.10.ELsmp i686 Red Hat Enterprise Linux ES release 4 (Nahant Update 7) protocols: imap imaps pop3 pop3s ssl_cert_file: /etc/dovecot/dovecot.pem ssl_key_file: /etc/dovecot/dovecot.pem ssl_cipher_list: ALL:!LOW:!SSLv2 disable_plaintext_auth: no login_dir: /usr/local/dovecot-1.2/var/run/dovecot/login login_executable(default): /usr/local/dovecot-1.2/libexec/dovecot/imap-login login_executable(imap): /usr/local/dovecot-1.2/libexec/dovecot/imap-login login_executable(pop3): /usr/local/dovecot-1.2/libexec/dovecot/pop3-login login_process_per_connection: no verbose_proctitle: yes first_valid_uid: 89 maildir_stat_dirs: yes mail_executable(default): /usr/local/dovecot-1.2/libexec/dovecot/imap mail_executable(imap): /usr/local/dovecot-1.2/libexec/dovecot/imap mail_executable(pop3): /usr/local/dovecot-1.2/libexec/dovecot/pop3 mail_plugins(default): quota imap_quota mail_plugins(imap): quota imap_quota mail_plugins(pop3): quota mail_plugin_dir(default): /usr/local/dovecot-1.2/lib/dovecot/imap mail_plugin_dir(imap): /usr/local/dovecot-1.2/lib/dovecot/imap mail_plugin_dir(pop3): /usr/local/dovecot-1.2/lib/dovecot/pop3 pop3_uidl_format(default): %08Xu%08Xv pop3_uidl_format(imap): %08Xu%08Xv pop3_uidl_format(pop3): %f pop3_client_workarounds(default): pop3_client_workarounds(imap): pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh lda: postmaster_address: postmas...@no-ip.com hostname: no-ip.com mail_plugins: quota auth_socket_path: /var/run/dovecot/auth-master auth default: mechanisms: login plain cram-md5 cache_size: 5000 cache_ttl: 1800 cache_negative_ttl: 60 user: nobody username_chars: abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@ username_translation: %@ username_format: %TLu verbose: yes passdb: driver: sql args: /etc/dovecot/dovecot-sql.conf userdb: driver: prefetch userdb: driver: sql args: /etc/dovecot/dovecot-sql.conf socket: type: listen master: path: 
/var/run/dovecot/auth-master mode: 384 user: vmail group: vmail plugin: quota: dict:user:%LTd-%LTn:proxy::quota quote_rule: *:storage=100G dict: quota: mysql:/etc/dovecot/dovecot-dict-sql.conf ## dovecot-sql.conf driver = mysql connect = host=192.168.dd.dd dbname=xxx user=xxx password=xxx user_query=... password_query=... ## dovecot-dict-sql.conf connect = host=192.168.dd.dd dbname=xxx user=xxx password=xxx map { pattern = priv/quota/storage table = quota2 username_field = username value_field = bytes } map { pattern = priv/quota/messages table = quota2 username_field = username value_field = messages }
[Dovecot] CRAM-MD5 and proxy_maybe
When using proxy_maybe, CRAM-MD5 authentication fails when the connection is proxied. Is this expected behavior? Is proxy_maybe too simplified for this case? We're using SQL, so I could rewrite the query with IFs to fake proxy_maybe and return the password as NULL and nologin as Y, but if it works that way couldn't it work with proxy_maybe?

This works:

password_query = \
  SELECT NULL AS password, host, CONCAT(user,'@',domain) AS destuser, \
  'Y' AS nologin, 'Y' AS nodelay, 'Y' AS proxy \
  FROM accounts WHERE class='pop' AND domain='%d'

This doesn't work if proxied with CRAM-MD5 auth:

password_query = \
  SELECT \
  CONCAT(user,'@',domain) AS user, password, \
  host, 'Y' AS proxy_maybe, \
  target AS userdb_home, uid AS userdb_uid, gid AS userdb_gid \
  FROM accounts WHERE \
  class='pop' AND domain='%d' AND user='%n' \
  LIMIT 1

Thanks, David
Re: [Dovecot] CRAM-MD5 and proxy_maybe
Timo Sirainen wrote: On Nov 19, 2008, at 8:14 PM, David Jonas wrote: When using proxy_maybe, CRAM-MD5 authentication fails when the connection is proxied. Is this expected behavior? Is proxy_maybe too simplified for this case? Fails how? We're using SQL, so I could rewrite the query with IFs to fake proxy_maybe and return the password as NULL and nologin as Y, but if it works that way couldn't it work with proxy_maybe? This works: password_query = \ SELECT NULL AS password, host, CONCAT(user,'@',domain) AS destuser, \ 'Y' AS nologin, 'Y' AS nodelay, 'Y' AS proxy \ FROM accounts WHERE class='pop' AND domain='%d' So all servers are using this authentication? It works because it lets users log in using any password. This doesn't work if proxied with CRAM-MD5 auth: password_query = \ SELECT \ CONCAT(user,'@',domain) AS user, password, \ host, 'Y' AS proxy_maybe, \ target AS userdb_home, uid AS userdb_uid, gid AS userdb_gid \ FROM accounts WHERE \ class='pop' AND domain='%d' AND user='%n' \ LIMIT 1 The problem is that Dovecot doesn't know the plaintext password for logging into the remote server; it can't be extracted from CRAM-MD5 authentication. Are you storing plaintext passwords in the password field? If so, you could return the password as pass (as well as the password field itself). If you're not using plaintext passwords, you'll have to use a master password (see http://wiki.dovecot.org/PasswordDatabase/ExtraFields/Proxy). (I suppose in theory Dovecot could also implement CRAM-MD5 client functionality and use the stored CRAM-MD5 hash to log into the remote server, but this wouldn't work with many other auth mechanisms, such as DIGEST-MD5, so I don't want to waste time coding a CRAM-MD5-only solution.) Thanks Timo. I've been playing with the query for the past few hours and I have a workable solution, which turns out to be what you suggested (passing pass), since we have to store plaintext anyway. It took me a while to glean it from the docs. Thanks for the support.
We really appreciate it. Here's our query, for posterity:

SELECT \
  IF('%l' != host, host, NULL) AS host, \
  IF('%l' != host, 'Y', NULL) AS proxy, \
  IF('%l' != host, 'Y', NULL) AS nodelay, \
  IF('%l' != host, 'Y', NULL) AS nologin, \
  IF('%l' != host, [EMAIL PROTECTED], NULL) AS destuser, \
  IF('%l' != host, password, NULL) AS pass, \
  CONCAT(user,'@',domain) AS user, \
  password, \
  target AS userdb_home, uid AS userdb_uid, gid AS userdb_gid \
FROM accounts WHERE \
  class='pop' AND domain='%d' AND user='%n' \
LIMIT 1
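Timo's point about CRAM-MD5 in this thread is visible in the mechanism itself: the client answers a server challenge with an HMAC-MD5 digest keyed by the password, so only the digest crosses the wire and a proxy has nothing it can replay against the backend. A minimal sketch of the client side (the challenge string and credentials here are made up, not from the thread):

```python
import base64
import hashlib
import hmac

def cram_md5_response(username: str, password: str, challenge: bytes) -> bytes:
    # The client proves knowledge of the password by keying HMAC-MD5 with it;
    # the password itself is never transmitted.
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(("%s %s" % (username, digest)).encode())

challenge = b"<1896.697170952@postoffice.example.com>"  # hypothetical server challenge
resp = cram_md5_response("user@example.com", "secret", challenge)

# Only this username+digest pair crosses the wire. The plaintext password
# cannot be recovered from it, which is why the proxy can't log in to the
# backend on the user's behalf without a stored plaintext or master password.
print(resp)
```

This is why the workaround above returns the stored plaintext as `pass`: it gives the proxy a credential it can actually present to the backend.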
[Dovecot] New Dovecot server buildout review
Hello Everyone, I have the opportunity to do a new mail system build-out for about 20k email boxes over a few hundred domains. I already run a system like this, but this time I can do it right :). My thought on storage is to have a 12-drive JBOD system attached via SAS to a 3ware 9690SA RAID controller. Drives would be RAID'd (10 or 5) to appear as one volume for message storage. Dovecot would run on that box and deliver directly from Postfix. POP and IMAP connections would come in directly (no proxy). Postfix head boxes in front of this do anti-virus and SpamAssassin, so this box won't be doing any heavy crunching, just delivery and serving of messages. Does this sound reasonable? Are there details I'm missing? An obvious bottleneck I'm overlooking? Is anyone running a similar config with success? It all sounds very standard, but I haven't run something like this before. Thanks for the help! David
Re: [Dovecot] Trim trailing whitespace from username
Timo Sirainen wrote: On Fri, 2008-05-16 at 00:48 -0700, David Jonas wrote: Recently we changed Postfix to use Dovecot for our SASL authentication, and we ran into trouble with some of our clients having extraneous spaces at the end of their usernames. The quick fix was to add a space to username_chars. The slightly longer fix was a pretty simple patch to Dovecot. I put the trimming in auth_request_fix_username. I didn't think it warranted a full strfuncs function. If there is a better way to do this I'm all ears. I don't really like patching with my own code, even if I did essentially steal it from the kernel's strstrip(). How about this: http://hg.dovecot.org/dovecot-1.1/rev/15ddb7513e2d Then you can use auth_username_format = %Tu I spoke too soon. Dovecot still complains about the invalid character. While testing I had forgotten to update the config to remove the space from username_chars. I should have known, really, since the invalid-chars check is done before var_expand() in auth_request_fix_username(). Any other ideas? Adding space to the username_chars list doesn't seem like a security threat, but honestly I don't know much about that. David ### From the log: dovecot: auth(default): client in: AUTH 1 LOGIN service=smtp resp=ZGpvbmFzQHZpdGFsd2Vya3MuY29tIA== dovecot: auth(default): auth(?): Invalid username: [EMAIL PROTECTED] dovecot: auth(default): login(?): Username contains disallowed character: 0x20 dovecot: auth(default): client out: FAIL1 # dovecot -n # 1.1.rc5: /usr/local/dovecot-1.1/etc/dovecot-auth.conf ... disable_plaintext_auth: no ... auth default: mechanisms: login plain cram-md5 ... username_chars: [EMAIL PROTECTED] username_translation: %@ username_format: %LTu verbose: yes debug: yes debug_passwords: yes passdb: driver: sql args: /usr/local/dovecot-1.1/etc/dovecot-sql.conf userdb: driver: prefetch socket: type: listen client: path: /var/spool/postfix-smtp-auth/private/auth mode: 432 user: postfix group: postfix
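The disallowed 0x20 is visible in the log line itself: the base64 AUTH response quoted above decodes to a username with a trailing space.

```python
import base64

# The resp= blob from the log entry quoted above, verbatim.
resp = base64.b64decode("ZGpvbmFzQHZpdGFsd2Vya3MuY29tIA==")

print(resp[-1])               # 32 -- the disallowed character 0x20
print(resp.rstrip() == resp)  # False: the username has trailing whitespace
```

So the client (here a misconfigured mail client) really is sending the space; it is not something Dovecot adds during parsing.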
Re: [Dovecot] Trim trailing whitespace from username
Cassidy Larson wrote: If you're using MySQL for your database driver you can easily use the TRIM() function in your query to strip off leading and trailing whitespace characters. I do that, plus an LCASE() to force the usernames to lowercase in the query: http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_trim Yes, I tried that. MySQL (4.x) actually returns the same rows for SELECT * WHERE user='[EMAIL PROTECTED] ' and SELECT * WHERE user='[EMAIL PROTECTED]', so TRIM() is only necessary if the values are CONCAT'd. This is really just an issue with invalid chars in the username. And it's a rather small issue, but for some reason a ton of our clients who use Exchange all have spaces at the end of their usernames. As long as having a space in username_chars isn't going to open me up to any exploits (I can't imagine how), I'll stick with it. I spoke too soon. Dovecot still complains about the invalid character. While testing I had forgotten to update the config to remove the space from username_chars. I should have known, really, since the invalid-chars check is done before var_expand() in auth_request_fix_username(). Any other ideas? Adding space to the username_chars list doesn't seem like a security threat, but honestly I don't know much about that. David ### From the log: dovecot: auth(default): client in: AUTH 1 LOGIN service=smtp resp=ZGpvbmFzQHZpdGFsd2Vya3MuY29tIA== dovecot: auth(default): auth(?): Invalid username: [EMAIL PROTECTED] dovecot: auth(default): login(?): Username contains disallowed character: 0x20 dovecot: auth(default): client out: FAIL1 # dovecot -n # 1.1.rc5: /usr/local/dovecot-1.1/etc/dovecot-auth.conf ... disable_plaintext_auth: no ... auth default: mechanisms: login plain cram-md5 ...
username_chars: [EMAIL PROTECTED] username_translation: %@ username_format: %LTu verbose: yes debug: yes debug_passwords: yes passdb: driver: sql args: /usr/local/dovecot-1.1/etc/dovecot-sql.conf userdb: driver: prefetch socket: type: listen client: path: /var/spool/postfix-smtp-auth/private/auth mode: 432 user: postfix group: postfix -- No-IP.com
[Dovecot] Trim trailing whitespace from username
Recently we changed Postfix to use Dovecot for our SASL authentication, and we ran into trouble with some of our clients having extraneous spaces at the end of their usernames. The quick fix was to add a space to username_chars. The slightly longer fix was a pretty simple patch to Dovecot. I put the trimming in auth_request_fix_username. I didn't think it warranted a full strfuncs function. If there is a better way to do this I'm all ears. I don't really like patching with my own code, even if I did essentially steal it from the kernel's strstrip().

diff -u dovecot-1.1.rc5/src/auth/auth-request.c dovecot-1.1.rc5-patched/src/auth/auth-request.c
--- dovecot-1.1.rc5/src/auth/auth-request.c	2008-05-04 15:01:52.0 -0700
+++ dovecot-1.1.rc5-patched/src/auth/auth-request.c	2008-05-16 00:44:15.0 -0700
@@ -22,6 +22,7 @@
 
 #include <stdlib.h>
 #include <sys/stat.h>
+#include <ctype.h>
 
 struct auth_request *
 auth_request_new(struct auth *auth, const struct mech_module *mech,
@@ -750,6 +751,7 @@
 {
 	unsigned char *p;
 	char *user;
+	size_t size;
 
 	if (strchr(username, '@') == NULL &&
 	    request->auth->default_realm != NULL) {
@@ -759,6 +761,16 @@
 		user = p_strdup(request->pool, username);
 	}
 
+	/* Trim trailing whitespace from the username */
+	size = strlen(user);
+	if (size > 0) {
+		p = (unsigned char *)user + size - 1;
+		while (p != (unsigned char *)user && isspace(*p))
+			p--;
+		*(p + 1) = '\0';
+		p = NULL;
+	}
+
 	for (p = (unsigned char *)user; *p != '\0'; p++) {
 		if (request->auth->username_translation[*p & 0xff] != 0)
 			*p = request->auth->username_translation[*p & 0xff];
Re: [Dovecot] Trim trailing whitespace from username
Timo Sirainen wrote: On Fri, 2008-05-16 at 00:48 -0700, David Jonas wrote: Recently we changed Postfix to use Dovecot for our SASL authentication, and we ran into trouble with some of our clients having extraneous spaces at the end of their usernames. The quick fix was to add a space to username_chars. The slightly longer fix was a pretty simple patch to Dovecot. I put the trimming in auth_request_fix_username. I didn't think it warranted a full strfuncs function. If there is a better way to do this I'm all ears. I don't really like patching with my own code, even if I did essentially steal it from the kernel's strstrip(). How about this: http://hg.dovecot.org/dovecot-1.1/rev/15ddb7513e2d Then you can use auth_username_format = %Tu Ah, a much better place to put it. Applied cleanly, seems to be working well. Thanks! I've added it to the wiki: http://wiki.dovecot.org/Variables
Re: [Dovecot] Couldn't init INBOX: Can't sync mailbox: Messages keep getting expunged
Timo Sirainen wrote: I finally figured out what's happening: v1.0 writes expunges as plain expunged records to dovecot.index.log file. v1.1 treats such records as expunge requests, while the actual expunging is done by writing another expunged (ext) record. With v1.1 dovecot.index file isn't always required, because its contents can be produced by reading dovecot.index.log file, assuming that is the first generated log file (prev log file sequence=0 in header). So what happens is that for some reason v1.1 doesn't read dovecot.index file, but reads dovecot.index.log file and ignores all expunge records. Maildir new/ and cur/ mtimes are all the same as in index files, so Dovecot doesn't bother reading the maildir. I was able to reproduce this by deleting dovecot.index file, but not without deleting it. So if it exists and you still have this problem, I don't really know why it wouldn't have read it.. Anyway, this fixes the problem: http://hg.dovecot.org/dovecot-1.1/rev/b776f2b8d827 Ha! I spent all afternoon making you a beautiful reproduction of the error. And I was nearly finished, too. Applied the patch to rc5 and everything looks good. Thanks!
Re: [Dovecot] Couldn't init INBOX: Can't sync mailbox: Messages keep getting expunged
Timo Sirainen wrote: I finally figured out what's happening: v1.0 writes expunges as plain expunged records to dovecot.index.log file. v1.1 treats such records as expunge requests, while the actual expunging is done by writing another expunged (ext) record. With v1.1 dovecot.index file isn't always required, because its contents can be produced by reading dovecot.index.log file, assuming that is the first generated log file (prev log file sequence=0 in header). So what happens is that for some reason v1.1 doesn't read dovecot.index file, but reads dovecot.index.log file and ignores all expunge records. Maildir new/ and cur/ mtimes are all the same as in index files, so Dovecot doesn't bother reading the maildir. I was able to reproduce this by deleting dovecot.index file, but not without deleting it. So if it exists and you still have this problem, I don't really know why it wouldn't have read it.. Anyway, this fixes the problem: http://hg.dovecot.org/dovecot-1.1/rev/b776f2b8d827 Well, it looked good but pop3 processes started accumulating and pretty soon the load was over 50. No errors were getting logged. strace showed some processes waiting for disk and others being served their mail. IO-wait was bouncing off 100%. Applied the patch to a different box (exact same config) that had been running rc4 for about three days and it seems okay.
Re: [Dovecot] Couldn't init INBOX: Can't sync mailbox: Messages keep getting expunged
Timo Sirainen wrote: On Mon, 2008-05-05 at 17:46 -0700, David Jonas wrote: Anyway, this fixes the problem: http://hg.dovecot.org/dovecot-1.1/rev/b776f2b8d827 Well, it looked good, but pop3 processes started accumulating and pretty soon the load was over 50. No errors were getting logged. strace showed some processes waiting for disk and others being served their mail. IO-wait was bouncing off 100%. I don't think that patch has the ability to cause such a large performance problem. Sounds more like Dovecot is recalculating virtual message sizes for existing files. It should have looked up the values from dovecot.index.cache files, though. Or did you delete them? I didn't delete any files. Switching back to 1.0.2 (after some major cajoling), it's now running at a fine clip under load 1. I'll try it again. Perhaps it was a freak coincidence.
[Dovecot] 1.1rc4 file-offset-size unsupported?
Is --with-file-offset-size no longer supported in 1.1? # ./configure \ --prefix=/usr/local/dovecot-1.1 \ --with-mysql \ --with-file-offset-size=32 \ --with-ioloop=best \ --with-pop3d \ --with-ssl \ --with-deliver \ ... checking for _FILE_OFFSET_BITS value needed for large files... 64 ... Install prefix .. : /usr/local/dovecot-1.1 File offsets : 64bit I/O loop method . : epoll File change notification method . : dnotify Building with SSL support ... : yes (OpenSSL) Building with IPv6 support .. : yes Building with pop3 server ... : yes Building with mail delivery agent .. : yes Building with GSSAPI support : no Building with user database modules . : static prefetch passwd passwd-file checkpassword sql nss Building with password lookup modules : passwd passwd-file shadow pam checkpassword sql Building with SQL drivers : mysql # cat /etc/redhat-release Red Hat Enterprise Linux ES release 4 (Nahant Update 6) # uname -r 2.6.9-42.0.10.ELsmp
Re: [Dovecot] 1.1rc4 file-offset-size unsupported?
Timo Sirainen wrote: On Fri, 2008-05-02 at 14:57 -0700, David Jonas wrote: Is --with-file-offset-size no longer supported in 1.1? No. I thought no one cared, and I found this nice AC_SYS_LARGEFILE autoconf macro that did all the work for me, so I decided to use it. Hmm. I guess it would be possible to change the option to --enable-large-files and put the check inside an if block. It doesn't really matter to me. I think the reason it was in my ./configure was that I had problems going from 0.99 to 1.0 without it when moving from an old PIII to a Xeon. If dovecot auto-corrects for this I'll just go with the newer default (64). Thanks for the quick reply.
[Dovecot] Couldn't init INBOX: Can't sync mailbox: Messages keep getting expunged
Upon upgrading from 1.0.2 to 1.1rc4 I see this error for many of our users: Getting size of message UID=1 failed Couldn't init INBOX: Can't sync mailbox: Messages keep getting expunged Logging in with IMAP, I would see a bunch of messages with no subject or time and blank bodies (using Horde/IMP). Removing dovecot.index.log, dovecot.index.log.2, or dovecot-uidlist fixes the problem for the account (I chose dovecot.index.log); I imagine that forces a rebuild of the indexes. Removing other files in the directory didn't seem to make a difference, including dovecot.index or some arbitrary non-dovecot file. Is there a more conventional way around this problem?
[Dovecot] Dovecot 1.1b2: Enet_connect_unix(/usr/local/dovecot-1.1/var/run/dovecot/dict-server) failed: Permission denied
On startup of dovecot 1.1b2 I seem to have some permission trouble. Dovecot was configured with --prefix=/usr/local/dovecot-1.1 for testing purposes while dovecot 1.0.2 is in production. # cd /usr/local/dovecot-1.1/sbin/ # ./dovecot -F -c ../etc/dovecot.conf Enet_connect_unix(/usr/local/dovecot-1.1/var/run/dovecot/dict-server) failed: Permission denied # ls -la /usr/local/dovecot-1.1/var/run/dovecot/dict-server srwxrwxrwx 1 root root 0 Oct 2 12:18 /usr/local/dovecot-1.1/var/run/dovecot/dict-server # tail -f /var/log/maillog.err Oct 2 12:23:25 kelly-a dovecot: auth(default): net_connect_unix(/usr/local/dovecot-1.1/var/run/dovecot/auth-worker.12891) failed: Permission denied Oct 2 12:23:25 kelly-a dovecot: Auth process died too early - shutting down Oct 2 12:23:25 kelly-a dovecot: child 12891 (auth) returned error 89 Changing base_dir and setting the path in the plugin params doesn't seem to make a difference either. Any ideas of what I'm doing wrong here? Thanks, David # ./dovecot -c ../etc/dovecot.conf -n # 1.1.beta2: ../etc/dovecot.conf protocols: imap pop3 listen(default): *:9143 listen(imap): *:9143 listen(pop3): *:9110 ssl_disable: yes login_dir: /var/run/dovecot-1.1/login login_executable(default): /usr/local/dovecot-1.1/libexec/dovecot/imap-login login_executable(imap): /usr/local/dovecot-1.1/libexec/dovecot/imap-login login_executable(pop3): /usr/local/dovecot-1.1/libexec/dovecot/pop3-login login_greeting: postoffice.no-ip.com (1.1) login_log_format_elements: user=[EMAIL PROTECTED] method=%m rip=%r lip=%l %c login_process_per_connection: no verbose_proctitle: yes first_valid_uid: 89 mail_uid: vmail mail_gid: vmail mail_executable(default): /usr/local/dovecot-1.1/libexec/dovecot/imap mail_executable(imap): /usr/local/dovecot-1.1/libexec/dovecot/imap mail_executable(pop3): /usr/local/dovecot-1.1/libexec/dovecot/pop3 mail_plugins(default): quota imap_quota mail_plugins(imap): quota imap_quota mail_plugins(pop3): quota mail_plugin_dir(default): 
/usr/local/dovecot-1.1/lib/dovecot/imap mail_plugin_dir(imap): /usr/local/dovecot-1.1/lib/dovecot/imap mail_plugin_dir(pop3): /usr/local/dovecot-1.1/lib/dovecot/pop3 mail_log_prefix: %Us(%u)[%p]: pop3_uidl_format(default): %08Xu%08Xv pop3_uidl_format(imap): %08Xu%08Xv pop3_uidl_format(pop3): %f pop3_client_workarounds(default): pop3_client_workarounds(imap): pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh auth default: mechanisms: plain login digest-md5 cram-md5 user: nobody username_translation: %@ username_format: %Lu verbose: yes passdb: driver: sql args: /usr/local/dovecot-1.1/etc/dovecot-sql.conf userdb: driver: prefetch plugin: quota: dict:user::proxy::quota dict: quota: mysql:/usr/local/dovecot-1.1/etc/dovecot-dict-quota.conf
Re: [Dovecot] 1.1b1 initial pop3 login: assertion failed
Timo Sirainen wrote: On Fri, 2007-09-28 at 16:12 -0700, David Jonas wrote: Sep 28 15:51:14 kelly-a dovecot: POP3([EMAIL PROTECTED])[8112]: Fixed index file /home/vmail/x/x.com/xxx/Maildir/dovecot.index: first_recent_uid 0 -> 1 Fixed now: http://hg.dovecot.org/dovecot/rev/392a49f0c69a Sep 28 15:51:14 kelly-a dovecot: POP3([EMAIL PROTECTED])[8112]: file mail-index-view-sync.c: line 304 (mail_index_view_sync_begin): assertion failed: (view->index->map->hdr.messages_count >= ctx->finish_min_msg_count) I guess you don't have a core file from this crash? It probably had something to do with upgrading index files from v1.0, but it still shouldn't happen and I'd like to fix it. I dug around the filesystem and didn't see a core file hanging around. What do I need to do to get a core dump? I can set up a script to log into a bunch of accounts until it gets one that core dumps, if that'd be helpful. I just need to know how to make it produce the core and where the core might be written.
[Dovecot] 1.1b1 initial pop3 login: assertion failed
Currently running 1.0.2 with 1.1b1 listening on a different port. On first pop3 login on 1.1b1 I received an immediate disconnect (I was testing with telnet) with the log entries below. Subsequent logins work fine. Other accounts that I tried did not have the same problem, so this is probably isolated to some bad indexes or something, but I thought I should let Timo know. Regards, David Here is what I found in the logs: - Sep 28 15:51:14 kelly-a dovecot: POP3([EMAIL PROTECTED])[8112]: Fixed index file /home/vmail/x/x.com/xxx/Maildir/dovecot.index: first_recent_uid 0 -> 1 Sep 28 15:51:14 kelly-a dovecot: POP3([EMAIL PROTECTED])[8112]: file mail-index-view-sync.c: line 304 (mail_index_view_sync_begin): assertion failed: (view->index->map->hdr.messages_count >= ctx->finish_min_msg_count) Sep 28 15:51:14 kelly-a dovecot: POP3([EMAIL PROTECTED])[8112]: Raw backtrace: pop3 [0x80cd42e] -> pop3(i_fatal+0) [0x80cce93] -> pop3(mail_index_view_sync_begin+0x228) [0x80a91d2] -> pop3(index_mailbox_sync_init+0xac) [0x8093abe] -> pop3(maildir_storage_sync_init+0x11e) [0x80646a1] -> pop3(mailbox_sync_init+0x1b) [0x80bb428] -> pop3(mailbox_sync+0x18) [0x80bb49b] -> pop3 [0x80582f5] -> pop3(client_create+0x237) [0x80587ac] -> pop3 [0x805afe8] -> pop3(main+0x95) [0x805b0e4] -> /lib/tls/libc.so.6(__libc_start_main+0xd3) [0x477de3] -> pop3 [0x80581ed] Sep 28 15:51:14 kelly-a dovecot: child 8112 (pop3) killed with signal 6 Other information $ cat /etc/redhat-release Red Hat Enterprise Linux ES release 4 (Nahant Update 5) $ uname -r 2.6.9-42.0.10.ELsmp $ ./dovecot -n # 1.1.beta1: /usr/local/dovecot-1.1/etc/dovecot.conf base_dir: /var/run/dovecot-1.1/ protocols: imap pop3 listen(default): *:9143 listen(imap): *:9143 listen(pop3): *:9110 ssl_disable: yes login_dir: /var/run/dovecot-1.1/login login_executable(default): /usr/local/dovecot-1.1/libexec/dovecot/imap-login login_executable(imap): /usr/local/dovecot-1.1/libexec/dovecot/imap-login login_executable(pop3):
/usr/local/dovecot-1.1/libexec/dovecot/pop3-login login_greeting: postoffice.no-ip.com (1.1) login_log_format_elements: user=[EMAIL PROTECTED] method=%m rip=%r lip=%l %c login_process_per_connection: no verbose_proctitle: yes first_valid_uid: 89 mail_uid: vmail mail_gid: vmail mail_executable(default): /usr/local/dovecot-1.1/libexec/dovecot/imap mail_executable(imap): /usr/local/dovecot-1.1/libexec/dovecot/imap mail_executable(pop3): /usr/local/dovecot-1.1/libexec/dovecot/pop3 mail_plugin_dir(default): /usr/local/dovecot-1.1/lib/dovecot/imap mail_plugin_dir(imap): /usr/local/dovecot-1.1/lib/dovecot/imap mail_plugin_dir(pop3): /usr/local/dovecot-1.1/lib/dovecot/pop3 mail_log_prefix: %Us(%u)[%p]: pop3_uidl_format(default): %08Xu%08Xv pop3_uidl_format(imap): %08Xu%08Xv pop3_uidl_format(pop3): %f pop3_client_workarounds(default): pop3_client_workarounds(imap): pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh auth default: mechanisms: plain login digest-md5 cram-md5 user: nobody username_translation: %@ username_format: %Lu verbose: yes passdb: driver: sql args: /usr/local/dovecot-1.1/etc/dovecot-sql.conf userdb: driver: prefetch socket: type: listen client: path: /var/run/dovecot-1.1/auth-client mode: 432 plugin: quota: dict:user::proxy::quota dict: quota: mysql:/usr/local/dovecot-1.1/etc/dovecot-dict-quota.conf
Re: [Dovecot] o/s tuning for imap
Marcin Michal Jessa wrote: Russell E. Meek wrote: Quoting Ken A [EMAIL PROTECTED]: I'm switching from a pop3-only dovecot install to a pop3/imap install, and I'm wondering how many connections every 100 'normal' imap users might have/keep open? I'm wondering if I need to tweak any OS-related things, like time_wait, etc. Any pointers would be greatly appreciated. Thanks, Ken A. OS-related tweaks, probably not. However, you could utilize an imap proxy such as up-imapproxy, which if using FreeBSD is in ports. Apropos of proxies: is it possible to run dovecot as an IMAP proxy with load balancing, the same way it is possible with Courier and Cyrus? If not, is it on the TODO list? http://wiki.dovecot.org/HowTo/ImapProxy Works quite well here.
Re: [Dovecot] Quota plugin, Maildir, dict backend loop problem
Timo Sirainen wrote: On 25.7.2007, at 0.09, David Jonas wrote: I've been working with the quota plugin(s) the past few days and have been having some real trouble. Only a few entries are appearing in the database, a couple more when I restarted dovecot. None of the current entries were being updated either. .. INSERT INTO quota (current, path, username) VALUES (0, 'quota/storage', '[EMAIL PROTECTED]') ON DUPLICATE KEY UPDATE current = current + 0; You do have a unique index key on path+username and nothing else, right? Sure do. Just like the docs. It worked with me last I tried.. I think I found the problem. I had an aggressive exported CFLAGS line from a recent memcache compile: CFLAGS=-O3 -march=pentium4 -mmmx -msse2 Recompiled after clearing CFLAGS and it seems to work fine now. Guess something important got optimized away. I'm having some other issues, like the entries being wildly inaccurate (1.6G for an account with no mail) and some updating issues. But quotas aren't overly pressing, so should I just wait for v1.1? Is v1.1alpha1 stable enough for some production time? I haven't seen much noise about it on the list. Perhaps I should toss it in the water... Thanks for your help, David
Re: [Dovecot] Quota transaction bug?
Jasper Bryant-Greene wrote:
> On Tue, Jul 24, 2007 at 02:09:23PM -0700, David Jonas wrote:
>> SELECT current FROM quota WHERE ... 513965019
>> BEGIN
>> INSERT INTO quota (current, ...
>> COMMIT ?? (error_r appears to be (null))
> Unrelated to the original post, but the above would appear to be a bug to me. Because the SELECT is done before the transaction starts, the value in the INSERT which is based on the SELECT may no longer be consistent with the actual value in that table.

I don't think it uses the value. The INSERT statement that comes next does an UPDATE if the key exists. The value represented as 0 here is the change in the quota value:

INSERT INTO quota (current, path, username) VALUES (0, 'quota/storage', '[EMAIL PROTECTED]') ON DUPLICATE KEY UPDATE current = current + 0;

So it's only added to or inserted, never changed then replaced.

> I had a look through the code as I wanted to include a patch to move the SELECT into the transaction with this email, but I'm not familiar enough with the Dovecot codebase to find the code that performs the above SQL. If anyone can point me in the right direction I'd be happy to submit a patch.
>
> Jasper
Re: [Dovecot] Quota transaction bug?
Jasper Bryant-Greene wrote:
> On Tue, Jul 24, 2007 at 02:46:11PM -0700, David Jonas wrote:
>> Jasper Bryant-Greene wrote:
>>> On Tue, Jul 24, 2007 at 02:09:23PM -0700, David Jonas wrote:
>>>> SELECT current FROM quota WHERE ... 513965019 BEGIN INSERT INTO quota (current, ... COMMIT ?? (error_r appears to be (null))
>>> Unrelated to the original post, but the above would appear to be a bug to me. Because the SELECT is done before the transaction starts, the value in the INSERT which is based on the SELECT may no longer be consistent with the actual value in that table.
>> I don't think it uses the value. The INSERT statement that comes next does an UPDATE if the key exists. The value represented as 0 here is the change in the quota value: INSERT INTO quota (current, path, username) VALUES (0, 'quota/storage', '[EMAIL PROTECTED]') ON DUPLICATE KEY UPDATE current = current + 0; So it's only added to or inserted, never changed then replaced.
> OK, so absolutely nothing about the INSERT (including whether or not it is performed) depends on anything about the result from the SELECT?
>
> Jasper

I won't pretend to know for sure, but as far as I can tell, no. I never poked around to find out why the SELECT is done. I'm guessing it was done to see if the quota needed to be recalculated (negative values cause a recalculation, according to the comments).
Re: [Dovecot] Will pay $500 towards a Dovecot feature
Frank Cusack wrote:
> On May 23, 2007 11:54:20 AM -0700 Marc Perkel [EMAIL PROTECTED] wrote:
>> IMAP establishes a connection between the client and the server. Wouldn't it be great if it could be a conduit to let custom Thunderbird plugins talk to a custom server application over the IMAP interface? For example, personalized server settings. Suppose, for example, I want Thunderbird to edit my server-side whitelists or blacklists or any other setting. Wouldn't it be nice if IMAP supported these changes? The connection is made. It's a secure connection that's been authenticated. Let's use it!
> Sounds like a job for ACAP.

It's rumoured to be the most complex Internet Engineering Task Force-designed protocol ever...

--
http://en.wikipedia.org/wiki/Application_Configuration_Access_Protocol
Re: [Dovecot] logfile consistency
David Lee wrote:
> We do some routine logfile (syslog) gathering and analysis. I've been looking at extending this to parse the syslog output of Dovecot. Hmmm...
> ...
> For instance:
> 1. All lines, including deliver's, to begin with "dovecot:";
> 2. The "IMAP(): Disconnected" to become "imap: disconnected user=";
> ...
> Overall this would make it more consistently amenable to perl-like pattern processing, at least with a reasonably hierarchical structure to the messages. Perhaps something like:
>
> dovecot: subprogram: event, key1=value1, key2=value2 ...
>
> where: subprogram is {imap,pop,deliver,...}; event is {login,disconnected,...}; and one of the key=value pairs will usually be user=. Or perhaps something similar to postfix, like dovecot/deliver[pid]:
>
> That would really make post-processing of logging information (whether offline, or 'live' via piped syslog) considerably easier.

I strongly agree. I've written some nice graphing (rrdtool) scripts and they would have been much simpler with a standard syslog format. Though really, it's not that big of a deal.
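The proposed `dovecot: subprogram: event, key=value` shape would indeed be trivial to consume with perl-like patterns. A quick sketch in Python (the sample line is made up to match David Lee's proposal; no Dovecot version actually emits this format):

```python
import re

# One regex for the hypothetical "dovecot: subprogram: event, k=v, k=v" shape.
LINE = re.compile(r"^dovecot: (?P<prog>\w+): (?P<event>\w+)(?:, (?P<kv>.*))?$")

def parse(line):
    """Return (subprogram, event, {key: value}) or None if the line doesn't match."""
    m = LINE.match(line)
    if not m:
        return None
    fields = dict(
        item.split("=", 1)
        for item in (m.group("kv") or "").split(", ")
        if item
    )
    return m.group("prog"), m.group("event"), fields

prog, event, fields = parse("dovecot: imap: disconnected, user=jdoe, rip=10.0.0.5")
print(prog, event, fields["user"])  # imap disconnected jdoe
```

With a fixed prefix and flat key=value pairs, the same three-line parser covers every subprogram and event, which is exactly the appeal of the format for rrdtool-style scripts.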