Re: [Dovecot] ssl_cert_username_field and subjectAltName?
On Tue, Mar 20, 2012 at 02:55:12PM +0100, Nicolas KOWALSKI wrote: Does Dovecot support the Subject Alternative Name email value [1] as ssl_cert_username_field? If so, how should it be specified in the configuration? Well, I just found the wiki states no: The text is looked up from the subject DN's specified field (http://wiki2.dovecot.org/SSL/DovecotConfiguration) Sorry for the noise, -- Nicolas
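For contrast, a minimal sketch of what the documented behaviour does support: taking the username from a field of the certificate's subject DN (setting names as in Dovecot 2.x; the chosen field is illustrative):

```conf
# Sketch of a dovecot.conf excerpt: authenticate by client certificate,
# username taken from a subject-DN field. Per the wiki text quoted above,
# subjectAltName is NOT a valid choice for ssl_cert_username_field.
ssl_verify_client_cert = yes
auth_ssl_username_from_cert = yes
ssl_cert_username_field = commonName
```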
Re: [Dovecot] dovecot runs from shell, but not xinetd
On 3/20/2012 11:26 PM, Mark Jeghers wrote: Hi Stan Afraid it did not help. Here is what I got: *** entered into a telnet session... user ann +OK pass -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2012-03-20 21:16:05] Connection closed by foreign host. [root@t4pserver2 mailpop3]# *** resulted in maillog... Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: passwd-file(ann,::1): lookup: user=ann file=/etc/passwd.dovecot Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: client out: OK#0112#011user=ann Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: master in: REQUEST#0113180593153#01113546#0112#0116c9a0569dcd246a9f9e7a94dbe852843 Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: passwd(ann,::1): lookup Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: master out: USER#0113180593153#011ann#011system_groups_user=ann#011uid=501#011gid=501#011home=/home/ann Mar 20 21:16:05 t4pserver2 dovecot: pop3-login: Login: user=ann, method=PLAIN, rip=::1, lip=::1, mpid=13549, secured Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Effective uid=501, gid=501, home=/home/ann Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: fs: root=/var/spool/mailpop3, index=, control=, inbox=/var/spool/mailpop3/ann Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Namespace : Using permissions from /var/spool/mailpop3: mode=0777 gid=-1 Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Namespace : Using permissions from /var/spool/mailpop3: mode=0777 gid=-1 Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: Couldn't open INBOX: Internal error occurred. Refer to server log for more information. 
[2012-03-20 21:16:05] Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0 *** file permissions... [root@t4pserver2 mailpop3]# ls -al total 248652 drwxrwxrwx. 2 root mail 4096 Mar 20 21:11 . drwxr-xr-x. 17 root root 4096 Mar 18 18:22 .. -rw-rw-r--. 1 ann mail 58739 Mar 17 04:26 ann -rw-rw-r--. 1 annphone mail 2708345 Mar 17 05:22 annphone -rw-rw-r--. 1 root mail 127272960 Mar 18 18:28 backups.tar -rw-rw-r--. 1 crimsonblues mail 327563 Dec 3 14:38 crimsonblues -rw-rw-r--. 1 mark mail 0 Mar 18 13:09 mark -rw-rw-r--. 1 markphone mail 124147068 Mar 18 04:21 markphone -rw-rw-r--. 1 nathan mail 5119 Dec 22 18:52 nathan -rw-rw-r--. 1 root mail 0 Mar 18 13:13 root -rw-rw-r--. 1 testuser mail 58739 Mar 18 18:42 testuser -rw-rw-r--. 1 tim mail 16212 Mar 18 15:51 tim My CentOS installation created a user mail so I am hesitant to remove it, but it is no longer in use here. Any other ideas? What user does dovecot run as in the shell? Under xinetd? -- Stan
[Dovecot] ldap userdb warning in v2.1.1
Hi, I've upgraded from 2.0.13 to 2.1.1 and when I started the service, I got the following warning: Mar 21 10:07:23 imapserver dovecot: master: Dovecot v2.1.1 starting up (core dumps disabled) Mar 21 10:08:17 imapserver dovecot: auth: Warning: ldap: Ignoring changed user_attrs in /etc/dovecot/dovecot-passdb-ldap.conf, because userdb ldap not used. (If this is intentional, set userdb_warning_disable=yes) I didn't see such warnings in 2.0.13. I guess I should/could remove the user_attrs line from dovecot-passdb-ldap.conf because it's not needed? (I could also set userdb_warning_disable=yes as advised, but I'm trying to figure out the real cause of the warning.) The config follows below. Thanks, Nick = protocols = imap pop3 mail_location = maildir:~/Maildir/ mail_gid = 502 mail_uid = 502 auth_mechanisms = plain login auth_username_format = %Lu auth_verbose = yes disable_plaintext_auth = no mail_plugins = quota protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota imap_quota } protocol pop3 { mail_max_userip_connections = 3 mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh pop3_uidl_format = %08Xu%08Xv } protocol lda { auth_socket_path = /var/run/dovecot/auth-master info_log_path = log_path = mail_plugins = quota postmaster_address = sysad...@example.com sendmail_path = /usr/lib/sendmail } userdb { args = /etc/dovecot/dovecot-usrdb-ldap.conf driver = ldap } passdb { args = /etc/dovecot/dovecot-passdb-ldap.conf driver = ldap } plugin { quota = maildir:User quota quota_rule = *:storage=4G quota_rule2 = Trash:storage=+3%% quota_warning = storage=75%% quota-warning 75 %u quota_warning2 = storage=90%% quota-warning 90 %u } service quota-warning { executable = script /opt/mail1.sh user = vmail unix_listener quota-warning { user = vmail } } service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-master { group = vmail mode = 0660 user = vmail } user = 
root } service imap-login { service_count = 1 vsz_limit = 64 M } service pop3-login { service_count = 1 vsz_limit = 64 M } ssl_ca = /etc/pki/tls/certs/chain-650.pem ssl_cert = /etc/pki/tls/certs/cert-650.pem ssl_key = /etc/pki/tls/private/key-650.pem syslog_facility = local1 = and dovecot-usrdb-ldap.conf is identical to dovecot-passdb-ldap.conf: = hosts = localhost tls = no base = ou=people, dc=example, dc=com scope = onelevel ldap_version = 3 dn = uid=authenticate,ou=System,dc=example,dc=com dnpass = secret auth_bind = yes user_filter = (uid=%u) pass_filter = (uid=%u) pass_attrs = uid=user,userPassword=password auth_bind_userdn = uid=%u,ou=people,dc=example,dc=com user_attrs = roomNumber=quota_rule=*:bytes=%$,uid=home=/home/vmail/%u iterate_filter = (objectClass=*) =
Re: [Dovecot] auth tcp socket, Authentication client gave a PID 7542 of existing connection
On 19.3.2012, at 21.16, Alex Ha wrote: dovecot: auth: Error: BUG: Authentication client gave a PID 7542 of existing connection Oh, right, PIDs of course aren't unique when you're using multiple servers. Try if the attached patch fixes your troubles. If it does, I'll commit it to hg. Thanks Timo! I will try the patch and report to you. Hi Timo! I tried the patch with 2.0.19 and the dovecot error messages disappeared. OK, it's going to be included in v2.1.3 and v2.0.20 (if that ever gets released). I still get a lot of these Postfix warnings: SASL LOGIN authentication failed: Connection lost to authentication server but only for IPs which tried a SASL brute-force attack. Connection lost to authentication server Could this be because of the Dovecot auth penalties? So far I did not get any complaints from users. The auth penalties wait for max. 17 seconds, I think. Looks like Postfix has a timeout of 10 seconds. You could disable auth penalties, or perhaps Postfix should use a 20-second limit.
Re: [Dovecot] auth tcp socket, Authentication client gave a PID 7542 of existing connection
On 2012-03-21 7:48 AM, Timo Sirainen t...@iki.fi wrote: On 19.3.2012, at 21.16, Alex Ha wrote: dovecot: auth: Error: BUG: Authentication client gave a PID 7542 of existing connection Oh, right, PIDs of course aren't unique when you're using multiple servers. Try if the attached patch fixes your troubles. If it does, I'll commit it to hg. Thanks Timo! I will try the patch and report to you. I tried the patch with 2.0.19 and the dovecot error messages disappeared. OK, it's going to be included in v2.1.3 and v2.0.20 (if that ever gets released). Presumably you mean 2.1.4 (since 2.1.3 is already released)? -- Best regards, Charles
Re: [Dovecot] dovecot runs from shell, but not xinetd
On 20.3.2012, at 20.29, Mark Jeghers wrote: Below is my config. When I run dovecot from xinetd, I get these errors in the log: You can't run Dovecot v2.x via inetd. You could run it via systemd though.
Re: [Dovecot] auth tcp socket, Authentication client gave a PID 7542 of existing connection
On 21.3.2012, at 13.55, Charles Marcus wrote: On 2012-03-21 7:48 AM, Timo Sirainen t...@iki.fi wrote: On 19.3.2012, at 21.16, Alex Ha wrote: dovecot: auth: Error: BUG: Authentication client gave a PID 7542 of existing connection Oh, right, PIDs of course aren't unique when you're using multiple servers. Try if the attached patch fixes your troubles. If it does, I'll commit it to hg. Thanks Timo! I will try the patch and report to you. I tried the patch with 2.0.19 and the dovecot error messages disappeared. OK, it's going to be included in v2.1.3 and v2.0.20 (if that ever gets released). Presumably you mean 2.1.4 (since 2.1.3 is already released)? Ah, yes. :)
Re: [Dovecot] ldap userdb warning in v2.1.1
On 21.3.2012, at 11.00, Nikolaos Milas wrote: Mar 21 10:07:23 imapserver dovecot: master: Dovecot v2.1.1 starting up (core dumps disabled) Mar 21 10:08:17 imapserver dovecot: auth: Warning: ldap: Ignoring changed user_attrs in /etc/dovecot/dovecot-passdb-ldap.conf, because userdb ldap not used. (If this is intentional, set userdb_warning_disable=yes) I didn't see such warnings in 2.0.13. I guess I should/could remove the user_attrs line from dovecot-passdb-ldap.conf because it's not needed? Hmm. Yes, if dovecot-usrdb-ldap.conf is a separate file from dovecot-passdb-ldap.conf you can just remove it. But this reminds me that in several places I've suggested making one of them a symlink to the other, and you can't really do that then. Perhaps I'll need to remove this warning, or maybe make it recognize the symlink case. Anyway I added it for both LDAP and SQL hoping that it would reduce questions like: I changed user_attrs, but it doesn't do anything!
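Concretely, either of these should quiet the warning (a sketch based only on the message text and the poster's filenames):

```conf
# Option 1: delete the user_attrs line from dovecot-passdb-ldap.conf,
# since only dovecot-usrdb-ldap.conf is actually used as a userdb.
#
# Option 2: keep the two files identical (or symlinked) and disable the
# check inside dovecot-passdb-ldap.conf, as the warning itself suggests:
userdb_warning_disable = yes
```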
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
On 20.3.2012, at 16.55, Patrick Domack wrote: but in .Trash/cur since I upgraded from 2.0.19 to 2.1 they have double S and W tags. 1331941500.M220929P17982.5013,S=24845,W=25526,S=24845,W=25526:2,Sa This is happening for all folder moves. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/3599790da3d7
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
Thanks, applied it to 2.1.3 and going to test. You didn't even give me enough time to look at the source myself to find the issue. Quoting Timo Sirainen t...@iki.fi: On 20.3.2012, at 16.55, Patrick Domack wrote: but in .Trash/cur since I upgraded from 2.0.19 to 2.1 they have double S and W tags. 1331941500.M220929P17982.5013,S=24845,W=25526,S=24845,W=25526:2,Sa This is happening for all folder moves. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/3599790da3d7
[Dovecot] sysconfdir deprecated
The purpose of any build script's --sysconfdir option is to tell configure where the resulting binaries should look for their configuration file(s). Dovecot 2.1.3 seems to insist that this directory is now /etc/dovecot/, ignoring --sysconfdir=/etc as honoured in 1.2.x and the majors before that. Is this a bug? If not, then I see no point in sysconfdir any more and it should be removed, if Dovecot deliberately ignores what it is told to use.
Re: [Dovecot] sysconfdir deprecated
On 21.3.2012, at 15.26, Noel Butler wrote: The purpose of any build script's --sysconfdir option is to tell configure where the resulting binaries should look for their configuration file(s). Dovecot 2.1.3 seems to insist that this directory is now /etc/dovecot/, ignoring --sysconfdir=/etc as honoured in 1.2.x and the majors before that. Is this a bug? If not, then I see no point in sysconfdir any more and it should be removed, if Dovecot deliberately ignores what it is told to use. --sysconfdir=/etc uses /etc/dovecot/ --sysconfdir=/opt/dovecot/etc uses /opt/dovecot/etc/dovecot/ There is now always the dovecot/ suffix, but the /etc part is still configurable.
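The mapping Timo describes can be sketched as a tiny helper (the function name is made up; it just mirrors the rule above):

```shell
# Dovecot 2.x rule: a dovecot/ component is always appended to whatever
# --sysconfdir was given at configure time.
effective_confdir() {
    # $1 is the --sysconfdir value; strip any trailing slash, append dovecot/
    printf '%s/dovecot\n' "${1%/}"
}

effective_confdir /etc               # prints /etc/dovecot
effective_confdir /opt/dovecot/etc   # prints /opt/dovecot/etc/dovecot
```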
[Dovecot] dovecot 2.0.19 Panic: file mail-index-sync-ext.c: line 209 (sync_ext_reorder): assertion failed: (offset < (uint16_t)-1)
Had a user who couldn't access his INBOX: Mar 21 09:21:17 penguina dovecot: imap([USER]): Panic: file mail-index-sync-ext.c: line 209 (sync_ext_reorder): assertion failed: (offset < (uint16_t)-1) Mar 21 09:21:17 penguina dovecot: imap([USER]): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0 [0x342683c660] -> /usr/lib/dovecot/libdovecot.so.0 [0x342683c6b6] -> /usr/lib/dovecot/libdovecot.so.0 [0x342683bb73] -> /usr/lib/dovecot/libdovecot-storage.so.0 [0x3426c966a8] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_ext_intro+0x240) [0x3426c979c0] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_record+0x401) [0x3426c99151] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x245) [0x3426c99c55] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_map+0x71b) [0x3426c8afbb] -> /usr/lib/dovecot/libdovecot-storage.so.0 [0x3426c85d8b] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_open+0x1ce) [0x3426c8617e] -> /usr/lib/dovecot/libdovecot-storage.so.0(index_storage_mailbox_open+0xb5) [0x3426c4d865] -> /usr/lib/dovecot/libdovecot-storage.so.0 [0x3426c75eab] -> /usr/lib/dovecot/libdovecot-storage.so.0 [0x3426c31006] -> dovecot/imap [hdtodd 10.245.30.58 SELECT](cmd_ Stack trace made it look like it was the INBOX, so I deleted the index files for his INBOX and everything was OK. 
doveconf -n: # OS: Linux 2.6.18-274.18.1.el5 x86_64 Red Hat Enterprise Linux Server release 5.8 (Tikanga) auth_gssapi_hostname = penguina.uvm.edu auth_krb5_keytab = /etc/krb5.keytab.dovecot auth_master_user_separator = * auth_mechanisms = plain login gssapi base_dir = /var/run/dovecot/ default_process_limit = 250 first_valid_uid = 50 lock_method = flock login_trusted_networks = [REDACTED] mail_location = mbox:~/mail:INBOX=/var/spool/mail/%1u/%1.1u/%u mail_max_lock_timeout = 30 secs mail_max_userip_connections = 100 mbox_read_locks = flock mbox_write_locks = flock mmap_disable = yes namespace { inbox = yes location = prefix = separator = / type = private } namespace { hidden = yes list = no location = prefix = mail/ separator = / type = private } namespace { hidden = yes list = no location = prefix = ~/mail/ separator = / type = private } namespace { hidden = yes list = no location = prefix = ~%u/mail/ separator = / type = private } passdb { args = /etc/dovecot/passwd.masterusers driver = passwd-file master = yes } passdb { driver = pam } service imap { process_limit = 4096 } service lmtp { client_limit = 1 inet_listener lmtp { port = 24 } } ssl_cert = [REDACTED] ssl_key = [REDACTED] userdb { driver = passwd } verbose_proctitle = yes Any questions/suggestions welcome. Jim
Re: [Dovecot] dovecot 2.0.19 Panic: file mail-index-sync-ext.c: line 209 (sync_ext_reorder): assertion failed: (offset < (uint16_t)-1)
On 21.3.2012, at 15.53, Jim Lawson wrote: Had a user who couldn't access his INBOX: Mar 21 09:21:17 penguina dovecot: imap([USER]): Panic: file mail-index-sync-ext.c: line 209 (sync_ext_reorder): assertion failed: (offset < (uint16_t)-1) I kind of remember that this was fixed by http://hg.dovecot.org/dovecot-2.1/rev/b4d8e950eb9d but I'm not entirely sure. I guess I should have included in the commit the error message it fixed. Stack trace made it look like it was the INBOX, so I deleted the index files for his INBOX and everything was OK. If it happens again, get a copy of the indexes.
Re: [Dovecot] squat not working in 2.1
On 2012-02-29 9:30 AM, Ralf Hildebrandt ralf.hildebra...@charite.de wrote: * Morten Stevensmstev...@imt-systems.com: This is a Fedora-specific problem, because clucene (build requirement) is not correctly packaged. Well, debian showed the same packaging (wrong place). I just attempted to update to 2.1.3 on gentoo and received the same error: /usr/include/CLucene/SharedHeader.h:18:36: fatal error: CLucene/clucene-config.h: No such file or directory So, is this also a packaging error that I need to report to gentoo? -- Best regards, Charles
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
* Timo Sirainen t...@iki.fi: On 20.3.2012, at 16.55, Patrick Domack wrote: but in .Trash/cur since I upgraded from 2.0.19 to 2.1 they have double S and W tags. 1331941500.M220929P17982.5013,S=24845,W=25526,S=24845,W=25526:2,Sa This is happening for all folder moves. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/3599790da3d7 That doesn't seem to work: Mar 21 15:32:50 postamt dovecot: imap(jkamp): Error: Maildir filename has wrong S value, renamed the file from /home/j/k/jkamp/Maildir/cur/1330501473.M742455P30506.postamt.charite.de,S=36307:2,S to /home/j/k/jkamp/Maildir/cur/1330501473.M742455P30506.postamt.charite.de,S=36307:2,S Mar 21 15:32:50 postamt dovecot: imap(jkamp): Error: read(/home/j/k/jkamp/Maildir/cur/1330501473.M742455P30506.postamt.charite.de,S=36307:2,S) failed: Input/output error (uid=5270) It's renaming itself to itself again? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebra...@charite.de | http://www.charite.de
[Dovecot] Advice for new dovecot / imap proxy? setup
Hello list. I'm planning new mail servers for our company's customers to replace the oldish Courier-IMAP based one; we already started to deploy some mail accounts on a dovecot-2.0 server as an early test. I'd like to implement the new system with dovecot-2 (I'll probably go straight to dovecot-2.1.x) and I'd like to get it right from the beginning, so I'm here asking for some advice. The issue I'm investigating right now is how to manage a single IMAP / POP / SMTP / webmail entry point for multiple mail servers... in other words an IMAP proxy. It would be desirable for multiple reasons: - graceful migration from the current system: we'd make the mailserver hostname point to the proxy (along with its SSL certificates) and then the proxy would route each domain to the correct IMAP non-ssl server on our LAN. No need to update customers' system configurations, and we can move one domain at a time from the old to the new server, behind the scenes - be ready for similar migrations in the future (eg. right now we're still keeping the imap servers with the qmail MTA, but we'd like to switch to postfix+dovecot in the future) - be ready for sharding mail domains on multiple IMAP servers (if/when the current hardware reaches its capacity or needs to be swapped out for new gear) - be ready to serve traffic over IPv6 without touching our precious mailbox servers - isolate the mailbox servers from direct external access and just run IMAP on them; let other systems run ssl, pop3, smtp, webmail, etc... Ideally the 'proxy' system would run dovecot imap and pop3 (SSL protected) and Roundcube webmail (PHP, on https) and just speak IMAP to the underlying mail servers on our internal LAN. 
We'd like to support all the recent IMAP goodies to make modern users happy (IMAP IDLE, LEMONADE, etc) and possibly implement Maildir quota on the new backend mailbox server to improve our operations (currently we just run du in a cronjob once a day on the current mailserver; IMAP clients, including the webmail, do not know about quota and thus cannot show the amount of free space). In addition to that, customers will hit the SMTP server running on that 'proxy' system, and it is good to keep its configuration separate from the SMTP server of the actual mail servers (which has a different configuration and is restricted to accept connections only from our MX systems and not from outside sources). I'd like to know if that plan sounds reasonable or if there's something stupid in it. Also, is the proxy going to support all kinds of IMAP stuff of the backend server (IDLE, CONDSTORE, Maildir quota, immediate notification of IDLE clients thanks to linux inotify, etc...) or will it limit me somehow? thanks, -- Luca Lesinigo
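For the routing part, Dovecot itself can act as the proxy: proxying is driven by extra fields returned from a passdb lookup. A hedged sketch under assumed names (the SQL file, table, and column names are invented for illustration):

```conf
# dovecot.conf on the proxy box (sketch):
passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql-proxy.conf.ext   # hypothetical file
}
# dovecot-sql-proxy.conf.ext: accept the login without checking the
# password locally (nopassword) and forward it to a per-domain backend
# via the proxy/host extra fields:
# password_query = SELECT NULL AS password, 'Y' AS nopassword, \
#     backend_host AS host, 'Y' AS proxy \
#     FROM domains WHERE domain = '%d'
```

With this shape, moving one domain at a time means changing its backend_host row, which matches the graceful-migration goal above.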
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
On 21.3.2012, at 16.33, Ralf Hildebrandt wrote: but in .Trash/cur since I upgraded from 2.0.19 to 2.1 they have double S and W tags. 1331941500.M220929P17982.5013,S=24845,W=25526,S=24845,W=25526:2,Sa This is happening for all folder moves. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/3599790da3d7 That doesn't seem to work: It fixed only the duplicate S= and W= values. Mar 21 15:32:50 postamt dovecot: imap(jkamp): Error: Maildir filename has wrong S value, renamed the file from /home/j/k/jkamp/Maildir/cur/1330501473.M742455P30506.postamt.charite.de,S=36307:2,S to /home/j/k/jkamp/Maildir/cur/1330501473.M742455P30506.postamt.charite.de,S=36307:2,S Mar 21 15:32:50 postamt dovecot: imap(jkamp): Error: read(/home/j/k/jkamp/Maildir/cur/1330501473.M742455P30506.postamt.charite.de,S=36307:2,S) failed: Input/output error (uid=5270) It's renaming itself to itself again? Hmm. Yeah, this is a bit problematic for compressed mails. If the S=size isn't correct, Dovecot fixes it by stat()ing the file and using it as the size. And that's of course wrong. Also Dovecot can't simply remove the S=size, because the current Maildir code assumes that it always exists for compressed mails. There's no easy and efficient way to fix this.. Maybe you could just manually rename the files to have correct S=size? :) zcat file | wc -c should give the right size.
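Timo's suggestion could be scripted roughly like this (an untested sketch: it assumes plain gzip-compressed files, names of the form base,S=size:flags without a W= field, and that Dovecot is stopped while renaming; back up first):

```shell
#!/bin/sh
# Recompute S= from the decompressed size and rename the file in place.
# Purely illustrative, per Timo's zcat|wc suggestion above.
fix_s_value() {
    f=$1
    real=$(gzip -cd "$f" | wc -c)     # decompressed size in bytes
    real=$((real))                    # strip any wc padding
    base=${f%%,S=*}                   # everything before ,S=
    rest=${f#*,S=}                    # old size plus optional :flags suffix
    case $rest in
        *:*) new="${base},S=${real}:${rest#*:}" ;;
        *)   new="${base},S=${real}" ;;
    esac
    [ "$f" = "$new" ] || mv -- "$f" "$new"
    printf '%s\n' "$new"
}
```

It could then be run over a folder with something like: for f in ~/Maildir/cur/*,S=*; do fix_s_value "$f"; done (path illustrative).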
Re: [Dovecot] 2.1.2 (pop3|imap)-login crash
Hi, On 20.3.2012, at 11.09, Luca Palazzo wrote: Hi Timo, hi all, after upgrading my server (both backends and load balancer) to 2.1.2 (from 2.0.17), I'm getting a lot of login-process crashes on the load balancer. 0xb77cd176 in ssl_proxy_is_handshaked (proxy=0x0) at ssl-proxy-openssl.c:710 710 { (gdb) bt #0 0xb77cd176 in ssl_proxy_is_handshaked (proxy=0x0) at ssl-proxy-openssl.c:710 #1 0xb77c7295 in client_get_log_str (client=0x807b830, msg=0x804e290 proxy(aaa...@dd.it): disconnecting x.x.x.x (Disconnected by server)) at client-common.c:469 #2 0xb77c73c6 in client_log (client=0x807b830, msg=0x804e290 proxy(...@dd.it): disconnecting x.x.x.x (Disconnected by server)) at client-common.c:553 #3 0xb77c9a45 in login_proxy_free_reason (_proxy=<value optimized out>, reason=0x804e248 Disconnected by server) at login-proxy.c:373 Interesting. This happens because client_destroy() has already been called at the time login_proxy_free_reason() gets called. I'll need to look further into it, but for a quick workaround use the attached patch.
Re: [Dovecot] mdbox and pop3 locking
On 3/21/2012 6:59 AM, Timo Sirainen wrote: On 20.3.2012, at 17.26, Ken A wrote: With mdbox, what does dovecot lock when pop3_lock_session(pop3): yes? Specifically, I'm wondering if Dovecot LDA is able to deliver mail when a session is locked, if using mdbox, or if it will tempfail until the session is unlocked? Unfortunately it will tempfail. This is something I'm planning on changing soon. There should be a separate POP3-only lock. Awesome! I haven't migrated to mdbox yet, but in testing with it on a dev server, it looks like it will solve a huge problem. Users seem to want ever larger mailboxes, and mdbox gives them that without requiring anything more than additional disk space. Fixing the POP3 locking would be an additional benefit! Thanks, Ken Pacific.Net
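For reference, the setting under discussion (a sketch of the relevant dovecot.conf excerpt):

```conf
protocol pop3 {
  # Lock the mailbox for the duration of the POP3 session. Per the
  # exchange above, with mdbox this whole-mailbox lock is what makes
  # concurrent LDA deliveries tempfail until the session ends.
  pop3_lock_session = yes
}
```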
Re: [Dovecot] issues migration from dovecot 1.2 to version 2
On 20.3.2012, at 9.35, Rajesh M wrote: I migrated my email server with around 5000 users from Dovecot version 1.2 to version 2. I have two separate 2 TB HDDs storing the webmail data of these users. You mean you simply upgraded the Dovecot version, the server is exactly the same? The load on the server goes very high, over 100, during peak load times and the imap connections get dropped frequently; webmail becomes very slow. There shouldn't be much performance difference between v1.2 and v2.x. In the dovecot log file I get messages such as Warning: Maildir /homebackup/domains///Maildir/.ALL_INBOX MAIL: Synchronization took 71 seconds (20 new msgs, 0 flag change attempts, 0 expunge attempts) This simply means that the disk IO usage is very high. I am a bit confused as to what settings are to be done for a very busy server Show dovecot -n output of the new server, and if you have the old configuration available that could be helpful also to compare their differences.
Re: [Dovecot] replication howto
On 19.3.2012, at 12.50, Matteo Cazzador wrote: Hi, I've a simple question: what do you mean by a Dovecot director setup? I've a doubt. The solution that I'm testing uses 3 mail servers in different geographic locations. A user can travel to various locations, and I want his IMAP mail to reside on the mail server in every location. So I use your solution for replication. The first server (by DNS record) that receives a mail syncs it to the other servers, and when the user reads his mail over IMAP everything is synced across all servers. Do you suggest using a horizontal structure like I explained, or is it better to have a single external master mail server and the customer-location servers as slaves? Dovecot director isn't really meant to be used for geographic user distribution. Also the replication doesn't yet support more than two servers. A master-slave setup wouldn't have the UID conflict problems that multi-master dsync replication has, but the UID conflicts probably won't be a big problem. Anyway, difficult to give recommendations about an unfinished feature..
[Dovecot] Dovecot 1.2.9 Crash, NFS
Hi, I'm stuck with a problem we have with Dovecot. My suspicion is that it has to do with accessing the same mailbox/mail stored on an NFS share from two machines at the same time. setup We have two mail servers running; both run Ubuntu 10.04, Postfix 2.7.0 and Dovecot 1.2.9. The mailboxes are stored in maildir format on an NFS share. In front of those two mail servers we have a load balancer. Unfortunately it can't be set up to use the same server for each domain, but it uses the same server for the same source IP for at least 1 hour. Here is the output of dovecot -n: # 1.2.9: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-33-server x86_64 Ubuntu 10.04.3 LTS nfs log_timestamp: %Y-%m-%d %H:%M:%S protocols: imap imaps pop3 pop3s ssl_ca_file: /etc/ssl/ca-bundle/SSL123_CA_Bundle.pem ssl_cert_file: /etc/ssl/mail.newmedia.ch/mail.newmedia.ch.crt ssl_key_file: /etc/ssl/mail.newmedia.ch/mail.newmedia.ch.key ssl_verify_client_cert: yes disable_plaintext_auth: no login_dir: /var/run/dovecot/login login_executable(default): /usr/lib/dovecot/imap-login login_executable(imap): /usr/lib/dovecot/imap-login login_executable(pop3): /usr/lib/dovecot/pop3-login mail_max_userip_connections: 25 mail_privileged_group: mail mail_location: maildir:/data/vmail/%d/%n:INDEX=/data/vmail/%d/%n/indexes mmap_disable: yes dotlock_use_excl: no mail_nfs_storage: yes mail_nfs_index: yes mbox_write_locks: fcntl dotlock mail_executable(default): /usr/lib/dovecot/imap mail_executable(imap): /usr/lib/dovecot/imap mail_executable(pop3): /usr/lib/dovecot/pop3 mail_plugins(default): quota imap_quota mail_plugins(imap): quota imap_quota mail_plugins(pop3): mail_plugin_dir(default): /usr/lib/dovecot/modules/imap mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3 imap_client_workarounds(default): outlook-idle delay-newmail imap_client_workarounds(imap): outlook-idle delay-newmail imap_client_workarounds(pop3): auth default: passdb: driver: sql args: 
/etc/dovecot/dovecot-mysql.conf userdb: driver: passwd userdb: driver: sql args: /etc/dovecot/dovecot-mysql.conf plugin: quota: maildir:storage=409600 sieve_global_path: /data/vmail/globalsieverc dict: quotadict: mysql:/etc/dovecot-dict-quota.conf problem the problem happens with a client's mailbox that is used by multiple users. From time to time he cannot see any emails in the mailbox, neither with his mail clients (Apple Mail) nor in the webmail (Roundcube). Around this time I get the following entries in the log files: Mar 6 08:26:31 mail02 dovecot: IMAP(u...@example.com): fdatasync(/data/vmail/example.com/user/dovecot-uidlist) failed: Input/output error Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): Maildir /data/vmail/example.com/user: Expunged message reappeared, giving a new UID (old uid=1522, file=1326961561.V15I4d8562M567017.mail02:2,Sad) Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): Maildir /data/vmail/example.com/user: Expunged message reappeared, giving a new UID (old uid=1523, file=1326705103.V15I90105M613353.mail01:2,Sad) Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 4: 1326961561.V15I4d8562M567017.mail02:2,Sad (uid 1522 -> 1598) Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 5: 1326705103.V15I90105M613353.mail01:2,Sad (uid 1523 -> 1599) Mar 6 08:42:30 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 4: 1326961561.V15I4d8562M567017.mail02:2,Sad (uid 1522 -> 1598) Mar 6 08:42:30 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 5: 1326705103.V15I90105M613353.mail01:2,Sad (uid 1523 -> 1599) Mar 6 08:42:30 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 4: 
1326961561.V15I4d8562M567017.mail02:2,Sad (uid 1522 -> 1598) Mar 6 08:42:30 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 5: 1326705103.V15I90105M613353.mail01:2,Sad (uid 1523 -> 1599) Mar 6 08:42:31 mail02 dovecot: IMAP(u...@example.com): Maildir /data/vmail/example.com/user: Expunged message reappeared, giving a new UID (old uid=1524, file=1327500903.V15I5722c8M210039.mail01:2,Se) Mar 6 08:42:31 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 6: 1327500903.V15I5722c8M210039.mail01:2,Se (uid 1524 -> 1600) Mar 6 08:42:31 mail02 dovecot: IMAP(u...@example.com): Panic: file maildir-uidlist.c: line 403 (maildir_uidlist_records_array_delete): assertion failed: (pos != NULL) Mar 6 08:42:31 mail02 dovecot: IMAP(u...@example.com): Raw backtrace: imap(+0xaeb5a) [0x7f37602b8b5a] -> imap(+0xaebc7)
Re: [Dovecot] Dovecot 1.2.9 Crash, NFS
On 21.3.2012, at 17.45, Müller Lukas wrote: Mar 6 08:26:31 mail02 dovecot: IMAP(u...@example.com): fdatasync(/data/vmail/example.com/user/dovecot-uidlist) failed: Input/output error Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): Maildir /data/vmail/example.com/user: Expunged message reappeared, giving a new UID (old uid=1522, file=1326961561.V15I4d8562M567017.mail02:2,Sad) Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): Maildir /data/vmail/example.com/user: Expunged message reappeared, giving a new UID (old uid=1523, file=1326705103.V15I90105M613353.mail01:2,Sad) Mar 6 08:42:29 mail02 dovecot: IMAP(u...@example.com): /data/vmail/example.com/user/dovecot-uidlist: Duplicate file entry at line 4: 1326961561.V15I4d8562M567017.mail02:2,Sad (uid 1522 - 1598) .. My suspicion/speculation what happens is the following: Multiple users are accessing the Mailbox from their offices (all on the same server), one (or more) uses the Webmail or accesses the Mailbox from a different IP. Somehow this leads to problems with Locks on NFS, which leads to the crash. Yes, most likely this is what's happening. Although your errors are more severe than what normally happens. I guess your NFS server is also partially to blame (microsecond resolution timestamps are at least helpful). I have no idea how to solve this problem and any help is greatly appreciated. The only way to fully fix this is: http://wiki2.dovecot.org/Director
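The Director page boils down to roughly the following pieces; a hedged sketch only (all addresses and ports are placeholders, and director requires Dovecot v2.0+, i.e. an upgrade from the 1.2.9 in use here):

```conf
# The ring of director/proxy hosts, and the backend mail servers they
# consistently hash users onto, so one user is always served by one backend:
director_servers = 10.0.0.1 10.0.0.2
director_mail_servers = 10.0.1.1 10.0.1.2

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090
  }
}
# Login services must be told to consult the director:
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}
```

Pinning each user to a single backend is what removes the cross-server uidlist races described above.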
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
* Timo Sirainen t...@iki.fi: It's renaming itself to itself again? Hmm. Yeah, this is a bit problematic for compressed mails. If the S=size isn't correct, Dovecot fixes it by stat()ing the file and using it as the size. And that's of course wrong. Also Dovecot can't simply remove the S=size, because the current Maildir code assumes that it always exists for compressed mails. There's no easy and efficient way to fix this.. Maybe you could just manually rename the files to have correct S=size? :) zcat file | wc should give the right size. Right now the whole system is down because nobody can access his/her mails due to this. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebra...@charite.de | http://www.charite.de
[Dovecot] distributed mdbox
Anyone know how to setup dovecot with mdbox so that it can be used through shared storage from multiple hosts? I've setup a gluster volume and am sharing it between 2 test clients. I'm using postfix/dovecot LDA for delivery and I'm using postal to send mail between 40 users. In doing this, I'm seeing these errors in the logs: Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 -> 271 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=4220 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update for invalid uid=517 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=5088 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update for invalid uid=517 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Warning: fscking index file /mnt/testuser28/mdbox/storage/dovecot.map.index Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser34): Warning: fscking index file /mnt/testuser34/mdbox/storage/dovecot.map.index This is my dovecot config currently:

jdevine@test-gluster-client2:~$ dovecot -n
# 2.0.13: /etc/dovecot/dovecot.conf
# OS: Linux 3.0.0-13-server x86_64 Ubuntu 11.10
lock_method = dotlock
mail_fsync = always
mail_location = mdbox:~/mdbox
mail_nfs_index = yes
mail_nfs_storage = yes
mmap_disable = yes
passdb {
  driver = pam
}
protocols = imap
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  driver = passwd
}
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
On 21.3.2012, at 17.52, Ralf Hildebrandt wrote: * Timo Sirainen t...@iki.fi: It's renaming itself to itself again? Hmm. Yeah, this is a bit problematic for compressed mails. If the S=size isn't correct, Dovecot fixes it by stat()ing the file and using it as the size. And that's of course wrong. Also Dovecot can't simply remove the S=size, because the current Maildir code assumes that it always exists for compressed mails. There's no easy and efficient way to fix this.. Maybe you could just manually rename the files to have correct S=size? :) zcat file | wc should give the right size. Right now the whole system is down because nobody can access his/her mails due to this. All of your mails are compressed and have wrong S=size in the filename? You can disable the check with the attached patch, but I'm not sure if there are other places where it fails. At least quota calculations won't be correct.
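Timo's zcat | wc hint can be wrapped in a small script. A minimal, hypothetical sketch (the fix_s_value helper name and the base:2,flags filename shape are assumptions, not Dovecot tooling; files that already carry an S= field would need the old field stripped first):

```shell
# fix_s_value: rename a gzip-compressed maildir file so its name carries
# the correct uncompressed size in an S= field (hypothetical helper).
# Assumes names of the form "base:2,flags" with no S= field present yet.
fix_s_value() {
    f=$1
    size=$(zcat "$f" | wc -c | tr -d ' ')  # uncompressed byte count
    base=${f%%:*}                          # part before the ":2,flags" suffix
    flags=${f#*:}                          # the "2,flags" part
    mv "$f" "${base},S=${size}:${flags}"
}
```

Run it over the affected files with find ... -exec; as with any bulk rename of maildir files, try it on a copy of one mailbox first.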
Re: [Dovecot] 2.1.2 (pop3|imap)-login crash
It worked. We have no more sigsegv on *-login processes. Thanks Luca Quoting Timo Sirainen, Wed Mar 21 16:17:56 2012: Hi, On 20.3.2012, at 11.09, Luca Palazzo wrote: Hi Timo, hi all, after upgrading my server (both backends and load balancer) to 2.1.2 (from 2.0.17), I'm getting a lot of login process crashes in the load balancer. 0xb77cd176 in ssl_proxy_is_handshaked (proxy=0x0) at ssl-proxy-openssl.c:710 710 { (gdb) bt #0 0xb77cd176 in ssl_proxy_is_handshaked (proxy=0x0) at ssl-proxy-openssl.c:710 #1 0xb77c7295 in client_get_log_str (client=0x807b830, msg=0x804e290 "proxy(aaa...@dd.it): disconnecting x.x.x.x (Disconnected by server)") at client-common.c:469 #2 0xb77c73c6 in client_log (client=0x807b830, msg=0x804e290 "proxy(...@dd.it): disconnecting x.x.x.x (Disconnected by server)") at client-common.c:553 #3 0xb77c9a45 in login_proxy_free_reason (_proxy=<value optimized out>, reason=0x804e248 "Disconnected by server") at login-proxy.c:373 Interesting. This happens because client_destroy() has already been called by the time login_proxy_free_reason() gets called. I'll need to look further into it, but for a quick workaround use the attached patch.
Re: [Dovecot] distributed mdbox
On 21.3.2012, at 17.56, James Devine wrote: Anyone know how to setup dovecot with mdbox so that it can be used through shared storage from multiple hosts? I've setup a gluster volume and am sharing it between 2 test clients. I'm using postfix/dovecot LDA for delivery and I'm using postal to send mail between 40 users. In doing this, I'm seeing these errors in the logs Dovecot assumes that the filesystem behaves the same way as regular local filesystems. Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 - 271 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517 Looks like gluster doesn't fit that assumption. So, the solution is the same as with NFS: http://wiki2.dovecot.org/Director
Re: [Dovecot] 2.1.2 (pop3|imap)-login crash
The log messages are now wrong though. It logs SSL/TLS connections as being non-SSL/TLS. Oh, right, this must have started happening because of this recent change: http://hg.dovecot.org/dovecot-2.1/rev/49b832c5de0e I'll figure out a proper fix soon. On 21.3.2012, at 18.04, Luca Palazzo wrote: It worked. We have no more sigsegv on *-login processes. Thanks Luca Quoting Timo Sirainen, Wed Mar 21 16:17:56 2012: Hi, On 20.3.2012, at 11.09, Luca Palazzo wrote: Hi Timo, hi all, after upgrading my server (both backends and load balancer) to 2.1.2 (from 2.0.17), I'm getting a lot of login process crashes in the load balancer. 0xb77cd176 in ssl_proxy_is_handshaked (proxy=0x0) at ssl-proxy-openssl.c:710 710 { (gdb) bt #0 0xb77cd176 in ssl_proxy_is_handshaked (proxy=0x0) at ssl-proxy-openssl.c:710 #1 0xb77c7295 in client_get_log_str (client=0x807b830, msg=0x804e290 "proxy(aaa...@dd.it): disconnecting x.x.x.x (Disconnected by server)") at client-common.c:469 #2 0xb77c73c6 in client_log (client=0x807b830, msg=0x804e290 "proxy(...@dd.it): disconnecting x.x.x.x (Disconnected by server)") at client-common.c:553 #3 0xb77c9a45 in login_proxy_free_reason (_proxy=<value optimized out>, reason=0x804e248 "Disconnected by server") at login-proxy.c:373 Interesting. This happens because client_destroy() has already been called by the time login_proxy_free_reason() gets called. I'll need to look further into it, but for a quick workaround use the attached patch.
Re: [Dovecot] distributed mdbox
On Wed, Mar 21, 2012 at 10:05 AM, Timo Sirainen t...@iki.fi wrote: On 21.3.2012, at 17.56, James Devine wrote: Anyone know how to setup dovecot with mdbox so that it can be used through shared storage from multiple hosts? I've setup a gluster volume and am sharing it between 2 test clients. I'm using postfix/dovecot LDA for delivery and I'm using postal to send mail between 40 users. In doing this, I'm seeing these errors in the logs Dovecot assumes that the filesystem behaves the same way as regular local filesystems. Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 - 271 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517 Looks like gluster doesn't fit that assumption. So, the solution is the same as with NFS: http://wiki2.dovecot.org/Director What filesystem mechanisms might not be working in this case?
Re: [Dovecot] distributed mdbox
Also I don't seem to get these errors with a single dovecot machine using the shared storage and it looks like there are multiple simultaneous delivery processes running On Wed, Mar 21, 2012 at 10:25 AM, James Devine fxmul...@gmail.com wrote: On Wed, Mar 21, 2012 at 10:05 AM, Timo Sirainen t...@iki.fi wrote: On 21.3.2012, at 17.56, James Devine wrote: Anyone know how to setup dovecot with mdbox so that it can be used through shared storage from multiple hosts? I've setup a gluster volume and am sharing it between 2 test clients. I'm using postfix/dovecot LDA for delivery and I'm using postal to send mail between 40 users. In doing this, I'm seeing these errors in the logs Dovecot assumes that the filesystem behaves the same way as regular local filesystems. Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 - 271 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517 Looks like gluster doesn't fit that assumption. So, the solution is the same as with NFS: http://wiki2.dovecot.org/Director What filesystem mechanisms might not be working in this case?
Re: [Dovecot] distributed mdbox
The problem is most likely the same as with NFS: Server A caches data -> server B modifies data -> server A modifies data using stale cached state -> corruption. Glusterfs works with FUSE, and FUSE has quite similar problems as NFS. With director you guarantee that the same mailbox isn't accessed simultaneously by multiple servers, so this problem goes away. On 21.3.2012, at 18.47, James Devine wrote: Also I don't seem to get these errors with a single dovecot machine using the shared storage and it looks like there are multiple simultaneous delivery processes running On Wed, Mar 21, 2012 at 10:25 AM, James Devine fxmul...@gmail.com wrote: On Wed, Mar 21, 2012 at 10:05 AM, Timo Sirainen t...@iki.fi wrote: On 21.3.2012, at 17.56, James Devine wrote: Anyone know how to setup dovecot with mdbox so that it can be used through shared storage from multiple hosts? I've setup a gluster volume and am sharing it between 2 test clients. I'm using postfix/dovecot LDA for delivery and I'm using postal to send mail between 40 users. In doing this, I'm seeing these errors in the logs Dovecot assumes that the filesystem behaves the same way as regular local filesystems. Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 -> 271 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517 Looks like gluster doesn't fit that assumption. So, the solution is the same as with NFS: http://wiki2.dovecot.org/Director What filesystem mechanisms might not be working in this case?
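For context, the Director setup referenced above looks roughly like this on the proxy tier. This is a sketch based on the wiki page Timo links; all IP addresses are placeholders for your own director ring and backend servers:

```
# dovecot.conf fragment on the proxy/director hosts (addresses are examples)
director_servers = 10.0.0.1 10.0.0.2        # the director ring itself
director_mail_servers = 10.1.0.1 10.1.0.2   # backend Dovecot servers

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090   # director-to-director ring traffic
  }
}

service imap-login {
  executable = imap-login director
}
```

Each user is then consistently hashed to one backend, so no two servers ever touch the same mailbox at the same time.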
Re: [Dovecot] dovecot 2.0.19 Panic: file mail-index-sync-ext.c: line 209 (sync_ext_reorder): assertion failed: (offset < (uint16_t)-1)
On 3/21/12 10:02 AM, Timo Sirainen wrote: On 21.3.2012, at 15.53, Jim Lawson wrote: Had a user who couldn't access his INBOX: Mar 21 09:21:17 penguina dovecot: imap([USER]): Panic: file mail-index-sync-ext.c: line 209 (sync_ext_reorder): assertion failed: (offset < (uint16_t)-1) I kind of remember that this was fixed by http://hg.dovecot.org/dovecot-2.1/rev/b4d8e950eb9d but I'm not entirely sure. I guess I should have included in the commit the error message it fixed. This applies cleanly against 2.0.19; should I try it on that version, or is that not recommended? Stack trace made it look like it was the INBOX, so I deleted the index files for his INBOX and everything was OK. If it happens again, get a copy of the indexes. I sent them, encrypted, to your email address/GPG key 0x40558AC9. Jim
Re: [Dovecot] dovecot runs from shell, but not as service -- MY MISTAKE, not xinetd
All, I was mistaken in how I described my problem, please forgive this dovecot newbie for describing the problem incorrectly! It is not under xinetd, it is trying to run as an init.d service. Ok, let's try again... I am able to run it from a root shell prompt, but the errors below occur if it was started as a SERVICE, e.g. from the init.d script. So now the question is: what is different in those two environments...? Thanks, hope this clarifies things, /Mark -Original Message- From: Stan Hoeppner [mailto:s...@hardwarefreak.com] Sent: Wednesday, March 21, 2012 1:42 AM To: Mark Jeghers Cc: dovecot@dovecot.org Subject: Re: [Dovecot] dovecot runs from shell, but not xinetd On 3/20/2012 11:26 PM, Mark Jeghers wrote: Hi Stan Afraid it did not help. Here is what I got: *** entered into a telnet session... user ann +OK pass -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2012-03-20 21:16:05] Connection closed by foreign host. [root@t4pserver2 mailpop3]# *** resulted in maillog... 
Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: passwd-file(ann,::1): lookup: user=ann file=/etc/passwd.dovecot Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: client out: OK#0112#011user=ann Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: master in: REQUEST#0113180593153#01113546#0112#0116c9a0569dcd246a9f9e7a94dbe852843 Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: passwd(ann,::1): lookup Mar 20 21:16:05 t4pserver2 dovecot: auth: Debug: master out: USER#0113180593153#011ann#011system_groups_user=ann#011uid=501#011gid=501#011home=/home/ann Mar 20 21:16:05 t4pserver2 dovecot: pop3-login: Login: user=ann, method=PLAIN, rip=::1, lip=::1, mpid=13549, secured Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Effective uid=501, gid=501, home=/home/ann Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: fs: root=/var/spool/mailpop3, index=, control=, inbox=/var/spool/mailpop3/ann Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Namespace : Using permissions from /var/spool/mailpop3: mode=0777 gid=-1 Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Namespace : Using permissions from /var/spool/mailpop3: mode=0777 gid=-1 Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2012-03-20 21:16:05] Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0 *** file permissions...

[root@t4pserver2 mailpop3]# ls -al
total 248652
drwxrwxrwx.  2 root         mail      4096 Mar 20 21:11 .
drwxr-xr-x. 17 root         root      4096 Mar 18 18:22 ..
-rw-rw-r--.  1 ann          mail     58739 Mar 17 04:26 ann
-rw-rw-r--.  1 annphone     mail   2708345 Mar 17 05:22 annphone
-rw-rw-r--.  1 root         mail 127272960 Mar 18 18:28 backups.tar
-rw-rw-r--.  1 crimsonblues mail    327563 Dec  3 14:38 crimsonblues
-rw-rw-r--.  1 mark         mail         0 Mar 18 13:09 mark
-rw-rw-r--.  1 markphone    mail 124147068 Mar 18 04:21 markphone
-rw-rw-r--.  1 nathan       mail      5119 Dec 22 18:52 nathan
-rw-rw-r--.  1 root         mail         0 Mar 18 13:13 root
-rw-rw-r--.  1 testuser     mail     58739 Mar 18 18:42 testuser
-rw-rw-r--.  1 tim          mail     16212 Mar 18 15:51 tim

My CentOS installation created a user "mail" so I am hesitant to remove it, but it is no longer in use here. Any other ideas? What user does dovecot run as in the shell? Under xinetd? -- Stan
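One way to answer Stan's question about the two environments is to capture the runtime identity and inherited environment in both contexts and diff them. A minimal sketch (the output path is just an example):

```shell
# Capture who we are and what environment we inherited. Run once from the
# root shell and once from inside the init.d script, with different output
# paths, then diff the two files to see what differs between the contexts.
out=${1:-/tmp/env.capture}
{
    id            # uid/gid/groups the process runs with
    umask         # default file-creation mask
    env | sort    # PATH, HOME, locale, etc.
} > "$out"
```

A `diff /tmp/env.shell /tmp/env.initd` then shows exactly which identity or environment settings differ between the two start-up paths.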
Re: [Dovecot] dovecot runs from shell, but not as service -- MY MISTAKE, not xinetd
On 21.3.2012, at 20.59, Mark Jeghers wrote: I am able to run it from a root shell prompt, but the errors below occur if it was started as a SERVICE, e.g. from the init.d script. So now the question is: what is different in those two environments...? .. Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Error: stat(/var/spool/mailpop3/ann) failed: Permission denied Mar 20 21:16:05 t4pserver2 dovecot: pop3(ann): Debug: Namespace : Using permissions from /var/spool/mailpop3: mode=0777 gid=-1 Permission errors point to SELinux being the problem. Try disabling it.
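To check Timo's SELinux theory before disabling anything, the usual RHEL/CentOS triage commands are (a sketch; adjust the spool path to this setup):

```
getenforce                          # Enforcing / Permissive / Disabled
ausearch -m avc -ts recent          # recent SELinux denials, if auditd runs
setenforce 0                        # temporarily permissive; retry the login
restorecon -Rv /var/spool/mailpop3  # fix file contexts instead of disabling
setenforce 1                        # re-enable enforcement
```

If the login succeeds with enforcement off, restoring the file context (or adding a local policy module) is the long-term fix rather than leaving SELinux disabled.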
[Dovecot] Dovecot 2.1.3 on solaris with mysql - make fails
I'm trying to build 2.1.3 on solaris 11 11/11 with gcc 4.5.2 and sun studio 12.2/12.3:

CPPFLAGS="-I/opt/openssl/include -I/usr/mysql/include/mysql" \
LDFLAGS="-L/opt/openssl/lib -L/usr/mysql/lib/mysql -R/opt/openssl/lib:/usr/mysql/lib/mysql" \
./configure --prefix=/opt/dovecot \
    --sysconfdir=/etc/opt \
    --with-ssl=openssl \
    --with-mysql

make fails with both the solaris standard openssl and my own build of openssl. I'm also getting the same error using sunstudio. The mysql version is 5.1.37. The relevant output of make is on pastebin: http://pastebin.com/aALHG0yL I have seen some reference to this with google but nothing that's very recent and no solutions. Anyone know how to get past this? Any tips on building dovecot on solaris? Pointers would be much appreciated. -- Robert
Re: [Dovecot] 2.1: Error: Maildir filename has wrong S value, renamed the file from
Quoting Timo Sirainen t...@iki.fi: On 21.3.2012, at 17.52, Ralf Hildebrandt wrote: * Timo Sirainen t...@iki.fi: It's renaming itself to itself again? Hmm. Yeah, this is a bit problematic for compressed mails. If the S=size isn't correct, Dovecot fixes it by stat()ing the file and using it as the size. And that's of course wrong. Also Dovecot can't simply remove the S=size, because the current Maildir code assumes that it always exists for compressed mails. There's no easy and efficient way to fix this.. Maybe you could just manually rename the files to have correct S=size? :) zcat file | wc should give the right size. Right now the whole system is down because nobody can access his/her mails due to this. All of your mails are compressed and have wrong S=size in the filename? You can disable the check with the attached patch, but I'm not sure if there are other places where it fails. At least quota calculations won't be correct. The issue only started happening since I upgraded to 2.1.1; it didn't exist before then. I have checked my system, and files before the date of the upgrade are fine; only files/emails moved after upgrading to 2.1.1 have lost the S= value. I have made something that can pretty easily fix the issue, but it only stays fixed till another email gets moved and loses its S= value. Sorry, I haven't had time to test out 2.1.3 yet. This will print out the commands needed to fix the files though:

find . -name '*hostname:*' -exec 'gzip' '-l' '{}' ';' | awk '/hostname/ {fn=""; for(x=4;x<NF;x++) { fn=fn $x " " } fn=fn $x; split(fn,a,":"); print "mv \"" fn "\" \"" a[1] ",S=" $2 ":" a[2] "\""}'

Just change hostname to whatever hostname Dovecot is using in your files.
Re: [Dovecot] sysconfdir deprecated
On Wed, 2012-03-21 at 15:46 +0200, Timo Sirainen wrote: On 21.3.2012, at 15.26, Noel Butler wrote: The purpose of any build script's --sysconfdir is to tell the configuration to build in a path for its binaries' configuration file(s). Dovecot 2.1.3 seems to insist that that directory is now /etc/dovecot/, ignoring --sysconfdir=/etc as in 1.2.x and previous majors before that. Is this a bug? If not, then I see no point of sysconfdir any more and it should be removed, if dovecot deliberately ignores what it is told to use. --sysconfdir=/etc uses /etc/dovecot/ --sysconfdir=/opt/dovecot/etc uses /opt/dovecot/etc/dovecot/ There is now always the dovecot/ suffix, but the /etc part is still configurable. Perhaps it should be renamed then, given it violates the well-known semantics of sysconfdir; you've just created another form of --datadir. From gnu.org: sysconfdir The directory for installing read-only data files that pertain to a single machine–that is to say, files for configuring a host. Mailer and network configuration files, ‘/etc/passwd’, and so forth belong here. All the files in this directory should be ordinary ASCII text files. This directory should normally be ‘/usr/local/etc’, but write it as ‘$(prefix)/etc’. (If you are using Autoconf, write it as ‘@sysconfdir@’.) datadir The directory for installing idiosyncratic read-only architecture-independent data files for this program. This is usually the same place as ‘datarootdir’, but we use the two separate variables so that you can move these program-specific files without altering the location for Info files, man pages, etc.
Re: [Dovecot] doveadm sync imapc problem
On 3/21/2012 10:17 PM, Martin Schitter wrote: Am 16.3.2011 20:59, schrieb Gedalya: Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password= backup -u jedi at example.com -R imapc: Timo, we have a problem, somewhere between 2.1.rc7 and 2.1.1. Current versions are putting the body of the last message in Sent Items in place of every single email in INBOX. In other words, for every email that sits in INBOX in the source, I get a copy of the last email in Sent Items instead. This happens for every account I try to migrate. Very strange. I noticed this only now, and the last package I have left in the local apt cache which still works is 2.1.rc7-0~auto+0. i see the same regression (2.1.3-0~auto+4) :( doveadm sync/backup via imapc puts the same message all over the place... Thanks Martin, I've set up a test platform to investigate this further but I've been short on time...
Re: [Dovecot] distributed mdbox
On 3/21/2012 12:04 PM, Timo Sirainen wrote: The problem is most likely the same as with NFS: Server A caches data - server B modifies data - server A modifies data using stale cached state - corruption. Glusterfs works with FUSE, and FUSE has quite similar problems as NFS. With director you guarantee that the same mailbox isn't accessed simultaneously by multiple servers, so this problem goes away. If using real shared storage i.e. an FC or iSCSI SAN LUN, you could use a true cluster file system such as OCFS or GFS. Both will eliminate this problem, and without requiring Dovecot director. And you'll get better performance than with Gluster, which, BTW, isn't really suitable as a transactional filesystem, was not designed for such a use case. -- Stan