Re: Notifications to over-quota accounts

2024-03-15 Thread Urban Loesch via dovecot

Hi,

I'm sending warnings to accounts when their quota reaches 80%, and again at 95%.

Relevant parts from "doveconf -n":

...
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = mailstore
  }
  user = mailstore
}


plugin {
...
  quota = count:User quota
  quota_rule2 = Trash:storage=+100M
  quota_status_nouser = DUNNO
  quota_status_overquota = 552 5.2.2 Mailbox is full
  quota_status_success = DUNNO
  quota_vsizes = yes
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
...
}

Shell script:
...
#!/bin/sh
PERCENT=$1
USER=$2
DATE=$(date)
# build a unique Message-ID from timestamp and hostname
MSGID="$(date '+%Y%m%d%H%M%S')@$(hostname)"
logger -p mail.info -t dovecot "$PERCENT% Quota-warning sent to $USER"
# deliver with quota enforcement disabled, so the warning itself
# is not rejected on an already-full mailbox
cat << EOF | /usr/lib/dovecot/dovecot-lda -d "$USER" -o "plugin/quota=count:User quota:noenforcing"
From: 
To: <$USER>
Subject: $PERCENT% Mail quota warning
Message-ID: <$MSGID>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Date: $DATE

Your text
...


Best
Urban




On 15.03.24 at 12:58, N V wrote:

Hello!
I'm trying to allow a system email address to send notifications to over-quota
accounts.
Is there a way to do it?

Thanks in advance!


___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org



Re: dovecot and oauth2 (with keycloak) not working

2023-11-20 Thread Urban Loesch via dovecot

Hi,

I've been running dovecot with keycloak without problems for a month now.

>>Nov 20 08:20:30 auth: Error: oauth2(fran...@mydomain.com,10.10.40.30,): oauth2 failed: connect(10.10.100.10:443) failed: 
Connection refused


It seems that your keycloak is not responding to connection requests on port 443. You 
can try "telnet 10.10.100.10 443" from your dovecot server.
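If telnet isn't available, bash can make the same check with its built-in /dev/tcp. A sketch (192.0.2.1 is a TEST-NET placeholder; substitute the Keycloak address, e.g. 10.10.100.10):

```shell
# Quick TCP reachability check without telnet, using bash's /dev/tcp.
host=192.0.2.1 port=443
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "port open"
else
  echo "port closed or filtered"
fi
```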

Regards
Urban




On 20.11.23 at 08:29, Francis Augusto Medeiros-Logeay via dovecot wrote:

Hi,

I successfully configured Roundcube to use keycloak for oauth2.

However, I am having trouble making it work with dovecot. My configuration is 
this:

cat dovecot-oauth2.conf.ext
tokeninfo_url = https://auth.mydomain.com/realms/myrealm/protocol/openid-connect/userinfo
introspection_url = https://auth.mydomain.com/realms/myrealm/protocol/openid-connect/token/introspect
introspection_mode = post
username_attribute = postfixMailAddress
debug = yes
scope = openid Roundcube_email

This is what I am getting from the logs:


Nov 20 08:20:30 auth: Error: 
ldap(fran...@mydomain.com,10.10.40.30,): ldap_bind() failed: 
Constraint violation
Nov 20 08:20:30 auth: Debug: http-client: host auth.mydomain.com: Host created
Nov 20 08:20:30 auth: Debug: http-client: host auth.mydomain.com: Host session 
created
Nov 20 08:20:30 auth: Debug: http-client: host auth.mydomain.com: IPs have 
expired; need to refresh DNS lookup
Nov 20 08:20:30 auth: Debug: http-client: host auth.mydomain.com: Performing 
asynchronous DNS lookup
Nov 20 08:20:30 auth: Debug: http-client[1]: request [Req1: GET 
https://auth.mydomain.com/realms/med-lo/protocol/openid-connect/userinfoeyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJaYTFXcXhxb0RULXBSc2o1WXZFdUJfLUxBVUtGNk5SeFFrUS1mNmdTUGs4In0.eyJleHAiOjE3MDA0NjUxMzAsImlhdCI6MTcwMDQ2NDgzMCwiYXV0aF90aW1lIjoxNzAwNDY0Njg5LCJqdGkiOiIzZTk5YWI4Yi0xZTkyLTRlMDYtYjg0NC1kODc4ZDZjODZjOWMiLCJpc3MiOiJodHRwczovL2F1dGgubWVkLWxvLmV1L3JlYWxtcy9tZWQtbG8iLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiZGE5MDk4NDQtNjlmOS00ZjkzLWI1NjctMGZjOWQ3YzA3MTZmIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoicm91bmRjdWJlIiwic2Vzc2lvbl9zdGF0ZSI6ImZkY2I2YTczLTNjNjgtNDlhNS1hOTlkLTdkYmE4ODNlNjg4NiIsImFjciI6IjAiLCJhbGxvd2VkLW9yaWdpbnMiOlsiaHR0cHM6Ly9tYWlsLm1lZC1sby5ldSJdLCJyZWFsbV9hY2Nlc3MiOnsicm9sZXMiOlsib2ZmbGluZV9hY2Nlc3MiLCJ1bWFfYXV0aG9yaXphdGlvbiIsImRlZmF1bHQtcm9sZXMtbWVkLWxvIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJvcGVuaWQgUm91bmRjdWJlX2VtYWlsIHByb2ZpbGUgZW1haWwgb3BlbmVkIiwic2lkIjoiZmRjYjZhNzMtM2M2OC00OWE1LWE5OWQtN2RiYTg4M2U2ODg2IiwiZW1haWxfdmVyaWZpZWQiOnRydWUsInBvc3RmaXhNYWlsQWRkcmVzcyI6ImZyYW5jaXNAbWVkLWxvLmV1IiwicG9zdGZpeE1haWxib3giOiJmcmFuY2lzQG1lZC1sby5ldSIsIm5hbWUiOiJGcmFuY2lzIEF1Z3VzdG8gTWVkZWlyb3MtTG9nZWF5IiwicHJlZmVycmVkX3VzZXJuYW1lIjoiZnJhbmNpcyIsImdpdmVuX25hbWUiOiJGcmFuY2lzIEF1Z3VzdG8iLCJmYW1pbHlfbmFtZSI6Ik1lZGVpcm9zLUxvZ2VheSIsImVtYWlsIjoiZnJhbmNpc0BtZWQtbG8uZXUifQ.Cehd8sbCTihfq1SKQitLTPfZZAWHx31sy8I6YydY_3eZvyHRellhQz1F9NxFt0uHaFk3KeddHV6U9z14qT7fStDp18ECJodSdcDt4k6J7geNjSbO3jSXOfk5JTbNPv0agi9e767E54g2ZkStPEezrAYY83msx7JSVpEmwKItSrDyyAWH44jp0OsnaLVCOZP1gBklTgiDt7uVsFwL9kpGamsMt62jNADnIAt6qLapHofiXi7GuIKdQP8-IG_7cCcpY6bEvcHiSgqhIpk5UHgMsljNQOkCKDpQ5rrTmRxloVF1y1zE7LYPNcugC_ZF_5TzxhVTEdEOLL9Q5epdlJvtvQ]:
 Submitted (requests left=1)
Nov 20 08:20:30 auth: Debug: http-client: host auth.mydomain.com: DNS lookup 
successful; got 1 IPs
Nov 20 08:20:30 auth: Debug: http-client: peer 10.10.100.10:443 (shared): Peer 
created
Nov 20 08:20:30 auth: Debug: http-client: peer 10.10.100.10:443: Peer pool 
created
Nov 20 08:20:30 auth: Debug: http-client[1]: peer 10.10.100.10:443: Peer created
Nov 20 08:20:30 auth: Debug: http-client[1]: queue 
https://auth.mydomain.com:443: Setting up connection to 10.10.100.10:443 
(SSL=auth.mydomain.com) (1 requests pending)
Nov 20 08:20:30 auth: Debug: http-client[1]: peer 10.10.100.10:443: Linked 
queue https://auth.mydomain.com:443 (1 queues linked)
Nov 20 08:20:30 auth: Debug: http-client[1]: queue 
https://auth.mydomain.com:443: Started new connection to 10.10.100.10:443 
(SSL=auth.mydomain.com)
Nov 20 08:20:30 auth: Debug: http-client[1]: peer 10.10.100.10:443: Creating 1 
new connections to handle requests (already 0 usable, connecting to 0, closing 
0)
Nov 20 08:20:30 auth: Debug: http-client[1]: peer 10.10.100.10:443: Making new 
connection 1 of 1 (0 connections exist, 0 pending)
Nov 20 08:20:30 auth: Debug: http-client: conn 10.10.100.10:443 [1]: Connecting
Nov 20 08:20:30 auth: Debug: http-client: conn 10.10.100.10:443 [1]: Waiting 
for connect (fd=23) to finish for max 0 msecs
Nov 20 08:20:30 auth: Debug: http-client: conn 10.10.100.10:443 [1]: HTTPS 
connection created (1 parallel connections exist)
Nov 20 08:20:30 auth: Debug: http-client: conn 10.10.100.10:443 [1]: Client 
connection failed (fd=23)
Nov 20 08:20:30 auth: Debug: http-client[1]: peer 10.10.100.10:443: Connection 
failed (1 connections exist, 0 pending)
Nov 20 08:20:30 auth: Debug: 

Re: Minimum configuration for Dovecot SASL only?

2023-11-06 Thread Urban Loesch via dovecot

Hi,

I use the same setup with the following packages:

# dpkg -l |grep dovecot
ii  dovecot-core   2:2.3.21-1+debian10   amd64   secure POP3/IMAP server - core files
ii  dovecot-mysql  2:2.3.21-1+debian10   amd64   secure POP3/IMAP server - MySQL support

postfix main.cf Parameters:
...
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
broken_sasl_auth_clients = yes
smtpd_sasl_authenticated_header = no
...

And it works without problems.
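Not from the thread, but for reference: a minimal sketch of the Dovecot side these Postfix parameters expect — an auth socket inside Postfix's queue directory (paths and user names are common Debian defaults, adjust as needed):

```
# conf.d/10-master.conf (sketch)
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0660
    user = postfix
    group = postfix
  }
}

# auth-only instance: no mail protocols need to be enabled
protocols =
```

smtpd_sasl_path = private/auth is resolved relative to the Postfix queue 
directory, which is why the listener lives under /var/spool/postfix.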

Best
Urban



On 03.11.23 at 17:55, Nick Lockheart wrote:

I have a Dovecot IMAP server and a Postfix server on separate machines.
The user information is stored in a MariaDB database that is replicated
on both servers.

Postfix needs to authenticate outgoing mail against our valid user
database. I believe this requires us to install a "dummy" Dovecot on
the Postfix server so that Dovecot SASL can provide authentication to
Postfix from the database.

I think Cyrus had a standalone Cyrus-SASL package, but Dovecot doesn't?

If I wanted to set up a Dovecot instance on the Postfix server just for
the purposes of SMTP authentication, and not use it to handle any mail,
what is the minimum configuration required to make that work?

Is the dovecot-common package (Debian) enough? Or do I need the full
dovecot-imap package?

What protocols go in the protocols directive? Can you just make it
"protocols = auth" to disable IMAP connections?



Re: auth: Panic: file oauth2-request.c assertion failed

2023-09-05 Thread Urban Loesch via dovecot

Hi,

Has no one seen the same error?
In the meantime I upgraded to dovecot 2.3.20, but the error is still there.

ii  dovecot-core          2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - core files
ii  dovecot-dbg           2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - debug symbols
ii  dovecot-imapd         2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - IMAP daemon
ii  dovecot-lmtpd         2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - LMTP server
ii  dovecot-managesieved  2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - ManageSieve server
ii  dovecot-mysql         2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - MySQL support
ii  dovecot-pop3d         2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - POP3 daemon
ii  dovecot-sieve         2:2.3.20-3+debian10   amd64   secure POP3/IMAP server - Sieve filters support

The error also causes "lmtp" and "quota-status" delays:

...
Sep  5 07:43:18 dcot-server-1 dovecot: auth: Panic: file oauth2-request.c: line 201 (oauth2_request_start): assertion failed: 
(oauth2_valid_token(input->token))
Sep  5 07:43:18 dcot-server-1 dovecot: auth: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x3d) [0x7f216adab85d] -> 
/usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f216adab97e] -> /usr/lib/dovecot/libdovecot.so.0(+0x10091b) [0x7f216adb891b] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x1009b1) [0x7f216adb89b1] -> /usr/lib/dovecot/libdovecot.so.0(+0x54b7c) [0x7f216ad0cb7c] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x4605a) [0x7f216acfe05a] -> /usr/lib/dovecot/libdovecot.so.0(oauth2_passwd_grant_start+0xfa) [0x7f216ad1576a] -> 
dovecot/auth(db_oauth2_lookup+0x2ca) [0x55f3c150968a] -> dovecot/auth(auth_request_default_verify_plain_continue+0x2d6) [0x55f3c14ec436] -> 
dovecot/auth(auth_policy_check+0x2b) [0x55f3c14e671b] -> dovecot/auth(+0x29f72) [0x55f3c14f5f72] -> dovecot/auth(+0x1ad83) [0x55f3c14e6d83] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x83db0) [0x7f216ad3bdb0] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x69) [0x7f216adcee59] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x131) [0x7f216add0481] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) 
[0x7f216adceefc] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40) [0x7f216adcf080] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7f216ad41e73] -> dovecot/auth(main+0x415) [0x55f3c14e0bf5] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f216aa6c09b] -> 
dovecot/auth(_start+0x2a) [0x55f3c14e0d8a]
Sep  5 07:43:18 dcot-server-1 dovecot: imap(4188908): Error: auth-master: login: request [959315969]: Login auth request failed: Disconnected from 
auth server, aborting (auth connected 4432 msecs ago, request took 21 msecs, client-pid=4184929 client-id=1741)
Sep  5 07:43:18 dcot-server-1 dovecot: auth: Fatal: master: service(auth): child 4184931 killed with signal 6 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)
Sep  5 07:43:18 dcot-server-1 dovecot: lmtp(u...@domain.com pid:4185087 session:): Error: auth-master: userdb 
lookup(u...@domain.com): write(auth socket) failed: Broken pipe
Sep  5 07:43:18 dcot-server-1 dovecot: lmtp(4185087): Error: lmtp-server: conn unix:pid=4188719,uid=101 [68]: rcpt u...@domain.com: Failed to lookup 
user u...@domain.com: Internal error occurred. Refer to server log for more information.
Sep  5 07:43:18 dcot-server-1 dovecot: quota-status(us...@domain.com pid:4185003 session:<>): Error: auth-master: userdb lookup(us...@domain.com): 
write(auth socket) failed: Broken pipe
Sep  5 07:43:18 dcot-server-1 dovecot: quota-status(4185003): Error: Failed to lookup user us...@domain.com: Internal error occurred. Refer to server 
log for more information.

...

Do you have any idea how I can fix this? Perhaps there is something wrong with my 
configuration.


Many Thanks
Urban



On 07.12.22 at 14:49, Urban Loesch wrote:

Hi,

I'm running a postfix SMTP relay server on which users are authenticated through SASL via the dovecot authentication service. This works 
without problems.


Recently I extended my configuration to authenticate users with the PLAIN 
mechanism against Azure AD B2C with oauth2 in the future.
Important to mention: currently no users are stored in Azure B2C. I only prepared the whole configuration, and authentication falls back to 
the mysql database.


Now sometimes I get the error below in my logs (not permanently):
...
Dec  7 11:36:49 relay-out1 dovecot: auth: Panic: file oauth2-request.c: line 201 (oauth2_request_start):

auth: Panic: file oauth2-request.c assertion failed

2022-12-07 Thread Urban Loesch

Hi,

I'm running a postfix SMTP relay server on which users are authenticated through SASL via the dovecot authentication service. This works 
without problems.


Recently I extended my configuration to authenticate users with the PLAIN 
mechanism against Azure AD B2C with oauth2 in the future.
Important to mention: currently no users are stored in Azure B2C. I only prepared the whole configuration, and authentication falls back to 
the mysql database.


Now sometimes I get the error below in my logs (not permanently):
...
Dec  7 11:36:49 relay-out1 dovecot: auth: Panic: file oauth2-request.c: line 201 (oauth2_request_start): assertion failed: 
(oauth2_valid_token(input->token))
Dec  7 11:36:49 relay-out1 dovecot: auth: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x3d) [0x7f399402e62d] -> 
/usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f399402e74e] -> /usr/lib/dovecot/libdovecot.so.0(+0x1006eb) [0x7f399403b6eb] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x100781) [0x7f399403b781] -> /usr/lib/dovecot/libdovecot.so.0(+0x54b7c) [0x7f3993f8fb7c] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x4605a) [0x7f3993f8105a] -> /usr/lib/dovecot/libdovecot.so.0(oauth2_passwd_grant_start+0xfa) [0x7f3993f9870a] -> 
dovecot/auth(db_oauth2_lookup+0x2ca) [0x560d0804368a] -> dovecot/auth(auth_request_default_verify_plain_continue+0x2d6) [0x560d08026436] -> 
dovecot/auth(auth_policy_check+0x2b) [0x560d0802071b] -> dovecot/auth(+0x29f72) [0x560d0802ff72] -> 
dovecot/auth(auth_request_handler_auth_continue+0xf7) [0x560d0802a197] -> dovecot/auth(+0x16c62) [0x560d0801cc62] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x69) [0x7f3994051c29] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x131) 
[0x7f3994053251] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) [0x7f3994051ccc] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40) 
[0x7f3994051e50] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f3993fc4df3] -> dovecot/auth(main+0x415) [0x560d0801abf5] -> 
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f3993cee09b] -> dovecot/auth(_start+0x2a) [0x560d0801ad8a]

...


After the error above, postfix reports that the connection to the authentication server 
was lost:
...
Dec  7 11:36:49 relay-out1 rolrelayout1/smtpd[1655601]: warning: domain.com[12.12.12.12]: SASL LOGIN authentication failed: Connection lost to 
authentication server

...


I was able to capture a core dump.
...
# gdb /usr/lib/dovecot/auth core.auth.1654357.1670409409
GNU gdb (Debian 8.2.1-2+b3) 8.2.1
...
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/lib/dovecot/auth...Reading symbols from 
/usr/lib/debug/.build-id/bc/f14ed9cc2dda3ee9565fbd7ccd835460008438.debug...done.
done.
[New LWP 1654357]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `dovecot/auth'.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt full
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0, 1415, 1416, 94613970086496, 10, 139885273538955, 155, 139885273347109, 140724590457408, 118, 206158430224, 
140724590457744, 140724590457536, 16888407779496842752, 94613970086472, 139885273131758}}

pid = <optimized out>
tid = <optimized out>
ret = <optimized out>
#1  0x7f3993cec535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x73f, sa_sigaction = 0x73f}, sa_mask = {__val = {1024, 18446744073709551615, 94613971926672, 
94613971631792, 139885273108347, 94613970084312, 139885273090236, 94613970084312,
  16888407779496842752, 94613970084288, 139885273346402, 140724590457744, 94613970084312, 140724590457744, 139885273346761, 
94613970084312}}, sa_flags = -1811749030, sa_restorer = 0x5}

sigs = {__val = {32, 0 }}
#2  0x7f3993f8feaf in default_fatal_finish (status=0, type=LOG_TYPE_PANIC) 
at failures.c:465
backtrace = 0x560d0813e220 "/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x3d) [0x7f399402e62d] -> 
/usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f399402e74e] -> /usr/lib/dovecot/libdovecot.so.0(+0x1006eb) [0x7f39"...

backtrace = 
recursed = 0
#3  fatal_handler_real (ctx=<optimized out>, format=<optimized out>, 
args=<optimized out>) at failures.c:477
status = 0
#4  0x7f399403b781 in i_internal_fatal_handler (ctx=<optimized out>, 
format=<optimized out>, args=<optimized out>) at failures.c:879
No locals.
#5  0x7f3993f8fb7c in i_panic (format=format@entry=0x7f399407a0e8 "file %s: line 
%d (%s): assertion failed: (%s)") at failures.c:530
ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, 
timestamp_usecs = 0, log_prefix = 0x0, log_prefix_type_pos = 0}
args = {{gp_offset = 40, fp_offset = 48, 

Re: disable pop3 ports?

2021-05-04 Thread Urban Loesch

Hi,

you can try to insert "protocols = imap lmtp" at the end of your 
"dovecot.conf" file.
That works for me.

Regards
Urban

On 04.05.21 at 10:29, Dan Egli wrote:

For gentoo, there is only one package.  And here's your output:

# 2.3.13 (89f716dc2): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.13 (cdd19fe3)
# OS: Linux 5.11.16-gentoo-x86_64 x86_64 Gentoo Base System release 2.7 xfs
# Hostname: jupiter.newideatest.site
auth_debug = yes
auth_mechanisms = plain login
auth_socket_path = /run/dovecot/auth-userdb
auth_verbose = yes
debug_log_path = /var/log/dovecot/debug.log
default_vsz_limit = 1 G
disable_plaintext_auth = no
first_valid_uid = 114
hostname = jupiter.newideatest.site
info_log_path = /var/log/dovecot/info.log
log_path = /var/log/dovecot/error.log
mail_debug = yes
mail_gid = exim4u
mail_location = 
maildir:/var/mail/%d/%n/Maildir:INDEX=/var/mail/indexes/%d/%1n/%n
mail_plugins = fts
mail_privileged_group = mail
mail_server_admin = 
mail_uid = exim4u
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext imapsieve vnd.dovecot.imapsieve

namespace inbox {
   inbox = yes
   location =
   mailbox Drafts {
     special_use = \Drafts
   }
   mailbox Junk {
     special_use = \Junk
   }
   mailbox Sent {
     special_use = \Sent
   }
   mailbox "Sent Messages" {
     special_use = \Sent
   }
   mailbox Trash {
     special_use = \Trash
   }
   prefix =
}
passdb {
   args = /etc/dovecot/dovecot-sql.conf.ext
   driver = sql
}
passdb {
   args = /etc/dovecot/dovecot-ldap.conf.ext
   driver = ldap
}
plugin {
   fts_autoindex = yes
   fts_autoindex_exclude = \Junk
   fts_autoindex_exclude2 = \Trash
   fts_autoindex_exclude3 = \Drafts
   fts_autoindex_exclude4 = \Spam
   fts_enforced = yes
   imapsieve_mailbox1_before = file:/var/lib/dovecot/sieve/report-spam.sieve
   imapsieve_mailbox1_causes = COPY
   imapsieve_mailbox1_name = Spam
   imapsieve_mailbox2_before = file:/var/lib/dovecot/sieve/report-ham.sieve
   imapsieve_mailbox2_causes = COPY
   imapsieve_mailbox2_from = Spam
   imapsieve_mailbox2_name = *
   plugin = fts managesieve sieve
   sieve = file:%h/sieve;active=%h/.dovecot.sieve
   sieve_Dir = ~/sieve
   sieve_execute_bin_dir = /usr/lib/dovecot/sieve-execute
   sieve_filter_bin_dir = /usr/lib/dovecot/sieve-filter
   sieve_global_dir = /var/lib/dovecot/sieve/
   sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment
   sieve_global_path = /var/lib/dovecot/sieve/default.sieve
   sieve_pipe_bin_dir = /var/lib/dovecot/sieve
   sieve_plugins = sieve_imapsieve sieve_extprograms
}
postmaster_address = postmas...@newideatest.site
service auth {
   unix_listener auth-client {
     mode = 0600
     user = exim4u
   }
   unix_listener auth-userdb {
     group = exim4u
     mode = 0777
     user = exim4u
   }
}
service lmtp {
   unix_listener /var/spool/exim/dovecot-lmtp/lmtp {
     group = exim4u
     mode = 0660
     user = exim4u
   }
}
service managesieve-login {
   inet_listener sieve {
     port = 4190
   }
}
service stats {
   unix_listener stats-reader {
     mode = 0777
     user = exim4u
   }
   unix_listener stats-writer {
     mode = 0777
     user = exim4u
   }
}
service submission-login {
   inet_listener submission {
     port = 2587
   }
}
ssl_cert = 


On 2021-05-04 10:20, Dan Egli wrote:

Already did all of that. like I said, EVERY instance of pop3 in the
entire config set is commented out.

Then please post the output of doveconf -n. Seems there is still something left.

The list of installed dovecot packages would also help.



Panic: file message-parser.c

2020-11-26 Thread Urban Loesch

Hi,

I'm running Dovecot 2.3.11 from the Dovecot repository on Debian 9.13.
I get the following error when some of our users search their 
mailboxes.

My error log:
...
Nov 25 17:32:13 dcot-mydomain-1 dovecot: imap-login: ID sent: x-session-id=Dh04+PC0+O9t+MsP, x-originating-ip=109.248.203.15, 
x-originating-port=61432, x-connected-ip=10.10.10.10206, x-connected-port=993, x-proxy-ttl=4: user=<>, rip=109.248.203.15, lip=10.10.10.10206, 
secured, session=
Nov 25 17:32:13 dcot-mydomain-1 dovecot: imap-login: Login: user=, method=PLAIN, rip=109.248.203.15, lip=10.10.10.10206, mpid=110860, 
secured, session=
Nov 25 17:32:18 dcot-mydomain-1 dovecot: imap(u...@domain.com pid:110860 session:): Panic: file message-parser.c: line 174 
(message_part_finish): assertion failed: (ctx->nested_parts_count > 0)
Nov 25 17:32:18 dcot-mydomain-1 dovecot: imap(u...@domain.com pid:110860 session:): Error: Raw backtrace: 
/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7fd61a26c0e2] -> /usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7fd61a26c1ee] -> 
/usr/lib/dovecot/libdovecot.so.0(+0xe9c81) [0x7fd61a276c81] -> /usr/lib/dovecot/libdovecot.so.0(+0xe9d21) [0x7fd61a276d21] -> 
/usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fd61a1cbfa4] -> /usr/lib/dovecot/libdovecot.so.0(+0xc990e) [0x7fd61a25690e] -> 
/usr/lib/dovecot/libdovecot.so.0(message_parser_parse_next_block+0xdc) [0x7fd61a257f8c] -> /usr/lib/dovecot/libdovecot.so.0(message_search_msg+0xa0) 
[0x7fd61a25a810] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xc4c5f) [0x7fd61a5fbc5f] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mail_search_args_foreach+0x45) [0x7fd61a577575] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xc5a5e) 
[0x7fd61a5fca5e] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xc6f63) [0x7fd61a5fdf63] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(index_storage_search_next_nonblock+0x10c) [0x7fd61a5fe68c] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_search_next_nonblock+0x22) [0x7fd61a580fb2] -> dovecot/imap(+0x257f0) [0x55be0929f7f0] -> 
dovecot/imap(command_exec+0x64) [0x55be09297ff4] -> dovecot/imap(+0x24a32) [0x55be0929ea32] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0x137) [0x7fd61a28f707] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xc1) [0x7fd61a291211] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) 
[0x7fd61a28fa69] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fd61a28fc98] -> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7fd61a1fd2f3] -> dovecot/imap(main+0x338) [0x55be09287f68] -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7fd619e0e2e1] -> 
dovecot/imap(_start+0x2a) [0x55be0928816a]
Nov 25 17:32:18 dcot-mydomain-1 dovecot: imap(u...@domain.com pid:110860 session:): Fatal: master: service(imap): child 110860 
killed with signal 6 (core dumps disabled - https://dovecot.org/bugreport.html#coredumps)

...

I searched the archives and found this patch which seems to fix it:
https://github.com/dovecot/core/commit/a668d767a710ca18ab6e7177d8e8be22a6b024fb


Are you planning to release new Debian 9 packages in the Dovecot repository 
with this patch applied?

My current Dovecot version is:
ii  dovecot-core          2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - core files
ii  dovecot-dbg           2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - debug symbols
ii  dovecot-imapd         2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - IMAP daemon
ii  dovecot-lmtpd         2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - LMTP server
ii  dovecot-managesieved  2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - ManageSieve server
ii  dovecot-mysql         2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - MySQL support
ii  dovecot-pop3d         2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - POP3 daemon
ii  dovecot-sieve         2:2.3.11.3-3+debian9   amd64   secure POP3/IMAP server - Sieve filters support
...


Many thanks
Urban


Re: dovecot 2.2.36.4 problem with ulimit

2020-09-16 Thread Urban Loesch

Hi,

perhaps this?

> with new debian9:
> open files  (-n) 1024
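
If that lowered default is the culprit, a systemd drop-in can raise it for dovecot specifically. A sketch (the value is only an example):

```
# systemctl edit dovecot.service   (creates a drop-in override)
[Service]
LimitNOFILE=65536
```

followed by "systemctl daemon-reload" and a restart of dovecot.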

Regards
Urban


On 16.09.20 at 12:57, Maciej Milaszewski wrote:

Hi
Limits:

Where everything was working fine:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 257970
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 65536
pipe size    (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 257970
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


with new debian9:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 257577
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size    (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 257577
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


maybe systemd "something has changed"

and add:

echo "kernel.pid_max = 5" >> /etc/sysctl.conf
sysctl -p
systemctl edit dovecot.service

[Service]
TasksMax=4
systemctl daemon-reload
systemctl restart dovecot.service

cat /sys/fs/cgroup/pids/system.slice/dovecot.service/pids.max


Any idea ?

On 16.09.2020 09:45, Maciej Milaszewski wrote:

Hi
I update os from debian8 to debian9

# 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.2 (aaba65b7)
# OS: Linux 4.9.0-13-amd64 x86_64 Debian 9.13

All works fine but sometimes I get:

Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(pop3): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(imap): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(doveadm):
fork() failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(doveadm):
fork() failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(pop3): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(imap): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:04 dovecot4 dovecot: master: Error: service(imap): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)

The other dovecot is on debian8 and the problem does not exist there - any idea ?


Re: moving /var/vmail/vmail1 Maildir tree to another partition?

2020-08-30 Thread Urban Loesch

Hi,

Not tested, but as a quick solution I think this should work:

- stop dovecot/postfix
- mv "/var/vmail" to the "/newspace/".
- create a symlink which points from "/var/vmail" to "/newspace/vmail".
- start dovecot/postfix
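
The same pattern in miniature, exercised in a throwaway directory (the temp paths stand in for the real /var/vmail move):

```shell
set -e
# toy re-creation of the move: relocate the mail tree, then leave a
# symlink at the old path so no configuration has to change
tmp=$(mktemp -d)
mkdir -p "$tmp/var/vmail/user1"
echo hello > "$tmp/var/vmail/user1/msg"
mkdir -p "$tmp/newspace"
mv "$tmp/var/vmail" "$tmp/newspace/vmail"
ln -s "$tmp/newspace/vmail" "$tmp/var/vmail"
cat "$tmp/var/vmail/user1/msg"   # old path still resolves -> prints "hello"
```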

Regards
Urban

On 28.08.2020 at 16:35, Voytek Eymont wrote:

I have a small Postfix/Dovecot/MariaDB on Centos 7 with a dozen virtual
domains, I'm running out of space in /var but got space

can I simply move the entire /var/vmail/vmail1 tree elsewhere ?
like say /vmail/vmail1

is it as simple as shutting Dovecot/Postfix, move tree, edit conf and re
start ?
or am I missing something? Or is there a better way of doing this?

# grep 'var/vmail' *.conf
dovecot.conf:sieve_global_dir = /var/vmail/sieve
dovecot.conf:sieve_global_path = /var/vmail/sieve/dovecot.sieve
dovecot.conf:#location =
maildir:/var/vmail/public/:CONTROL=~/Maildir/public:INDEX=~/Maildir/public

# cd ../postfix
# grep 'var/vmail' *.cf
main.cf:virtual_mailbox_base = /var/vmail

# dovecot --version
2.3.11.3 (502c39af9)




Re: Still interested in SMTPUTF8 support

2020-04-22 Thread Urban Loesch

Hi,

same here, I updated my whole mail infrastructure (postfix, milter, etc.).
Now only dovecot does not support SMTPUTF8, which forced me to disable it 
completely on my edge servers.

Thanks
Urban

On 17.03.20 at 09:52, David Bürgin wrote:

I haven’t seen this term on the mailing list for a while. I don’t know
if you’re tracking this somewhere, in any case I’m curious if you are
still planning to add support for SMTPUTF8.

Cheers,



Dovecot Imap-Proxy: openssl_iostream_handle_error

2020-03-11 Thread Urban Loesch
-epoll.c:222
#17 0x7fb7bca261d9 in io_loop_handler_run (ioloop=) at 
ioloop.c:770
#18 0x7fb7bca26408 in io_loop_run (ioloop=0x5646c074ce10) at ioloop.c:743
#19 0x7fb7bc993423 in master_service_run (service=0x5646c074cca0, 
callback=callback@entry=0x7fb7bccd7d10 ) at 
master-service.c:809
#20 0x7fb7bccd8541 in login_binary_run (binary=, argc=, argv=) at main.c:560
#21 0x7fb7bc5a62e1 in __libc_start_main (main=0x5646beb832f0 , argc=1, argv=0x7fff75b241c8, init=, fini=, 
rtld_fini=, stack_end=0x7fff75b241b8) at ../csu/libc-start.c:291

#22 0x5646beb8333a in _start ()
.


I hope this is correct, given that I'm not very experienced with core dumps and 
the process of debugging them.

Many thanks
Urban Loesch


Re: Dovecot quota and Postfix smtpd_recipient_restrictions?

2019-03-20 Thread Urban Loesch via dovecot

Hi,


I would like to enable (the same) quota (count) for all (virtual)users,
on Debian Stretch, Postfix 3.1.8, Dovecot 2.2.27,
and is not clear for me if I need to tell Postfix to communicate with the 
service in /etc/postfix/main.cf as here:


smtpd_recipient_restrictions =
     ...
     check_policy_service inet:mailstore.example.com:12340


I configured it like your example above and it works for me.
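
For reference, the Dovecot side of that policy check is the quota-status service; a sketch of a matching listener (the port mirrors the example above):

```
service quota-status {
  executable = quota-status -p postfix
  inet_listener {
    port = 12340
  }
  client_limit = 1
}
```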

Best
Urban


Re: Moving Alternate Storage to another disk.

2019-01-03 Thread Urban Loesch via dovecot

Hi,

if you have the new disk installed on the same server you can try:

- mount new disk for example in /mnt/temp
- rsync -vaWH all files + directories from the old to the new disk
- stop dovecot so no changes will happen on disks
- make a final rsync pass -> it should not take much time
- umount the old disk
- mount the new disk at the original "alt-storage" path, so you don't have to 
change each soft-link in each user's directory.
- start dovecot

Not tested, but in theory it should work.

Best
Urban

Am 31.12.18 um 15:02 schrieb bOnK:

Hello,

Dovecot 2.3.4_3 on FreeBSD 11.2.

I am using mdbox Alternate Storage since about two years without any problems.
However, the disk containing this storage is almost full and I have to move 
this data to another disk, probably zpool.

Would it be okay to do the following?
1) Shut down dovecot (and mail server) so no new mail comes in.
2) Copy/move all files in ALT location to new disk, using shell commands like 
cp/mv/cpdup.
3) Change the path to ALT in dovecot-conf mail_location.
4) Change the 'dbox-alt-root' soft-links in each users main (INBOX) directory 
to point to this new location.
5) Start up dovecot and mail server.

Am I missing something or maybe there is a better way?



Re: Possible architecture ?

2018-10-03 Thread Urban Loesch

Hi,

we have running a similar setup for some years now (IMAP + POP3).

- frontend imap+pop3 proxy (imap.mydomain.com)
- multiple backend servers
- each backend is responsible for a few domains (e.g. domains beginning with 
a-f, g-l, and so on)

Database setup:
- 3 MySql Servers in a Master/Slave configuration:
- 1 Master, where user, password and proxy information are stored.
- 2 slaves; each dovecot backend and the frontend proxy are configured 
to read the user configuration from the same database.
	- the 2 slaves help us to keep the mysql select queries away from our master server. But depending on your workload, perhaps one central mysql 
server without slaves is enough.



For proxying requests to the correct backend server see: 
https://wiki.dovecot.org/PasswordDatabase/ExtraFields/Proxy

In our setup the frontend proxy only checks whether the user exists. If so, the request is forwarded to the correct backend and the "real" 
authentication is performed there.
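
As a sketch of how such a lookup can look in SQL (table and column names here 
are made up, not our real schema), using the documented proxy_maybe extra 
field:

password_query = SELECT userid AS user, password, host, 'Y' AS proxy_maybe \
  FROM users WHERE userid = '%u'

With proxy_maybe, dovecot proxies the connection to "host" unless it points to 
the local server, in which case the login is handled locally.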


Best
Urban Loesch


Am 03.10.2018 um 15:42 schrieb Alexandre Ellert:

Hi,

I've got no answer.. Can someone please help ?

Thank you.

Alex

On Tue, 18 Sep 2018 at 22:55, Alexandre Ellert <ellertalexan...@gmail.com> wrote:

Hi,

I'd like to achieve the following setup whit dovecot using multiple servers 
:
- one server dedicated to all client IMAP (TLS) connections 
(imap.mymaindomain.com, see
below )
- each backend server has it's own local storage. no replication
- each backend server responsible of a few domains
- each backend server has it's own Mysql local database for user's 
passwords.

                                                        ===> Server 1 : 
domains A, B and C

> imap.mymaindomain.com  ===> Server 2 : domains D, E and F
              (143 TLS / 993 SSL)
                                                         ===> Server 3 : 
domains G, H

For example, if a user connects from domain E to imap.mymaindomain.com, 
will Dovecot be
able to use password database hosted on Server 2 ?

Thank you !

Alex



Re: Changes to userdb not picked up

2017-02-06 Thread Urban Loesch

You can flush the cache with: "doveadm auth cache flush $USER"

Regards
Urban
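
If stale limits are a recurring problem, the cache lifetime itself can also be 
lowered in dovecot.conf; the values below are only an illustration:

auth_cache_ttl = 5 mins
auth_cache_negative_ttl = 1 min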


Am 06.02.2017 um 13:59 schrieb Tom Sommer:

I have my quota limits stored in userdb and auth_cache enabled (default 
settings).

When I change the quota limit the old limit is still cached for the user for 
another hour. Is there any way to prevent this from happening?

Thanks



Re: Outlook 2010 woes

2016-10-13 Thread Urban Loesch



Am 13.10.2016 um 16:53 schrieb Bryan Holloway:

On 10/13/16 9:07 AM, Aki Tuomi wrote:



On October 13, 2016 at 4:55 PM Jerry  wrote:


On Thu, 13 Oct 2016 08:36:23 -0500, Bryan Holloway stated:


I also extended the "Server Timeout" setting in OT2010 to 10 minutes,
which doesn't seem to help either. (!)


Outlook 2010 is a very old version. Why not update to the 2016 version.
I am running it without any problems. If you do update, remember to
remove the old version completely first.

--
Jerry


I do wonder if the real culprit is some firewall that timeouts the idle 
connection.

Aki



I considered that, but again everything worked fine until we moved them from 
2.1 to 2.2. Their same firewall is in use.

Is there a way to see the IMAP commands coming from the client? I've tried 
looking at PCAPs, but of course they're encrypted so I can't see the actual
dialog going on between the server and client. I didn't see an obvious way to 
do this in the docs.



There is a "rawlog" feature, which writes the whole decrypted imap session 
to files.

...
service imap {
...
executable = imap postlogin
...
}

...

service postlogin {
  executable = script-login -d rawlog
  unix_listener postlogin {
  }
}
...

This should write *.in and *.out files to "$mail_location/dovecot.rawlog/" 
for each imap session.
The directory must be writable by the dovecot user. I tested this some years 
ago, so I'm not sure whether the configuration
is still valid.

Regards
Urban


Re: Increased errors "Broken MIME parts" in log file

2016-06-20 Thread Urban Loesch

Hi,


Am 11.06.2016 um 20:31 schrieb Timo Sirainen:

On 11 Jun 2016, at 21:29, Timo Sirainen  wrote:


On 01 Jun 2016, at 16:48, Alessio Cecchi  wrote:


Hi,

after the last upgrade to Dovecot 2.2.24.2 (d066a24) I see an increased number of errors 
"Broken MIME parts" for users in dovecot log file, here an example:

Jun 01 15:25:29 Error: imap(alessio.cec...@skye.it): Corrupted index cache file 
/home/domains/skye.it/alessio.cecchi/Maildir/dovecot.index.cache: Broken MIME 
parts for mail UID 34 in mailbox INBOX: Cached MIME parts don't match message 
during parsing: Cached header size mismatch 
(parts=41005b077b07fc0a400b030048008007600064002000210001004000260827002900ea00f00044005d091e002000b308e0082d00010041007b09b208de08)


Should be fixed by 
https://github.com/dovecot/core/commit/1bc6f1c54b4d77830288b8cf19060bd8a6db7b27


Oh, also this is required for it: 
https://github.com/dovecot/core/commit/20faa69d801460e89aa0b1214f3db4b026999b1e



I installed a new version three days ago. There are no more error entries like 
"Broken MIME parts" in the logs.

Many thanks for the fix
Urban


Re: Increased errors "Broken MIME parts" in log file

2016-06-10 Thread Urban Loesch

Hi,

same here on my installation. Version: Enterprise Edition: 2:2.2.24.1-2

Some logs:

...
Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: Corrupted index cache file 
/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: Broken MIME parts for mail UID 11678 in mailbox INBOX: Cached MIME parts don't 
match message during parsing: Cached header size mismatch 
(parts=41009f0dd90dd52f023102004800e40d5c005e004800580e5c005f00a52ec52f2001)
Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: Corrupted index cache file 
/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: Broken MIME parts for mail UID 11694 in mailbox INBOX: Cached MIME parts don't 
match message during parsing: Cached header size mismatch 
(parts=4100bb0df50de4f2a9f80200480e5c005e004800740e5c005f00b4f16cf7b805)


Got also this errors:
Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: 
unlink(/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache) failed: No such file or directory (in mail-cache.c:28)
Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: Corrupted index cache file 
/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: Broken MIME parts for mail UID 11742 in mailbox INBOX: Cached MIME parts don't 
match message during parsing: Cached header size mismatch (parts=)
Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: 
unlink(/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache) failed: No such file or directory (in mail-cache.c:28)
Jun  5 07:40:01 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: Corrupted index cache file 
/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache: Broken MIME parts for mail UID 11752 in mailbox INBOX: Cached MIME parts don't 
match message during parsing: Cached header size mismatch (parts=)
Jun  5 07:40:02 dovecot-server dovecot: imap(u...@domain.com pid:27937 session:): Error: 
unlink(/home/dovecotindex/domain.com/user/mailboxes/INBOX/dovecot.index.cache) failed: No such file or directory (in mail-cache.c:28)


...

Thanks
Urban


Am 09.06.2016 um 16:06 schrieb Dave:

On 02/06/2016 22:58, Timo Sirainen wrote:

On 01 Jun 2016, at 16:48, Alessio Cecchi  wrote:


Hi,

after the last upgrade to Dovecot 2.2.24.2 (d066a24) I see an increased number of errors 
"Broken MIME parts" for users in dovecot log file, here an
example:

Jun 01 15:25:29 Error: imap(alessio.cec...@skye.it): Corrupted index cache file 
/home/domains/skye.it/alessio.cecchi/Maildir/dovecot.index.cache:
Broken MIME parts for mail UID 34 in mailbox INBOX: Cached MIME parts don't 
match message during parsing: Cached header size mismatch
(parts=41005b077b07fc0a400b030048008007600064002000210001004000260827002900ea00f00044005d091e002000b308e0082d00010041007b09b208de08)


..


but the error reappears always (for the same UID) when I do "search" from 
webmail. All works fine for the users but I don't think is good to have
these errors in log file.


If it's reproducible for a specific email, can you send me the email?


I'm replying to this again for a couple of reasons:

1. I've not heard any further discussion and I accidentally replied to the 
wrong thread initially (oops!)

2. It's actually looking to become a fairly serious issue (extra 100Mb/s of 
network traffic, extra 10K NFS ops/sec)

I've been seeing the same problem after upgrading from 2.2.18 to 2.2.24 with 
identical config. Reads from mailboxes have doubled, which looks
consistent with repeated index rebuilds. It's not got any better over time, so 
the indexes aren't managing to self-heal (over 1300 of these errors
today so far, for example).

[2016-06-09T09:50:30+0100] imap(): Error: Corrupted index cache file 
/mnt/index/c69/923413/.INBOX/dovecot.index.cache: Broken MIME parts for mail
UID 74359 in mailbox INBOX: Cached MIME parts don't match message during 
parsing: Cached header size mismatch

stats: Error: FIFO input error: CONNECT: Duplicate session ID

2016-04-18 Thread Urban Loesch

Hi,

yesterday I updatet to Dovecot EE version 2:2.2.23.1-1.
Now sometimes I see this errors in my logs:

...
Apr 18 09:02:19 dovecot1 dovecot: stats: Error: FIFO input error: CONNECT: Duplicate session ID NjcCDoSAFFd/KQAAFMUCeg for user u...@domain1.com 
service lmtp
Apr 18 09:04:05 dovecot1 dovecot: stats: Error: FIFO input error: CONNECT: Duplicate session ID rjV1HtCGFFcoogAAFMUCeg for user u...@domain2.com 
service lmtp
Apr 18 09:04:30 dovecot1 dovecot: stats: Error: FIFO input error: CONNECT: Duplicate session ID Sqi0IMWAFFeRNQAAFMUCeg for user u...@domain3.com 
service lmtp

...

The error only appears when a mail is sent to 2 or more recipients 
concurrently.
It's not critical for me, all mails are getting delivered correctly.

Thanks and regards
Urban Loesch


Re: Randomly SSL Errors since upgrade to 2.2.23-1 (Enterprise Edition)

2016-04-15 Thread Urban Loesch

[UPDATE]:

I dug deeper into my logs and found that before the upgrade I got these 
errors:
...
Apr 15 09:36:09 imap1 dovecot: pop3-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=x.x.x.x, lip=x.x.x.x, TLS handshaking: SSL_accept() 
failed: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message
Apr 15 09:37:56 imap1 dovecot: imap-login: Disconnected (no auth attempts in 1 secs): user=<>, rip=x.x.x.x, lip=x.x.x.x, TLS handshaking: SSL_accept() 
failed: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message
Apr 15 09:45:40 imap1 dovecot: imap-login: Disconnected (no auth attempts in 1 secs): user=<>, rip=x.x.x.x, lip=x.x.x.x, TLS handshaking: SSL_accept() 
failed: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message
Apr 15 09:46:15 imap1 dovecot: pop3-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=x.x.x.x, lip=x.x.x.x, TLS handshaking: SSL_accept() 
failed: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message

...

After the upgrade the errors above stopped and now they look like this:


Apr 15 13:41:30 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 13:41:30 imap1 dovecot: pop3-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=x.x.x.x, lip=x.x.x.x, TLS handshaking: SSL_accept() 
failed: Unknown error



or


Apr 15 11:00:59 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:00:59 imap1 dovecot: imap-login: proxy(u...@domain.com): disconnecting x.x.x.x (Disconnected by client: read(size=1026) failed: Connection 
reset by peer(0s idle, in=467, out=384881)): user=<u...@domain.com>, method=PLAIN, rip=x.x.x.x, lip=x.x.x.x, TLS: SSL_write() failed: Bad file 
descriptor, TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits)



At first I didn't see the earlier errors, as they are only written to "mail.log" and not 
to "mail.err" on Debian.

So I think this is not really critical as there are no user complaints right 
now.

Thanks
Urban Loesch


Am 15.04.2016 um 15:14 schrieb Urban Loesch:

Hi,

first of all, many thanks for a great piece of software.

Today I updated one of our 2 IMAP/POP3 proxies from version 2.2.15.17-1 to 
2.2.23.1-1 (both are enterprise editions).
After the update I now see randomly the following errors in the log file on my 
first proxy:

...
Apr 15 10:28:54 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:34:24 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:37:11 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:39:04 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:43:02 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:45:14 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:50:31 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:54:56 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:57:44 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:59:49 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:00:59 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:13:43 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error: SSL
alert number 80
Apr 15 11:15:21 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:18:33 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:20:12 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:20:40 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
...

Some more details:
OS: Debian wheezy (latest patchlevel)

Dovecot:
ii  dovecot-ee-core 2:2.2.23.1-1
ii  dovecot-ee-imapd2:2.2.23.1-1
ii  dovecot-ee-managesieved 2:2.2.23.1-1
ii  dovecot-e

Randomly SSL Errors since upgrade to 2.2.23-1 (Enterprise Edition)

2016-04-15 Thread Urban Loesch

Hi,

first of all, many thanks for a great piece of software.

Today I updated one of our 2 IMAP/POP3 proxies from version 2.2.15.17-1 to 
2.2.23.1-1 (both are enterprise editions).
After the update I now see randomly the following errors in the log file on my 
first proxy:

...
Apr 15 10:28:54 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:34:24 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:37:11 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:39:04 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:43:02 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:45:14 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:50:31 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:54:56 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
Apr 15 10:57:44 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 10:59:49 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:00:59 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:13:43 imap1 dovecot: pop3-login: Error: SSL: Stacked error: error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error: SSL 
alert number 80

Apr 15 11:15:21 imap1 dovecot: imap-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:18:33 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:20:12 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:140D00CF:SSL routines:SSL_write:protocol is shutdown
Apr 15 11:20:40 imap1 dovecot: pop3-login: Error: SSL: Stacked error: 
error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
...

Some more details:
OS: Debian wheezy (latest patchlevel)

Dovecot:
ii  dovecot-ee-core 2:2.2.23.1-1
ii  dovecot-ee-imapd2:2.2.23.1-1
ii  dovecot-ee-managesieved 2:2.2.23.1-1
ii  dovecot-ee-mysql2:2.2.23.1-1
ii  dovecot-ee-pop3d2:2.2.23.1-1
ii  dovecot-ee-sieve2:2.2.23.1-1

Libssl:
ii  libssl1.0.0:amd64   1.0.1e-2+deb7u20


On my second proxy, also running Debian Wheezy with the latest 
patchlevel, dovecot version 2.2.15.17-1 is still installed (not yet updated):
ii  dovecot-ee-core 1:2.2.15.17-1
ii  dovecot-ee-imapd1:2.2.15.17-1
ii  dovecot-ee-managesieved 0.4.6-4
ii  dovecot-ee-mysql1:2.2.15.17-1
ii  dovecot-ee-pop3d1:2.2.15.17-1
ii  dovecot-ee-sieve0.4.6-4

On this box I can't see these strange errors.

So far no user has complained about not being able to read their mail.

Do you know what could cause these errors (for example: very old clients and so 
on)?
Or is the logging of these errors new in dovecot 2.2.23?

Many thanks
Urban Loesch


Re: [SPAM: high] imap logging ?

2015-11-25 Thread Urban Loesch
Hi,

perhaps this is what you need.

http://wiki2.dovecot.org/Plugins/MailLog

No "mail_debug enabled" neccessary.

Regards
Urban

Am 26.11.2015 um 07:51 schrieb mancyb...@gmail.com:
> Hi I'm trying to log my users imap actions, like when creating a folder, 
> moving an email or deleting an email.
> So I've enabled 'mail_debug' and I'm checking /var/log/dovecot/debug.log
> this is what happens when I delete an email:
> 
> Nov 26 07:46:38 auth-worker(1555): Debug: sql(XXX,127.0.0.1): query: SELECT 
> password FROM mailbox WHERE username = 'XXX' and active = 1 and 
> restrictedAccess = 0
> Nov 26 07:46:38 auth: Debug: client out: OK   1   user=XXX
> Nov 26 07:46:38 auth-worker(1555): Debug: sql(XXX,127.0.0.1): SELECT 
> '/var/vmail/XXX/XXX' as home, 5000 AS uid, 5000 AS gid, concat('*:storage=', 
> quota) AS quota_rule FROM mailbox WHERE username = 'XXX'
> Nov 26 07:46:38 auth: Debug: master out: USER 374472705   XXX 
> home=/var/vmail/XXX/XXX uid=5000gid=5000
> quota_rule=*:storage=524288
> Nov 26 07:46:38 imap(XXX): Debug: Effective uid=5000, gid=5000, 
> home=/var/vmail/XXX/XXX
> Nov 26 07:46:38 imap(XXX): Debug: Quota root: name=User quota backend=maildir 
> args=
> Nov 26 07:46:38 imap(XXX): Debug: Quota rule: root=User quota mailbox=* 
> bytes=536870912 messages=0
> Nov 26 07:46:38 imap(XXX): Debug: Quota rule: root=User quota mailbox=Trash 
> bytes=+104857600 messages=0
> Nov 26 07:46:38 imap(XXX): Debug: maildir++: root=/var/vmail/XXX/XXX/Maildir, 
> index=/var/vmail/XXX/XXX/Maildir/indexes, control=, 
> inbox=/var/vmail/XXX/XXX/Maildir, alt=
> 
> and when creating a folder, access an email or moving an email, the output is 
> basically the same:
> I'm unable to find the actual IMAP command.
> 
> So, question: is there a way to log IMAP commands to a file ?
> 
> Thank you,
> Mike
> 


Re: sieve is working/forwarding mail - but not for all users

2015-11-25 Thread Urban Loesch
Hi,

To how many addresses do you redirect the incoming mails?
Could it be that you hit the "sieve_max_redirects = n" configuration parameter?
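
If that's the limit being hit, it can be raised in the plugin section; a 
sketch (the default is 4 in Pigeonhole, the value 15 here is arbitrary):

plugin {
  sieve_max_redirects = 15
}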

Regards
Urban

Am 25.11.2015 um 15:05 schrieb Götz Reinicke - IT Koordinator:
> Hi,
> 
> we have dovecot-ee-2.2.18.2 and pigeonhole/managesieve 0.4.8 running for
> some time.
> 
> Today some users informed us that they did not get mails from one
> project account forwarded to there personal accounts any more.
> 
> This worked till one week ago and I cant think of any changes we made...
> 
> The project account keeps a copy of received mails.
> 
> I tried two different accounts to configure forwarding to internal and
> external mail addresses which is working.
> 
> Question: Any hint or idea? How may I debug sieve forwarding?
> 
>   Thanks and regards . Götz
> 


Question about dovecot replication

2015-10-23 Thread Urban Loesch
Hi,

last week I installed 2 servers with two dovecot nodes and replication in 
active/active mode located in different datacenters.

Based on the howto from the wiki "http://wiki2.dovecot.org/Replication" it works 
great.
According to the wiki it is recommended to run "doveadm purge" on both systems 
continuously, because "doveadm purge" will not be replicated by the
replication service. No problem so far.
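
For the purge part, a cron entry on each node does it; a sketch (the schedule 
is arbitrary):

# /etc/cron.d/dovecot-purge, installed on both nodes
30 3 * * * root doveadm purge -A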

But I have one doubt:
is it also recommended (to keep the mail data in sync) to run "doveadm replicator 
replicate '*'" continuously on both nodes?
Or is it enough on only one node?

Or should I run "doveadm sync -A tcp:anotherhost.example.com" at regular 
intervals? Perhaps once a day on both nodes?


Thanks
Urban Loesch


Re: concerning dovecot settings for high volume server

2015-09-14 Thread Urban Loesch
Hi Rajesh,

our setup looks as follows:

- we are running linux-vserver as virtualization technology
- we have 2 dedicated IMAP/POP3 Proxies in front of 8 dovecot containers.
- in total about 2900 concurrent imap sessions on each imap proxy and about 180 
concurrent pop3 sessions

- all dovecot containers are running on the same hardware (no problems until 
today):
DELL PER720 with 2x 200GB RAID 1 SSD's for dovecot indexes, 8x 4TB RAID 
10 for maildata, 2x300GB RAID1 for OS
64GB RAM, 2x CPU E5-2640 0 @ 2.50GHz

- HA is Active/Passive with DRBD on 10GBIT dedicated NIC's for all 3 partitions.
- in summary there are about 47k accounts on it.
- 15minutes system load is between 0.5 - 2.5

- mailserver software is always postfix
- amavis with spamassassin and clamav
- opendkim, opendmarc as milter implementations

Front MX and antispam filtering is running on 2 different machines. Mail volume 
is between 200k and 600k (spam inclusive) per day.
We have never seen email re-downloads, except when a customer changes mail 
clients. But that's normal.

Hope that helps.

Best
Urban



Am 13.09.2015 um 19:35 schrieb Rajesh M:
> thanks very much urban. this was very helpful.
> 
> i have around 12500 users spread over 3 independent servers each having 
> around 4000+ users
> i am using qmailtoaster, vpopmail, spamassassin and dovecot.
> 
> in future i am planning to consolidate all using a HA cluster.
> 
> if it is ok with you could you kindly share some information about your email 
> server configuration. if you do not wish to put it on the list then you can 
> directly email me.
> 
> 1) is your email volume high ?
> 2) server hardware to support  28000 users
> 3) mailserver software - exim or postfix ??.
> 4) antispam software like spamassassin if any
> 
> also if you have faced any email re-download issues with dovecot sometimes 
> randomly incase of pop3 users storing emails on the server ?
> 
> 
> thanks
> rajesh
> 
> 
> 
> - Original Message -
> From: Urban Loesch [mailto:b...@enas.net]
> To: dovecot@dovecot.org
> Sent: Sun, 13 Sep 2015 09:33:14 +0200
> Subject: Re: concerning dovecot settings for high volume server
> 
> Hi,
> 
> I have running dovecot with about 28k users.
> Here comes my relevant config for pop3 and imap from "doveconf -n".
> No problems so far.
> 
> -- snip --
> default_client_limit = 2000
> ...
> 
> service imap-login {
>inet_listener imap {
>  port = 143
>}
>process_limit = 256
>process_min_avail = 50
>service_count = 1
> }
> service imap {
>process_limit = 2048
>process_min_avail = 50
>service_count = 1
>vsz_limit = 512 M
> }
> ...
> 
> service pop3-login {
>inet_listener pop3 {
>  port = 110
>}
>process_limit = 256
>process_min_avail = 25
>service_count = 1
> }
> service pop3 {
>process_limit = 256
>process_min_avail = 25
>service_count = 1
> }
> ...
> 
> protocol imap {
>imap_client_workarounds = tb-extra-mailbox-sep
>imap_id_log = *
>imap_logout_format = bytes=%i/%o session=<%{session}>
>mail_max_userip_connections = 40
>mail_plugins = " quota mail_log notify zlib imap_quota imap_zlib"
> }
> 
> ...
> protocol pop3 {
>mail_plugins = " quota mail_log notify zlib"
>pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, 
> \ size=%s uidl_hash=%u session=<%{session}>
> }
> -- snip --
> 
> Regards
> Urban
> 
> 
> Am 12.09.2015 um 20:53 schrieb Rajesh M:
>> hi
>>
>> centos 6 64 bit
>>
>> hex core processor with hyperthreading ie display shows 12 cores
>> 16 gb ram
>> 600 gb 15000 rpm drive
>>
>> we are having around 4000 users on a server
>>
>>
>> i wish to allow 1500 pop3 and 1500 imap connections simultaneously.
>>
>> need help regarding the settings to handle the above
>>
>> imap-login, pop3-login
>> imap pop3 service settings
>>
>> i recently i got an error
>> imap-login: Error: read(imap) failed: Remote closed connection 
>> (process_limit reached?)
>>
>>
>> my current dovecot config file
>>
>> # 2.2.7: /etc/dovecot/dovecot.conf
>> # OS: Linux 2.6.32-431.23.3.el6.x86_64 x86_64 CentOS release 6.5 (Final)
>> auth_cache_negative_ttl = 0
>> auth_cache_ttl = 0
>> auth_mechanisms = plain login digest-md5 cram-md5
>> default_login_user = vpopmail
>> disable_plaintext_auth = no
>> first_valid_gid = 89
>> first_valid_uid = 89
>> log_path = /var/log/dovecot.log
>> login_greeting = ready.
>> mail_max_userip_connectio

Re: concerning dovecot settings for high volume server

2015-09-13 Thread Urban Loesch

Hi,

I have running dovecot with about 28k users.
Here comes my relevant config for pop3 and imap from "doveconf -n".
No problems so far.

-- snip --
default_client_limit = 2000
...

service imap-login {
  inet_listener imap {
port = 143
  }
  process_limit = 256
  process_min_avail = 50
  service_count = 1
}
service imap {
  process_limit = 2048
  process_min_avail = 50
  service_count = 1
  vsz_limit = 512 M
}
...

service pop3-login {
  inet_listener pop3 {
port = 110
  }
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service pop3 {
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
...

protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  imap_id_log = *
  imap_logout_format = bytes=%i/%o session=<%{session}>
  mail_max_userip_connections = 40
  mail_plugins = " quota mail_log notify zlib imap_quota imap_zlib"
}

...
protocol pop3 {
  mail_plugins = " quota mail_log notify zlib"
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, 
\ size=%s uidl_hash=%u session=<%{session}>

}
-- snip --

Regards
Urban


Am 12.09.2015 um 20:53 schrieb Rajesh M:

hi

centos 6 64 bit

hex core processor with hyperthreading ie display shows 12 cores
16 gb ram
600 gb 15000 rpm drive

we are having around 4000 users on a server


i wish to allow 1500 pop3 and 1500 imap connections simultaneously.

need help regarding the settings to handle the above

imap-login, pop3-login
imap pop3 service settings

i recently i got an error
imap-login: Error: read(imap) failed: Remote closed connection (process_limit 
reached?)


my current dovecot config file

# 2.2.7: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-431.23.3.el6.x86_64 x86_64 CentOS release 6.5 (Final)
auth_cache_negative_ttl = 0
auth_cache_ttl = 0
auth_mechanisms = plain login digest-md5 cram-md5
default_login_user = vpopmail
disable_plaintext_auth = no
first_valid_gid = 89
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = ready.
mail_max_userip_connections = 50
mail_plugins = " quota"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave
namespace {
   inbox = yes
   location =
   prefix =
   separator = .
   type = private
}
passdb {
   args = cache_key=%u webmail=127.0.0.1
   driver = vpopmail
}
plugin {
   quota = maildir:ignore=Trash
   quota_rule = ?:storage=0
}
protocols = imap pop3
service imap-login {
   client_limit = 256
   process_limit = 400
   process_min_avail = 4
   service_count = 0
   vsz_limit = 512 M
}
service pop3-login {
   client_limit = 1000
   process_limit = 400
   process_min_avail = 12
   service_count = 0
   vsz_limit = 512 M
}
ssl_cert = 

Re: need help debugging deleted mails

2015-08-13 Thread Urban Loesch


Am 13.08.2015 um 16:15 schrieb Vu Ngoc VU:
 Date: Thu, 13 Aug 2015 13:38:58
 From: Marcus Rückert da...@opensu.se
 To: dovecot@dovecot.org
 Subject: Re: need help debugging deleted mails

 mail_log plugin might help

darix
 
 I've seen this plugin, but I fear that it will be enabled for all users.
 I don't have seen an way to enable it only for 1 user like rawlog permits.

We have more than 20k accounts on our server and have no problems with this 
plugin at all.
It helps us very often when users complain that they are losing 
emails.
Every time it turned out to be caused by the user's different clients, e.g. access via IMAP
and POP3.

 
 Maybe, I'll setup a new container for only users that I want to debug with 
 that plugin enabled.
 

Reagards
Urban


Re: 451 4.3.0 Temporary internal failure

2015-08-04 Thread Urban Loesch
Hi,

that should be:

# Directory in which LDA/LMTP temporarily stores incoming mails >128 kB.
#mail_temp_dir = /tmp
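For illustration, pointing it at a directory on a bigger filesystem could look like this (the path is only an example; the directory must exist and be writable by the LDA/LMTP processes):

```
# dovecot.conf: store incoming mails larger than 128 kB here instead of /tmp
mail_temp_dir = /var/spool/dovecot-tmp
```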

Best
Urban

On 04.08.2015 at 12:36, Nutsch wrote:
 Hi,
 
 OS: Debian GNU/Linux 7
 
 df -hT
 Filesystem Type  Size  Used Avail Use% Mounted on
 /dev/vzfs  reiserfs   30G   12G   19G  38% /
 
 /etc/fstab
 proc  /proc   procdefaults00
 none  /dev/ptsdevpts  rw,gid=5,mode=62000
 none  /run/shmtmpfs   defaults00
 
 
 If someone knows an option to change the tmp directory in dovecot.conf, it 
 would be very helpful. I can't find it.
 I can't increase the size of tmp, it's not a partition, and even my provider 
 doesn't know how it's possible that the tmp directory is limited to 1 MB.
 
 Regards
 Noctua
 
 On 2015-08-03 20:20, Urban Loesch wrote:
 Hi,

 according to dovecot.conf lmtp stores all mails temporarily in /tmp/ that 
 are bigger than 256KB. You can change the directory in dovecot.conf or
 you should increase the /tmp/ size. It could be that /tmp/ is a ramdisk. 
 Check /etc/fstab.

 Which os you are using?
 What does df -hT say?
 What does /etc/fstab say?

 Regards
 Urban

 On 03.08.2015 at 19:54, Nutsch wrote:
 Hi

 I can send mails without problems in any direction, except when the
 attachments are bigger than 1 MB. I always get this message:

 relay=mail.example.net[private/dovecot-lmtp], delay=35155,
 delays=35155/0.03/0.02/0.09, dsn=4.3.0, status=deferred (host
 mail.example.net[private/dovecot-lmtp] said: 451 4.3.0 Temporary
 internal failure (in reply to end of DATA command)) Aug  3 19:34:34
 46185 dovecot: lmtp(6477): Disconnect from local: Temporary internal
 failure (in DATA)


 postconf message_size_limit
 message_size_limit = 0
 postconf mail_size_limit
 mailbox_size_limit = 0

 I can send big attachments to an external address but not from intern to
 intern. Someone said that maybe the tmp directory is the problem, so I
 checked the size.

 ls -ald /tmp/; df -h /tmp/
 drwxrwxrwt 4 root root 80 Aug  3 19:44 /tmp/
 Filesystem  Size  Used Avail Use% Mounted on
 -   1,0M  4,0K 1020K   1% /tmp
 root@example:/tmp#

 Is there a connection between lmtp and the size of the tmp directory?
 And why is the size 1,0M? tmp is not its own partition. Should I increase
 the size? And if yes, how?

 df -h
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/vzfs80G   12G   69G  15% /

 ???

 br. noctua
 


Re: 451 4.3.0 Temporary internal failure

2015-08-03 Thread Urban Loesch

Hi,

according to dovecot.conf lmtp stores all mails temporarily in /tmp/ 
that are bigger than 256KB. You can change the directory in dovecot.conf 
or you should increase the /tmp/ size. It could be that /tmp/ is a 
ramdisk. Check /etc/fstab.
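For illustration, a tmpfs mount for /tmp in /etc/fstab looks roughly like this (the size= value is an example; enlarging it and remounting gives /tmp more room):

```
# /etc/fstab: tmpfs-backed /tmp, capped at 64 MB
tmpfs  /tmp  tmpfs  defaults,size=64m  0  0
```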


Which os you are using?
What does df -hT say?
What does /etc/fstab say?

Regards
Urban

On 03.08.2015 at 19:54, Nutsch wrote:

Hi

I can send mails without problems in any direction, except when the
attachments are bigger than 1 MB. I always get this message:

relay=mail.example.net[private/dovecot-lmtp], delay=35155,
delays=35155/0.03/0.02/0.09, dsn=4.3.0, status=deferred (host
mail.example.net[private/dovecot-lmtp] said: 451 4.3.0 Temporary
internal failure (in reply to end of DATA command)) Aug  3 19:34:34
46185 dovecot: lmtp(6477): Disconnect from local: Temporary internal
failure (in DATA)


postconf message_size_limit
message_size_limit = 0
postconf mail_size_limit
mailbox_size_limit = 0

I can send big attachments to an external address but not from intern to
intern. Someone said that maybe the tmp directory is the problem, so I
checked the size.

ls -ald /tmp/; df -h /tmp/
drwxrwxrwt 4 root root 80 Aug  3 19:44 /tmp/
Filesystem  Size  Used Avail Use% Mounted on
-   1,0M  4,0K 1020K   1% /tmp
root@example:/tmp#

Is there a connection between lmtp and the size of the tmp directory?
And why is the size 1,0M? tmp is not its own partition. Should I increase
the size? And if yes, how?

df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/vzfs80G   12G   69G  15% /

???

br. noctua


Re: Scalability with high density servers and proxies, TCP port limits

2015-07-02 Thread Urban Loesch

Hi,

On 03.07.2015 at 05:14, Christian Balzer wrote:



2. Here is where the fun starts.
Each IMAP session that gets proxied to the real mailbox server needs a
port for the outgoing connection.
So to support 2 million sessions we need 40 IP addresses here. Ouch.
And from a brief test having multiple IP addresses per server won't help
either (Dovecot unsurprisingly picks the main IP when establishing a
proxy session to the real mailbox), at least not with just one default GW.



If I remember correctly there is a config option in dovecot 2.x where you 
can set the IP addresses which dovecot should use for outgoing proxy 
connections. Sorry, but I can't remember the option.
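If I had to guess, the option in question is login_source_ips; treat that name as an assumption and verify it with doveconf on your version. A sketch:

```
# dovecot.conf (assumption: the half-remembered setting is login_source_ips;
# verify with "doveconf login_source_ips" before relying on it)
login_source_ips = 192.0.2.10 192.0.2.11
```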


Best
Urban


Re: Testin new installation

2015-06-14 Thread Urban Loesch

Hi,



ssl_cert = /etc/pki/dovecot/certs/tbv2015.crt

This is not correct. It should be:

ssl_cert = </etc/pki/dovecot/certs/tbv2015.crt

Regards
Urban


Re: Disk space usage with mdbox

2015-04-02 Thread Urban Loesch
Did you purge the deleted mails for this user?
On mdbox you must run doveadm purge -u $USER to wipe out any mails marked 
as deleted.

Details: http://wiki2.dovecot.org/Tools/Doveadm/Purge

I use a nightly cronjob for this.
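As a sketch, such a cron entry could look like this (path and schedule are examples; doveadm purge -A runs the purge for all users):

```
# /etc/cron.d/dovecot-purge: nightly purge of expunged mdbox mails, all users
30 3 * * * root /usr/bin/doveadm purge -A
```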

Regards
Urban

On 01.04.2015 at 23:26, Alexandros Soumplis wrote:
 Hello,
 
 I am using dovecot with mdbox+sis and I notice an ever-increasing disk space 
 usage since I converted mailboxes from Maildir to mdboxes. I have checked
 with a user and while it actually uses only 65K, his mdbox files on disk are 
 more than 6G. The backup of his mailbox is just 64K. Any suggestions ?
 
 Below are some relevant commands:
 
 [root@mail ~]# doveadm quota get -u test
 Quota name  Type     Value  Limit     %
 User quota  STORAGE  10135  31457280  0
 User quota  MESSAGE    186  -         0
 
 [root@mail ~]# du -k --max-depth=1 /mdboxes/test/
 220      /mdboxes/test/mailboxes
 6029348  /mdboxes/test/storage
 6029592  /mdboxes/test/
 
 [root@mail ~]# doveadm purge -u test
 [root@mail ~]# du -k --max-depth=1 /mdboxes/test/
 220      /mdboxes/test/mailboxes
 6029348  /mdboxes/test/storage
 6029592  /mdboxes/test/
 
 [root@mail ~]# doveadm backup -u test  mdbox:/tmp/MDBOX_TEMP/
 [root@mail ~]# du -k --max-depth=1 /tmp/MDBOX_TEMP/
 16     /tmp/MDBOX_TEMP/mailboxes
 65540  /tmp/MDBOX_TEMP/storage
 65568  /tmp/MDBOX_TEMP/
 


Panic: file istream-qp-decoder.c

2014-10-29 Thread Urban Loesch
] - /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9)
[0x7f8f7cba2fa9] - /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f8f7cba3038] - /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13)
[0x7f8f7cb4f3e3] - dovecot/imap(main+0x2a7) [0x420e97]
Oct 28 18:29:58 mailstore dovecot: imap(u...@domain.com pid:38868 
session:eAIJ/34GpADD/vzI): Fatal: master: service(imap): child 38868 killed 
with
signal 6 (core dumps disabled)


The user uses Horde Webmail 5.2 to access his mailbox and there are about 1300 
active sessions currently running on that server.

Many thanks
Urban Loesch


Re: Properly locking a useraccount (on a proxy)

2014-10-21 Thread Urban Loesch

Hi,

On 21.10.2014 at 20:37, Ralf Hildebrandt wrote:

* Ralf Hildebrandt r...@sys4.de:


2) defer LMTP delivery somehow (Postfix is talking to dovecot's LMTP server)


I could of course put a mysql: query into postfix which would return

user@domain retry:

for the locked user. But I'm lazy and would prefer a single place /
a single query to lock the account



Why don't you put the mails on hold in some frontend Postfix queue (I think 
you have one) with a check_recipient_access table? We did that during 
our last migration from an old CGP system.


OK, it's not the most elegant way, but it worked for us.
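A minimal sketch of that approach, with example file names; after editing the table, run postmap on it:

```
# main.cf: consult the hold table before other recipient restrictions
smtpd_recipient_restrictions =
    check_recipient_access hash:/etc/postfix/hold_recipients,
    permit_mynetworks,
    reject_unauth_destination

# /etc/postfix/hold_recipients: HOLD queues mail for the listed recipient
user@example.com    HOLD migration in progress
```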


Re: dovecot replication (active-active) - server specs

2014-10-09 Thread Urban Loesch

Hi,

On 09.10.2014 at 12:35, Martin Schmidt wrote:


Our MX server is delivering ca. 30 GB new mails per day.
Two IMAP proxy server get the connections from the users. Atm. without dovecot 
director.
We've got around 700k connections per day (imap 200k / pop3 500k)


Is this the total number of connections per day? How many concurrent 
connections do you have at the same time on each server?



So we want to make a new system.
We desire the new system to use mdbox format ( bigger files, less I/O)
and replication through dovecot replication (active/active) instead of drbd.


I have no experience with dovecot replication (Still on our roadmap). We 
are currently using drbd on a 10Gbit dedicated link. Works very well for us.



Each fileserver should know every mailbox/user and for the time being 2 dovecot 
proxies for the user connections (IMAP/POP).
(later after the migration from the old system to the new, dovecot director 
instead of proxies, for caching reasons).


As Florian said, enable zlib. This also decreases I/O, but needs a bit 
more CPU. Not that much, though.




we've got 2 new fileservers, they have each SSD HDDs for new-storage
and 7200rpm SATA HDDs on RAID 5 with 10 TB for alt-storage
32 GB RAM per Server


You could also move the mdbox INDEX files to separate SSDs. We are 
doing so with 40k accounts and 2 TB of user data. The index partition has 
only 22 GB used and is not growing very fast.




Do you have some tips for the system?
Do you believe 32 GB RAM are enough for one fileserver each and have you 
experience with the I/O Waiting problem with huge amounts of Data on the 
alt-storage?
Could there be issues with the RAM, if one fileserver has a downtime, so the 
second one has to take over all mailboxes for a short amount of time?


I think memory is not the problem. On IMAP/POP3 servers the main 
problem is I/O. But with dovecot mdbox and index files on SSDs we have 
no problem at the moment.




In general are only 2 new fileserver enough or should we think in bigger 
dimensions, like 4 fileserver
Storage expansion in the new servers should not be a problem (bigger HDDs and a 
few slots free, so we can expand the raid 5).
We are using hardware RAID 10 controllers with cache and SATA 7200 rpm 
disks. RAID 10 needs more disks, but is much faster than RAID 5. 
RAID 5 is not very fast in my eyes.





thank you
kind regards

Martin Schmidt



Regards
Urban


Imap: Panic: UID 13737 lost unexpectedly from INBOX

2014-09-29 Thread Urban Loesch
, no_modseq_tracking =
0, have_guids = 1, have_save_guids = 0, have_only_guid128 = 0}
ret = optimized out
sync_flags = optimized out
bbox_index_opened = optimized out
#8  virtual_sync_backend_boxes (ctx=0x261f200) at virtual-sync.c:1444
bboxes = 0x6
i = optimized out
count = optimized out
#9  virtual_sync (flags=0, mbox=0x2611de0) at virtual-sync.c:1542
ctx = optimized out
index_sync_flags = optimized out
ret = optimized out
#10 virtual_storage_sync_init (box=0x2611de0, flags=0) at virtual-sync.c:1562
mbox = 0x2611de0
sync_ctx = optimized out
ret = 0
#11 0x7f14d05982a3 in mailbox_sync_init (box=box@entry=0x2611de0, 
flags=flags@entry=0) at mail-storage.c:1678
_data_stack_cur_id = 4
ctx = optimized out
#12 0x0041f92a in imap_sync_init (client=0x2567f40, box=optimized 
out, imap_flags=imap_flags@entry=0, flags=flags@entry=0) at imap-sync.c:230
ctx = 0x25ff630
__FUNCTION__ = imap_sync_init
#13 0x0041032e in idle_sync_now (box=optimized out, ctx=0x2568b40) at 
cmd-idle.c:146
No locals.
#14 0x00410531 in idle_callback (box=optimized out, ctx=optimized 
out) at cmd-idle.c:158
client = 0x2567f40
#15 0x7f14d05bb77e in check_timeout (box=0x267d320) at 
index-mailbox-check.c:51
ibox = optimized out
file = 0x0
st = {st_dev = 37633, st_ino = 21496482, st_nlink = 1, st_mode = 33152, 
st_uid = 1001, st_gid = 1001, __pad0 = 0, st_rdev = 0, st_size = 2500,
st_blksize = 4096, st_blocks = 8, st_atim = {tv_sec = 1411943094,
tv_nsec = 172171250}, st_mtim = {tv_sec = 1411972876, tv_nsec = 
150792601}, st_ctim = {tv_sec = 1411972876, tv_nsec = 150792601}, __unused
= {0, 0, 0}}
notify = true
#16 0x7f14d02bbfa6 in io_loop_handle_timeouts_real (ioloop=0x2543740) at 
ioloop.c:410
timeout = 0x261bdd0
item = 0x261bdd0
tv = {tv_sec = 0, tv_usec = 0}
tv_call = {tv_sec = 1411972876, tv_usec = 171667}
t_id = 3
#17 io_loop_handle_timeouts (ioloop=ioloop@entry=0x2543740) at ioloop.c:423
---Type return to continue, or q return to quit---
_data_stack_cur_id = 2
#18 0x7f14d02bcd63 in io_loop_handler_run_internal 
(ioloop=ioloop@entry=0x2543740) at ioloop-epoll.c:193
ctx = 0x25443d0
events = 0x0
event = 0x11a
list = optimized out
io = optimized out
tv = {tv_sec = 0, tv_usec = 281628}
events_count = 5
msecs = 282
ret = 0
i = optimized out
call = optimized out
__FUNCTION__ = io_loop_handler_run_internal
#19 0x7f14d02bbe09 in io_loop_handler_run (ioloop=ioloop@entry=0x2543740) 
at ioloop.c:488
No locals.
#20 0x7f14d02bbe88 in io_loop_run (ioloop=0x2543740) at ioloop.c:465
__FUNCTION__ = io_loop_run
#21 0x7f14d0268d03 in master_service_run (service=0x25435d0, 
callback=callback@entry=0x420cd0 client_connected) at master-service.c:566
No locals.
#22 0x0040c238 in main (argc=1, argv=0x2543390) at main.c:410
set_roots = {0x428900, 0x0}
login_set = {auth_socket_path = 0x253b048 \001, postlogin_socket_path 
= 0x0, postlogin_timeout_secs = 60, callback = 0x420b60
login_client_connected, failure_callback = 0x420870 login_client_failed, 
request_auth_token = 1}
service_flags = optimized out
storage_service_flags = optimized out
username = 0x0
c = optimized out

When did the error happen?
I was connected with two clients to the same account:
- Thunderbird
- Horde 5 Webmail

I deleted the mail with the UID 13737 within Horde Webmail.

Logs for this operation:
The email was deleted by the Horde Webmail:
Sep 29 08:40:30 mailstoreul. dovecot: imap(sys@domain pid:32487 
session:Xvn5ii4EQwDD/uH4): flag_change: box=INBOX, uid=13737,
msgid=20140928233719.96BAAE842@bkp-eloma, size=3613, from=sys@domain (Cron 
Daemon)
Sep 29 08:40:43 mailstoreul. dovecot: imap(sys@domain pid:33205 
session:/JzHiy4EdgDD/uH4): expunge: box=INBOX, uid=13737,
msgid=20140928233719.96BAAE842@bkp-eloma, size=3613, from=sys@domain (Cron 
Daemon)

The still-open Thunderbird session got the error and panicked (detailed 
logs above):
Sep 29 08:41:16 mailstoreul. dovecot: imap(sys@domain pid:15160 
session:4ccaeS4EYgDD/uGI): Panic: UID 13737 lost unexpectedly from INBOX

Thanks and regards
Urban Loesch


Re: sieve redirect to foreign email gets “Relay access denied”

2014-09-23 Thread Urban Loesch
Hi,

I'm not sure, but could it be that you are missing permit_mynetworks in 
smtpd_recipient_restrictions?
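For illustration, that would mean having permit_mynetworks early in the list, so mail re-injected from the local host (as sieve redirects are) is allowed to relay; a sketch, not a drop-in config:

```
# main.cf: let local/trusted hosts relay before the reject rules apply
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```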

Regards
Urban


On 22.09.2014 at 22:36, Henry Stack wrote:
 I have a postfix mail server with sql authentication and I want to implement 
 sieve on it.
 
 Sieve is working relatively well; rules which contain 'fileinto' are executed 
 perfectly.
 The problem is the redirect to other servers.
 I configured a rule in Sieve to redirect any email containing redirect in 
 subject to a specified foreign destination. #
 So practically an email coming from sen...@live.de for the local user 
 testu...@server.net should be redirected to destinat...@gmail.com when the
 subject contains redirect
 
if header :contains [subject] [redirect] {redirect
destinat...@gmail.com; stop;}
 
 when I test it I get the following log entry
 
/postfix/smtpd[32114]: NOQUEUE: reject: RCPT from
mail.server.net[xx.xx.xx.xx]: 554 5.7.1 destinat...@gmail.com:
Relay access denied; from=sen...@live.de
to=destinat...@gmail.com proto=ESMTP helo=mail.server.net/
 
 How can I tell postfix to let dovecot/sieve relay the email?
 
 can somebody give a hint?
 
 postconf -n
 
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
broken_sasl_auth_clients = yes
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
default_process_limit = 15
disable_vrfy_command = yes
dovecot_destination_recipient_limit = 1
home_mailbox = mail/
inet_interfaces = all
mailbox_size_limit = 0
mydestination = mail.server.net, localhost
myhostname = mail.server.net
mynetworks = 127.0.0.0/8 [:::127.0.0.0]/104 [::1]/128
myorigin = /etc/mailname
readme_directory = no
recipient_delimiter = +
smtp_tls_note_starttls_offer = yes
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_use_tls = yes
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
smtpd_data_restrictions = reject_unauth_pipelining
smtpd_helo_restrictions = reject_unknown_helo_hostname
smtpd_recipient_restrictions = permit_sasl_authenticated,
reject_unknown_sender_domain,
reject_unknown_reverse_client_hostname,
reject_unknown_recipient_domain, reject_unverified_recipient,
reject_unauth_destination, reject_rbl_client zen.spamhaus.org,
reject_rhsbl_helo dbl.spamhaus.org, reject_rhsbl_sender
dbl.spamhaus.org, check_policy_service inet:127.0.0.1:10023
smtpd_sasl_auth_enable = yes
smtpd_sasl_authenticated_header = yes
smtpd_sasl_local_domain = $myhostname
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_sender_restrictions = permit_sasl_authenticated,
permit_mynetworks, reject_authenticated_sender_login_mismatch,
reject_unknown_sender_domain
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_loglevel = 2
smtpd_tls_received_header = yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_use_tls = yes
soft_bounce = no
virtual_alias_domains =
mysql:/etc/postfix/mysql_virtual_alias_domains.cf
virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf
virtual_mailbox_base = /var/vmail
virtual_mailbox_domains =
mysql:/etc/postfix/mysql_virtual_domains_maps.cf
virtual_mailbox_limit = 51200
virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
virtual_transport = dovecot
 
 dovecot -n
 
# 2.1.7: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-4-amd64 x86_64 Debian 7.6
auth_debug_passwords = yes
auth_mechanisms = plain login
auth_verbose = yes
auth_verbose_passwords = plain
debug_log_path = /var/log/dovecot/dovecot.debug.log
disable_plaintext_auth = no
first_valid_gid = 99
first_valid_uid = 99
hostname = maxi.zp1.net
info_log_path = /var/log/mail.info
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = xxx.xxx.xxx.xxx
log_path = /var/log/dovecot/dovecot.log
login_greeting = Dovecot ready, Sir.
mail_debug = yes
mail_gid = 99
mail_location = maildir:~/mail:LAYOUT=fs:INBOX=/var/vmail/%u/mail/
mail_plugins = acl
mail_uid = 99
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
namespace {
   location = maildir:/var/mail/public
   prefix = Public/
   separator = /
   subscriptions = no
   type = public
}
namespace inbox {
   inbox = yes
   location =
   mailbox Drafts {
 special_use = \Drafts
   }

Problem with virtual folders

2014-09-11 Thread Urban Loesch
 virtual sieve zlib
}
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  imap_id_log = *
  imap_logout_format = bytes=%i/%o session=%{session}
  mail_max_userip_connections = 40
  mail_plugins =  quota mail_log notify acl zlib stats virtual imap_quota 
imap_acl imap_zlib imap_stats
}
protocol pop3 {
  mail_plugins =  quota mail_log notify acl zlib stats virtual
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, size=%s 
uidl_hash=%u session=%{session}
}
...

The virtual folders are stored in /home/virtual/XXX and contain only the 
file dovecot-virtual.

Like: /home/virtual/All/dovecot-virtual:
-
*
  all
-

Note:
I just have active virtual folders on a different dovecot server version 
2:2.2.13-1~auto+74.
I copied the configuration from this server. The only three differences between 
the two servers are:

- Server version is different.
- The prefix of the default namespace on the new server is prefix = INBOX/ 
and not prefix =
- Mail storage and index files are seperated in different folders on the new 
server.

Here is the relevant namespace configuration from the server where it is 
working fine:
...
namespace {
  list = children
  location = mdbox:/home/vmail/%%d/%%n
  prefix = shared/%%u/
  separator = /
  subscriptions = no
  type = shared
}
namespace {
  hidden = no
  inbox = no
  list = children
  location = virtual:/home/virtual:INDEX=~/virtual
  prefix = [rolmail]/
  separator = /
  subscriptions = yes
  type = private
}
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Items {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
  separator = /
  type = private
}
...

Do you have any hint for me on how I can fix my problem?

Thanks and regards
Urban Loesch


Re: Set an archive folder for every user

2014-08-05 Thread Urban Loesch
Hi,

you can try the ALT Storage.

http://wiki2.dovecot.org/MailboxFormat/dbox

Scroll down to Alternate storage.
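A sketch of what this could look like with mdbox (paths are examples): the ALT part of mail_location points at the HDD, and doveadm altmove moves old mails there.

```
# dovecot.conf: primary mdbox storage on the SSD, alternate storage on the HDD
mail_location = mdbox:~/mdbox:ALT=/hdd/vmail/%u/mdbox

# run periodically, e.g. from cron: move mails saved more than 30 days ago
#   doveadm altmove -A savedbefore 30d
```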

Regards,
Urban

On 05.08.2014 at 10:04, Felix Rubio Dalmau wrote:
 Hi everybody,
 
   I have a running postfix+dovecot installation, running flawlessly. The 
 machine this setup is running onto has 2 mirrored SSD disks (in which dovecot 
 stores the mails) and 2 mirrored regular HD. I'd like to keep the fresh 
 emails in the SSD, and move them to an Archive folder after some days/weeks. 
 Is there any way in dovecot to set up a folder for every user, that points to 
 an external disk rather than the default one?
 
   Thank you,
   Felix
 


Re: Fatal: master: service(imap): child 20258 killed with signal 6 (core not dumped - set service imap { drop_priv_before_exec=yes })

2014-07-10 Thread Urban Loesch

Hi,

Not sure if that helps.

In the 10-master.conf file there is a service imap { ... } section.
You could try to increase the process_limit parameter in it.

On one of our servers we have process_limit = 2048 and about 1200 
concurrent connections without problems.
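For illustration, the relevant fragment in conf.d/10-master.conf (the value is an example; size it to your expected concurrent IMAP connections):

```
service imap {
  # one imap process per connection by default, so this caps connections
  process_limit = 2048
}
```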


Best,
Urban


On 10.07.2014 at 20:33, CJ Keist wrote:

It's not fixed. Now the limit looks to be around 500 processes and we
start to get number of connections exceeded.  Any ideas?



On 7/10/14, 10:35 AM, CJ Keist wrote:

I fixed this issue about the process limit in the 10-master.conf file:

default_process_limit = 5000
default_client_limit = 3


On 7/10/14, 10:03 AM, CJ Keist wrote:

It looks like on the system that once we hit around 200 imap processes
it stops there and no more imap processes can be created.  Is there a
number of max imap processes in the config file somewhere.  By the way
running on OmniOS:

SunOS mail2 5.11 omnios-6de5e81 i86pc i386 i86pc



On 7/10/14, 9:50 AM, CJ Keist wrote:

Thanks for the reply. I have seen threads about setting 
mail_max_userip_connections; I have set this to 5000 and people are still 
getting the exceeded-connections error.


root@mail2:/userM/mail-services/dovecot/sbin# ./dovecot -n
# 2.2.13: /userM/mail-services/dovecot/etc/dovecot/dovecot.conf
# OS: SunOS 5.11 i86pc
auth_failure_delay = 5 secs
auth_mechanisms = plain login cram-md5
auth_worker_max_count = 3000
base_dir = /userM/mail-services/dovecot/var/run/dovecot/
disable_plaintext_auth = no
hostname = mail2.engr.colostate.edu
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave duplicate
namespace inbox {
   inbox = yes
   location =
   mailbox Drafts {
 special_use = \Drafts
   }
   mailbox Junk {
 special_use = \Junk
   }
   mailbox Sent {
 special_use = \Sent
   }
   mailbox Sent Messages {
 special_use = \Sent
   }
   mailbox Trash {
 special_use = \Trash
   }
   prefix =
}
passdb {
   driver = pam
}
passdb {
   driver = passwd
}
postmaster_address = c...@engr.colostate.edu
service auth {
   unix_listener /var/lib/postfix/private/auth {
 mode = 0666
   }
   unix_listener auth-userdb {
 group = postfix
 mode = 0666
 user = postfix
   }
   user = root
}
service imap-login {
   inet_listener imap {
 port = 143
   }
   inet_listener imaps {
 port = 993
 ssl = yes
   }
}
service pop3-login {
   inet_listener pop3 {
 port = 110
   }
   inet_listener pop3s {
 port = 995
 ssl = yes
   }
}
ssl_cert = /userM/mail-services/dovecot/etc/ssl/dovecot.pem
ssl_key = /userM/mail-services/dovecot/etc/ssl/privkey.pem
userdb {
   args = blocking=yes
   driver = passwd
}
protocol imap {
   mail_max_userip_connections = 5000
}
protocol lda {
   mail_plugins = sieve
}


On 7/10/14, 9:45 AM, Reindl Harald wrote:



Am 10.07.2014 17:32, schrieb CJ Keist:

Another problem is people are getting error message from their
clients stating
they have exceeded their number of connections.


mail_max_userip_connections = 50

well, how many folders do they have?

keep in mind that for IDLE you have one connection per user and folder:
10 users with 10 folders behind the same NAT router are 100 connections
from the same IP


On 7/10/14, 9:09 AM, CJ Keist wrote:

Added info:  These errors seem to come from users using mbox format.


On 7/10/14, 9:04 AM, CJ Keist wrote:

All,
Just move our mail servers over to a new mail server running
postfix
2.11.1 and dovecot 2.2.13 and getting the subject line errors in my
/var/adm/files.  People are complaining of loosing their
connections to
the mail server.

I've been able to google this error but haven't found a fix for it
yet.
Not sure where to put the drop-priv option in the config files
either.

Any suggestions?

Var adm message:
Jul 10 08:54:29 mail2 dovecot: [ID 583609 mail.crit] imap(chen):
Fatal:
master: service(imap): child 20258 killed with signal 6 (core not
dumped
- set service imap { drop_priv_before_exec=yes })

Here is config output:

root@mail2:/userM/mail-services/dovecot/sbin# ./dovecot -n
# 2.2.13: /userM/mail-services/dovecot/etc/dovecot/dovecot.conf
# OS: SunOS 5.11 i86pc
auth_failure_delay = 5 secs
auth_mechanisms = plain login cram-md5
auth_worker_max_count = 300
base_dir = /userM/mail-services/dovecot/var/run/dovecot/
disable_plaintext_auth = no
hostname = mail2.engr.colostate.edu
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave duplicate
namespace inbox {
inbox = yes
location =
mailbox Drafts {
  special_use = \Drafts
}
mailbox Junk {
  special_use = \Junk
}
mailbox Sent {
  special_use = 

Re: [UPDATE]: Another Crash in service imap with version 2.2.13 - Debian Wheezy

2014-07-03 Thread Urban Loesch

Hi,

On 03.07.2014 at 18:58, Timo Sirainen wrote:

On 26.6.2014, at 16.10, Urban Loesch b...@enas.net wrote:




The client is trying to use COPY or MOVE command, but the copying fails for 
some reason and the cleanup code crashes. I can't reproduce this though, so 
would be helpful to know what exactly it's doing. So getting the gdb output for 
these commands (instead of just bt full) would help:

p *ctx
p *ctx.dest_mail
f 7
p (*_ctx).transaction.box.vname
p (*_ctx).transaction.box.storage.error_string
p mail.box.vname



After some days the customer began to complain that he was not able to 
read his mails with his iPhone.


So I deleted the account and reinstalled it on the iPhone; 
since then it works without error.


Unfortunately I have now switched to the latest hg version, so I no longer 
have the affected imap binary.


But I have the core file from the latest crash. If you like, I can 
send it to you off-list.


Many Thanks
Urban


Re: Crash in service imap with version 2.2.13

2014-07-01 Thread Urban Loesch
/mailboxes/INBOX, 
{st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
16:01:52.686691 
open(/home/dovecotindex/domain.net/user/mailboxes/INBOX/dovecot.index.log, 
O_RDWR|O_APPEND) = 4
16:01:52.686720 fstat(4, {st_mode=S_IFREG|0600, st_size=3984, ...}) = 0
16:01:52.686750 pread(4, 
\1\2(\0$\352\16N{\0\0\0z\0\0\0\304\200\0\0\t\362\261S\0\0\0\0\0\0\0\0..., 
3984, 0) = 3984
16:01:52.686786 
open(/home/dovecotindex/domain.net/user/mailboxes/INBOX/dovecot.index, 
O_RDWR) = 12
16:01:52.686821 fstat(12, {st_mode=S_IFREG|0600, st_size=516, ...}) = 0
16:01:52.686850 pread(12, 
\7\2x\0(\1\0\0,\0\0\0\1\0\0\0$\352\16N\0\0\0\0\352\16N\5(\0\0..., 8192, 0) = 
516
16:01:52.686887 fstat(4, {st_mode=S_IFREG|0600, st_size=3984, ...}) = 0
16:01:52.686968 
stat(/home/dovecotindex/domain.net/user/mailboxes/INBOX/dovecot.index.log, 
{st_mode=S_IFREG|0600, st_size=3984, ...}) = 0
16:01:52.687002 fstat(4, {st_mode=S_IFREG|0600, st_size=3984, ...}) = 0
16:01:52.687056 setsockopt(7, SOL_TCP, TCP_CORK, [1], 4) = 0
16:01:52.687079 write(7, * FLAGS (\\Answered \\Flagged \\Del..., 261) = 261
16:01:52.687105 setsockopt(7, SOL_TCP, TCP_CORK, [0], 4) = 0
16:01:52.687207 epoll_wait(11, {{EPOLLIN, {u32=15105616, u64=15105616}}}, 5, 
180) = 1
16:02:04.438436 read(7, 0006 UID THREAD REFERENCES U..., 8085) = 45
16:02:04.438728 
open(/home/dovecotindex/domain.net/user/mailboxes/INBOX/dovecot.index.thread, 
O_RDWR) = 13
16:02:04.438808 fstat(13, {st_mode=S_IFREG|0600, st_size=136, ...}) = 0
16:02:04.438921 pread(13, \1\0\0\0\352\16N\200\200\200\276\242M\0\1 
\343z\1\0\0\0a\0\235\204\246\373\2\0\0..., 8192, 0) = 136
...

Deleting the 
/home/dovecotindex/domain.net/user/mailboxes/INBOX/dovecot.index.thread file 
resolves the problem, but I'm not sure if this is the correct solution.
I mean, do I have to delete all dovecot.index.thread files on my servers
after upgrading to Dovecot 2.2.13? And I can't say whether the problem
will come back.
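For reference, a sketch of removing the stale thread caches in bulk (the index root path matches this setup; Dovecot rebuilds dovecot.index.thread on the next THREAD command, but test on a single account first):

```shell
# Delete cached thread-index files below the index root; they are caches
# and are regenerated on demand. Override INDEX_ROOT to point elsewhere.
index_root="${INDEX_ROOT:-/home/dovecotindex}"
if [ -d "$index_root" ]; then
    find "$index_root" -type f -name 'dovecot.index.thread' -delete
fi
```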

Are there some changes between versions 2.1.15 and 2.2.13 which affect the 
dovecot indexes?
I can't find anything about this in the wiki.

As I said, the problem only happens with Horde Webmail.

Thanks
Urban


On 24.06.2014 at 10:40, Urban Loesch wrote:
 
 Hi,
 
 yesterday  I upgraded to version 2.2.13 under Debian Squeeze.
 
 Since this morning, my logfile sometimes shows the following error:
 
 ..
 Jun 24 10:14:16 mailstore dovecot: imap(u...@domain.net pid:23434 
 session:jf6yi5D8TADD/vzh): Fatal: master: service(imap): child 23434 killed 
 with
 signal 11 (core dumped)
 ...
 
 The kernel error log shows:
 ...
 Jun 24 10:14:16 mailstore kernel: [13959701.899726] imap[23434]: segfault at 
 1012acec0 ip 7f7dd89b5e52 sp 7d33d9b0 error 4 in
 libdovecot-storage.so.0.0.0[7f7dd88ed000+10d000]
 ...
 
 This seems only to happen in conjunction with Horde Webmail. Other IMAP 
 clients aren't affected.
 
 I made a backtrace:
 
 - start backtrace -
 Core was generated by `dovecot/imap'.
 Program terminated with signal 11, Segmentation fault.
 #0  mail_index_strmap_uid_exists (ctx=0x7d33d9f0, uid=8442) at 
 mail-index-strmap.c:395
 395   mail-index-strmap.c: No such file or directory.
   in mail-index-strmap.c
 (gdb) bt full
 #0  mail_index_strmap_uid_exists (ctx=0x7d33d9f0, uid=8442) at 
 mail-index-strmap.c:395
 rec = 0x1012acec0
 #1  0x7f7dd89b79ab in mail_index_strmap_view_renumber (_sync=value 
 optimized out) at mail-index-strmap.c:842
 ctx = {view = 0x12b2d80, input = 0x0, end_offset = 0, highest_str_idx 
 = 0, uid_lookup_seq = 0, lost_expunged_uid = 0, data = 0x0, end = 0x0,
 str_idx_base = 0x0, rec = {uid = 0, ref_index = 0, str_idx = 0}, 
 next_ref_index = 0,
   rec_size = 0, too_large_uids = 0}
 str_idx = 0
 count = 1
 ret = value optimized out
 prev_uid = 8442
 i = 0
 dest = 0
 count2 = value optimized out
 #2  mail_index_strmap_write (_sync=value optimized out) at 
 mail-index-strmap.c:1189
 ret = value optimized out
 #3  mail_index_strmap_view_sync_commit (_sync=value optimized out) at 
 mail-index-strmap.c:1236
 sync = value optimized out
 view = value optimized out
 #4  0x7f7dd899fc23 in mail_thread_index_map_build (box=value optimized 
 out, args=value optimized out, ctx_r=value optimized out) at
 index-thread.c:332
 last_uid = 8442
 search_ctx = value optimized out
 mail = value optimized out
 seq1 = 0
 tbox = 0x12af2e0
 headers_ctx = 0x12b7e50
 search_args = value optimized out
 seq2 = value optimized out
 wanted_headers = {0x7f7dd89d8542 message-id, 0x7f7dd89d9f96 
 in-reply-to, 0x7f7dd89d9fa2 references, 0x0}
 #5  mail_thread_init (box=value optimized out, args=value optimized out, 
 ctx_r=value optimized out) at index-thread.c:569
 tbox = 0x12af2e0
 ctx = 0x12afc10
 search_ctx = 0x12b2b20
 ret = value optimized out
 __FUNCTION__ = mail_thread_init
 #6

Sieve seems to break mailbody during automatic redirection

2014-06-30 Thread Urban Loesch
Hi,

I have a strange problem with sieve.
After upgrading to 2.2.13 sieve seems to break the mailbody during automatic 
redirection.

I have the following configuration.

- User A sends mail to User B.
- User B has an automatic redirect to User C
- User C gets the mail with a broken body

I did some debugging.


This is a part of the mailbody which I grabbed from the mailqueue before the 
mail was delivered to user B:

...
Message-ID: 53b12105.2020...@domain.net
Date: Mon, 30 Jun 2014 10:34:13 +0200
From: =?ISO-8859-15?Q?Urban_L=F6sch_Enas?= us...@domain.net
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 
Thunderbird/24.2.0
MIME-Version: 1.0
To: Urban Loesch us...@domain.net
Subject: testmail 5
X-Enigmail-Version: 1.6
Content-Type: multipart/mixed;
 boundary=040308070600090201000704

This is a multi-part message in MIME format.
--040308070600090201000704
Content-Type: text/plain; charset=ISO-8859-15
Content-Transfer-Encoding: 7bit


--040308070600090201000704
Content-Type: application/rtf;
 name=elenco_siti_inibiti.2.rtf
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename=elenco_siti_inibiti.2.rtf

e1xydGYxXGFkZWZsYW5nMTAyNVxhbnNpXGFuc2ljcGcxMjUyXHVjMVxhZGVmZjBcZGVmZjBc
c3RzaGZkYmNoMFxzdHNoZmxvY2gwXHN0c2hmaGljaDBcc3RzaGZiaTBcZGVmbGFuZzEwNDBc
ZGVmbGFuZ2ZlMTA0MFx0aGVtZWxhbmcxMDQwXHRoZW1lbGFuZ2ZlMFx0aGVtZWxhbmdjczB7
XGZvbnR0Ymx7XGYwXGZiaWRpIFxmcm9tYW5cZmNoYXJzZXQwXGZwcnEye1wqXHBhbm9zZSAw
MjAyMDYwMzA1MDQwNTAyMDMwNH1UaW1lcyBOZXcgUm9tYW47fQ0Ke1xmMVxmYmlkaSBcZnN3
aXNzXGZjaGFyc2V0MFxmcHJxMntcKlxwYW5vc2UgMDIwYjA2MDQwMjAyMDIwMjAyMDR9QXJp
YWx7XCpcZmFsdCBBcmlhbH07fXtcZjJcZmJpZGkgXGZtb2Rlcm5cZmNoYXJzZXQwXGZwcnEx
e1wqXHBhbm9zZSAwMjA3MDMwOTAyMDIwNTAyMDQwNH1Db3VyaWVyIE5ldzt9e1xmM1xmYmlk
aSBcZnJvbWFuXGZjaGFyc2V0MlxmcHJxMntcKlxwYW5vc2UgMDUwNTAxMDIwMTA3MDYwMjA1
MDd9U3ltYm9sO30NCntcZjRcZmJpZGkgXGZzd2lzc1xmY2hhcnNldDBcZnBycTJ7XCpccGFu
b3NlIDAyMGIwNjA0MDIwMjAyMDIwMjA0fUhlbHZldGljYTt9e1xmNVxmYmlkaSBcZm1vZGVy
blxmY2hhcnNldDBcZnBycTF7XCpccGFub3NlIDAyMDcwNDA5MDIwMjA1MDIwNDA0fUNvdXJp
ZXI7fXtcZjZcZmJpZGkgXGZyb21hblxmY2hhcnNldDBcZnBycTJ7XCpccGFub3NlIDAyMDIw
NjAzMDQwNTA1MDIwMzA0fVRtcyBSbW57XCpcZmFsdCBUaW1lcyBOZXcgUm9tYW59O30NCntc
ZjdcZmJpZGkgXGZzd2lzc1xmY2hhcnNldDBcZnBycTJ7XCpccGFub3NlIDAyMGIwNjA0MDIw
MjAyMDMwMjA0fUhlbHY7fXtcZjhcZmJpZGkgXGZyb21hblxmY2hhcnNldDBcZnBycTJ7XCpc
cGFub3NlIDAyMDQwNTAzMDYwNTA2MDIwMzA0fU5ldyBZb3JrO317XGY5XGZiaWRpIFxmc3dp
c3NcZmNoYXJzZXQwXGZwcnEye1wqXHBhbm9zZSAwMDAwMDAwMDAwMDAwMDAwMDAwMH1TeXN0
ZW07fQ0Ke1xmMTBcZmJpZGkgXGZuaWxcZmNoYXJzZXQyXGZwcnEye1wqXHBhbm9zZSAwNTAw
MDAwMDAwMDAwMDAwMDAwMH1XaW5nZGluZ3M7fXtcZjExXGZiaWRpIFxmbW9kZXJuXGZjaGFy
c2V0MTI4XGZwcnExe1wqXHBhbm9zZSAwMjAyMDYwOTA0MDIwNTA4MDMwNH1NUyBNaW5jaG97
XCpcZmFsdCA/bD9yID8/XCc4MVwnNjZjfTt9DQp7XGYxMlxmYmlkaSBcZnJvbWFuXGZjaGFy
c2V0MTI5XGZwcnEye1wqXHBhbm9zZSAwMjAzMDYwMDAwMDEwMTAxMDEwMX1CYXRhbmd7XCpc
ZmFsdCA/Pz8/P0U/Pz8/P0VjRT8/Pz8/RT8/Y0VjRT8/Pz8/fTt9e1xmMTNcZmJpZGkgXGZu
aWxcZmNoYXJzZXQxMzRcZnBycTJ7XCpccGFub3NlIDAyMDEwNjAwMDMwMTAxMDEwMTAxfVNp
bVN1bntcKlxmYWx0ID8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz99O30NCntcZjE0
XGZiaWRpIFxmcm9tYW5cZmNoYXJzZXQxMzZcZnBycTJ7XCpccGFub3NlIDAyMDIwNTAwMDAw
MDAwMDAwMDAwfVBNaW5nTGlVe1wqXGZhbHQgIVBzMk9jdUFlfTt9e1xmMTVcZmJpZGkgXGZt
b2Rlcm5cZmNoYXJzZXQxMjhcZnBycTF7XCpccGFub3NlIDAyMGIwNjA5MDcwMjA1MDgwMjA0
fU1TIEdvdGhpY3tcKlxmYWx0ID9sP3IgP1M/Vj9iP059O30NCntcZjE2XGZiaWRpIFxmc3dp
c3NcZmNoYXJzZXQxMjlcZnBycTJ7XCpccGFub3NlIDAyMGIwNjAwMDAwMTAxMDEwMTAxfURv
dHVte1wqXGZhbHQgPz8/Pz9FPz9jRT8/Pz8/RWNFPz8/Pz9FPz8/Pz9FY307fXtcZjE3XGZi
aWRpIFxmbW9kZXJuXGZjaGFyc2V0MTM0XGZwcnExe1wqXHBhbm9zZSAwMjAxMDYwOTA2MDEw
MTAxMDEwMX1TaW1IZWl7XCpcZmFsdCBvPz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/
fTt9DQp7XGYxOFxmYmlkaSBcZm1vZGVyblxmY2hhcnNldDEzNlxmcHJxMXtcKlxwYW5vc2Ug
...

Looks normal to me.

Now:
This is the part of the mailbody which I grabbed from the mailqueue after user 
B received the mail and sieve injected
it back into the mailqueue for delivery to user C.

...
Message-ID: 53b12105.2020...@domain.net
Date: Mon, 30 Jun 2014 10:34:13 +0200
From: =?ISO-8859-15?Q?Urban_L=F6sch_Enas?= us...@domain.net
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 
Thunderbird/24.2.0
MIME-Version: 1.0
To: Urban Loesch us...@domain.net
Subject: testmail 5
X-Enigmail-Version: 1.6
Content-Type: multipart/mixed;
 boundary=040308070600090201000704

This is a multi-part message in MIME format.^M
--040308070600090201000704^M
Content-Type: text/plain; charset=ISO-8859-15^M
Content-Transfer-Encoding: 7bit^M
^M
^M
--040308070600090201000704^M
Content-Type: application/rtf;^M
 name=elenco_siti_inibiti.2.rtf^M
Content-Transfer-Encoding: base64^M
Content-Disposition: attachment;^M
 filename=elenco_siti_inibiti.2.rtf^M
^M
e1xydGYxXGFkZWZsYW5nMTAyNVxhbnNpXGFuc2ljcGcxMjUyXHVjMVxhZGVmZjBcZGVmZjBc^M
c3RzaGZkYmNoMFxzdHNoZmxvY2gwXHN0c2hmaGljaDBcc3RzaGZiaTBcZGVmbGFuZzEwNDBc^M
ZGVmbGFuZ2ZlMTA0MFx0aGVtZWxhbmcxMDQwXHRoZW1lbGFuZ2ZlMFx0aGVtZWxhbmdjczB7^M

Re: Sieve seems to break mailbody during automatic redirection

2014-06-30 Thread Urban Loesch
Hi,

short update.
I found out that I don't have this problem on Debian Wheezy. The problem 
seems to occur only on
Debian Squeeze.

I had configured Dovecot to use /usr/bin/sendmail.
There seems to be some problem with the Squeeze version of sendmail. The only 
strange thing is that it worked without any problems until the upgrade from 
Dovecot 2.1.15 to 2.2.13.

I changed to submission_host = localhost:25 in 15-lda.conf and now it works 
correctly. That seems better than using the sendmail binary.
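
For reference, a minimal sketch of that 15-lda.conf change (only the relevant lines; the commented-out sendmail_path is the default it replaces):

```
# conf.d/15-lda.conf (sketch)

# Hand redirected/forwarded mail to the local MTA over SMTP instead of
# executing the sendmail binary directly.
submission_host = localhost:25

# No longer used once submission_host is set:
#sendmail_path = /usr/sbin/sendmail
```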

Many thanks
Urban Loesch

Am 30.06.2014 11:09, schrieb Urban Loesch:
 Hi,
 
 I have a strange problem with sieve.
 After upgrading to 2.2.13 sieve seems to break the mailbody during automatic 
 redirection.
 
 I have the following configuration.
 
 - User A sends mail to User B.
 - User B has an automatic redirect to User C
 - User C gets the mail with a broken body
 
 I did some debugging.
 
 
 This is a part of the mailbody which I grabbed from the mailqueue before the 
 mail was delivered to user B:
 
 ...
 Message-ID: 53b12105.2020...@domain.net
 Date: Mon, 30 Jun 2014 10:34:13 +0200
 From: =?ISO-8859-15?Q?Urban_L=F6sch_Enas?= us...@domain.net
 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 
 Thunderbird/24.2.0
 MIME-Version: 1.0
 To: Urban Loesch us...@domain.net
 Subject: testmail 5
 X-Enigmail-Version: 1.6
 Content-Type: multipart/mixed;
  boundary=040308070600090201000704
 
 This is a multi-part message in MIME format.
 --040308070600090201000704
 Content-Type: text/plain; charset=ISO-8859-15
 Content-Transfer-Encoding: 7bit
 
 
 --040308070600090201000704
 Content-Type: application/rtf;
  name=elenco_siti_inibiti.2.rtf
 Content-Transfer-Encoding: base64
 Content-Disposition: attachment;
  filename=elenco_siti_inibiti.2.rtf
 
 e1xydGYxXGFkZWZsYW5nMTAyNVxhbnNpXGFuc2ljcGcxMjUyXHVjMVxhZGVmZjBcZGVmZjBc
 c3RzaGZkYmNoMFxzdHNoZmxvY2gwXHN0c2hmaGljaDBcc3RzaGZiaTBcZGVmbGFuZzEwNDBc
 ZGVmbGFuZ2ZlMTA0MFx0aGVtZWxhbmcxMDQwXHRoZW1lbGFuZ2ZlMFx0aGVtZWxhbmdjczB7
 XGZvbnR0Ymx7XGYwXGZiaWRpIFxmcm9tYW5cZmNoYXJzZXQwXGZwcnEye1wqXHBhbm9zZSAw
 MjAyMDYwMzA1MDQwNTAyMDMwNH1UaW1lcyBOZXcgUm9tYW47fQ0Ke1xmMVxmYmlkaSBcZnN3
 aXNzXGZjaGFyc2V0MFxmcHJxMntcKlxwYW5vc2UgMDIwYjA2MDQwMjAyMDIwMjAyMDR9QXJp
 YWx7XCpcZmFsdCBBcmlhbH07fXtcZjJcZmJpZGkgXGZtb2Rlcm5cZmNoYXJzZXQwXGZwcnEx
 e1wqXHBhbm9zZSAwMjA3MDMwOTAyMDIwNTAyMDQwNH1Db3VyaWVyIE5ldzt9e1xmM1xmYmlk
 aSBcZnJvbWFuXGZjaGFyc2V0MlxmcHJxMntcKlxwYW5vc2UgMDUwNTAxMDIwMTA3MDYwMjA1
 MDd9U3ltYm9sO30NCntcZjRcZmJpZGkgXGZzd2lzc1xmY2hhcnNldDBcZnBycTJ7XCpccGFu
 b3NlIDAyMGIwNjA0MDIwMjAyMDIwMjA0fUhlbHZldGljYTt9e1xmNVxmYmlkaSBcZm1vZGVy
 blxmY2hhcnNldDBcZnBycTF7XCpccGFub3NlIDAyMDcwNDA5MDIwMjA1MDIwNDA0fUNvdXJp
 ZXI7fXtcZjZcZmJpZGkgXGZyb21hblxmY2hhcnNldDBcZnBycTJ7XCpccGFub3NlIDAyMDIw
 NjAzMDQwNTA1MDIwMzA0fVRtcyBSbW57XCpcZmFsdCBUaW1lcyBOZXcgUm9tYW59O30NCntc
 ZjdcZmJpZGkgXGZzd2lzc1xmY2hhcnNldDBcZnBycTJ7XCpccGFub3NlIDAyMGIwNjA0MDIw
 MjAyMDMwMjA0fUhlbHY7fXtcZjhcZmJpZGkgXGZyb21hblxmY2hhcnNldDBcZnBycTJ7XCpc
 cGFub3NlIDAyMDQwNTAzMDYwNTA2MDIwMzA0fU5ldyBZb3JrO317XGY5XGZiaWRpIFxmc3dp
 c3NcZmNoYXJzZXQwXGZwcnEye1wqXHBhbm9zZSAwMDAwMDAwMDAwMDAwMDAwMDAwMH1TeXN0
 ZW07fQ0Ke1xmMTBcZmJpZGkgXGZuaWxcZmNoYXJzZXQyXGZwcnEye1wqXHBhbm9zZSAwNTAw
 MDAwMDAwMDAwMDAwMDAwMH1XaW5nZGluZ3M7fXtcZjExXGZiaWRpIFxmbW9kZXJuXGZjaGFy
 c2V0MTI4XGZwcnExe1wqXHBhbm9zZSAwMjAyMDYwOTA0MDIwNTA4MDMwNH1NUyBNaW5jaG97
 XCpcZmFsdCA/bD9yID8/XCc4MVwnNjZjfTt9DQp7XGYxMlxmYmlkaSBcZnJvbWFuXGZjaGFy
 c2V0MTI5XGZwcnEye1wqXHBhbm9zZSAwMjAzMDYwMDAwMDEwMTAxMDEwMX1CYXRhbmd7XCpc
 ZmFsdCA/Pz8/P0U/Pz8/P0VjRT8/Pz8/RT8/Y0VjRT8/Pz8/fTt9e1xmMTNcZmJpZGkgXGZu
 aWxcZmNoYXJzZXQxMzRcZnBycTJ7XCpccGFub3NlIDAyMDEwNjAwMDMwMTAxMDEwMTAxfVNp
 bVN1bntcKlxmYWx0ID8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz99O30NCntcZjE0
 XGZiaWRpIFxmcm9tYW5cZmNoYXJzZXQxMzZcZnBycTJ7XCpccGFub3NlIDAyMDIwNTAwMDAw
 MDAwMDAwMDAwfVBNaW5nTGlVe1wqXGZhbHQgIVBzMk9jdUFlfTt9e1xmMTVcZmJpZGkgXGZt
 b2Rlcm5cZmNoYXJzZXQxMjhcZnBycTF7XCpccGFub3NlIDAyMGIwNjA5MDcwMjA1MDgwMjA0
 fU1TIEdvdGhpY3tcKlxmYWx0ID9sP3IgP1M/Vj9iP059O30NCntcZjE2XGZiaWRpIFxmc3dp
 c3NcZmNoYXJzZXQxMjlcZnBycTJ7XCpccGFub3NlIDAyMGIwNjAwMDAwMTAxMDEwMTAxfURv
 dHVte1wqXGZhbHQgPz8/Pz9FPz9jRT8/Pz8/RWNFPz8/Pz9FPz8/Pz9FY307fXtcZjE3XGZi
 aWRpIFxmbW9kZXJuXGZjaGFyc2V0MTM0XGZwcnExe1wqXHBhbm9zZSAwMjAxMDYwOTA2MDEw
 MTAxMDEwMX1TaW1IZWl7XCpcZmFsdCBvPz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/
 fTt9DQp7XGYxOFxmYmlkaSBcZm1vZGVyblxmY2hhcnNldDEzNlxmcHJxMXtcKlxwYW5vc2Ug
 ...
 
 Looks normal to me.
 
 Now:
 This is the part of the mailbody which I grabbed from the mailqueue after 
 user B received the mail and sieve injected
 it back into the mailqueue for delivery to user C.
 
 ...
 Message-ID: 53b12105.2020...@domain.net
 Date: Mon, 30 Jun 2014 10:34:13 +0200
 From: =?ISO-8859-15?Q?Urban_L=F6sch_Enas?= us...@domain.net
 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 
 Thunderbird/24.2.0
 MIME-Version: 1.0
 To: Urban Loesch us...@domain.net
 Subject: testmail 5
 X-Enigmail-Version: 1.6
 Content-Type: multipart/mixed;
  boundary

Another Crash in service imap with version 2.2.13 - Debian Wheezy

2014-06-26 Thread Urban Loesch
output = 0xa062d0
bytes = 39
__FUNCTION__ = client_input
#16 0x7fbf2637478e in io_loop_call_io (io=0x9ffb60) at ioloop.c:439
ioloop = 0x9dc740
t_id = optimized out
__FUNCTION__ = io_loop_call_io
#17 0x7fbf263757b7 in io_loop_handler_run_internal (ioloop=optimized out) 
at ioloop-epoll.c:206
ctx = 0x9dd3d0
events = 0xa955f0
event = 0x9de240
list = 0x9dee30
io = 0xa955f0
tv = {tv_sec = 29, tv_usec = 742827}
events_count = optimized out
msecs = optimized out
ret = 1
i = optimized out
call = optimized out
__FUNCTION__ = io_loop_handler_run_internal
#18 0x7fbf26374819 in io_loop_call_io (io=0x9dc740) at ioloop.c:443
ioloop = 0x7fff64b533f0
t_id = 0
__FUNCTION__ = io_loop_call_io
#19 0x7fbf26321a23 in master_service_run (service=0x9dc740, 
callback=callback@entry=0x420d20 client_connected) at master-service.c:566
No locals.
#20 0x0040c1e8 in main (argc=1, argv=0x9dc390) at main.c:410
set_roots = {0x428960, 0x0}
login_set = {auth_socket_path = 0x9d4048 \001, postlogin_socket_path 
= 0x0, postlogin_timeout_secs = 60, callback = 0x420bb0
login_client_connected, failure_callback = 0x4208c0 login_client_failed, 
request_auth_token = 1}
service_flags = optimized out
storage_service_flags = optimized out
username = 0x9dc5d0 @ǝ
c = optimized out
- end backtrace -

Have you any idea how I can solve this?

Many thanks
Urban Loesch

doveconf -n:
# 2.2.13 (705fd8f3f485): /etc/dovecot/dovecot.conf
# OS: Linux 3.4.67-vs2.3.3.9-rol-em64t-efigpt x86_64 Debian 7.5 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 1 weeks
auth_mechanisms = plain login
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
info_log_path = syslog
login_trusted_networks = INTERNAL_IP
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n
mail_log_prefix = %s(%u pid:%p session:%{session}): 
mail_plugins =  quota mail_log notify acl zlib stats virtual
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags
copy include variables body enotify environment mailbox date ihave duplicate 
imapflags notify
mdbox_rotate_size = 10 M
namespace {
  list = children
  location = mdbox:/home/vmail/%%d/%%n
  prefix = shared/%%u/
  separator = /
  subscriptions = no
  type = shared
}
namespace {
  hidden = no
  inbox = no
  list = children
  location = virtual:/home/virtual:INDEX=~/virtual
  prefix = [mymail]/
  separator = /
  subscriptions = yes
  type = private
}
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Items {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  mailbox [mymail]/All {
auto = no
special_use = \All
  }
  prefix =
  separator = /
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  acl = vfile
  acl_shared_dict = file:/home/vmail/%d/shared-mailboxes
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename 
flag_change save mailbox_create append
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 15
  stats_command_min_time = 1 mins
  stats_domain_min_time = 12 hours
  stats_ip_min_time = 12 hours
  stats_memory_limit = 16 M
  stats_refresh = 30 secs
  stats_session_min_time = 15 mins
  stats_track_cmds = no
  stats_user_min_time = 1 hours
  zlib_save = gz
  zlib_save_level = 9
}
protocols = imap pop3 lmtp sieve
service auth {
  unix_listener auth-userdb {
group = mailstore
mode = 0660
user = root
  }
}
service imap-login {
  inet_listener imap {
port = 143
  }
  process_limit = 48
  process_min_avail = 3
  service_count = 1
}
service imap {
  process_limit = 48
  process_min_avail = 2
  service_count = 1
}
service lmtp {
  inet_listener lmtp {
port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0666
user = postfix
  }
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  process_limit = 16
  process_min_avail = 2
  service_count = 1
}
service pop3 {
  process_limit = 16
  process_min_avail = 2
  service_count = 1
}
service quota-warning {
  executable = script /usr/local/rol/dovecot/quota-warning.sh

[UPDATE]: Another Crash in service imap with version 2.2.13 - Debian Wheezy

2014-06-26 Thread Urban Loesch
Hi,

short update.
I switched back to Debian Squeeze. Same Dovecot Version 2.2.13.
The crash happens also on Squeeze.

Strangely, the crash didn't happen yesterday.
And it happens only for one particular user. Other users with the same iOS 
os-version=7.1.1 (11D201) aren't affected.

Also very strange: the crash doesn't happen every time (I hadn't seen that 
before).

Logs in chronological order from the last attempt:
...
Jun 26 15:00:07 mailstore dovecot: imap-login: ID sent: 
x-session-id=1ji1xbz8fACXEq4b, x-originating-ip=CLIENT_IP, 
x-originating-port=51580,
x-connected-ip=PROXY_IP, x-connected-port=143, x-proxy-ttl=4: user=, 
rip=CLIENT_IP, lip=PROXY_IP, secured, session=1ji1xbz8fACXEq4b
Jun 26 15:00:08 mailstore dovecot: imap-login: Login: user=u...@domain.net, 
method=PLAIN, rip=CLIENT_IP, lip=PROXY_IP, mpid=6407, secured,
session=1ji1xbz8fACXEq4b
Jun 26 15:00:08 mailstore dovecot: imap(u...@domain.net pid:6407 
session:1ji1xbz8fACXEq4b): ID sent: name=iPhone Mail, version=11D201, os=iOS,
os-version=7.1.1 (11D201)
Jun 26 15:00:09 mailstore dovecot: imap(u...@domain.net pid:6407 
session:1ji1xbz8fACXEq4b): Fatal: master: service(imap): child 6407 killed 
with
signal 11 (core dumped)

Jun 26 15:00:09 mailstore dovecot: imap-login: ID sent: 
x-session-id=hWTKxbz8gACXEq4b, x-originating-ip=CLIENT_IP, 
x-originating-port=51584,
x-connected-ip=PROXY_IP, x-connected-port=143, x-proxy-ttl=4: user=, 
rip=CLIENT_IP, lip=PROXY_IP, secured, session=hWTKxbz8gACXEq4b
Jun 26 15:00:09 mailstore dovecot: imap-login: Login: user=u...@domain.net, 
method=PLAIN, rip=CLIENT_IP, lip=PROXY_IP, mpid=46064, secured,
session=hWTKxbz8gACXEq4b
Jun 26 15:00:09 mailstore dovecot: imap(u...@domain.net pid:46064 
session:hWTKxbz8gACXEq4b): ID sent: name=iPhone Mail, version=11D201, os=iOS,
os-version=7.1.1 (11D201)
Jun 26 15:00:09 mailstore dovecot: imap(u...@domain.net pid:46064 
session:hWTKxbz8gACXEq4b): Fatal: master: service(imap): child 46064 killed 
with
signal 11 (core dumped)

Jun 26 15:00:18 mailstore dovecot: imap-login: ID sent: 
x-session-id=M7BRxrz8iQCXEq4b, x-originating-ip=CLIENT_IP, 
x-originating-port=51593,
x-connected-ip=PROXY_IP, x-connected-port=143, x-proxy-ttl=4: user=, 
rip=CLIENT_IP, lip=PROXY_IP, secured, session=M7BRxrz8iQCXEq4b
Jun 26 15:00:18 mailstore dovecot: imap-login: Login: user=u...@domain.net, 
method=PLAIN, rip=CLIENT_IP, lip=PROXY_IP, mpid=41143, secured,
session=M7BRxrz8iQCXEq4b
Jun 26 15:00:18 mailstore dovecot: imap(u...@domain.net pid:41143 
session:M7BRxrz8iQCXEq4b): ID sent: name=iPhone Mail, version=11D201, os=iOS,
os-version=7.1.1 (11D201)
Jun 26 15:02:17 mailstore dovecot: imap(u...@domain.net pid:41143 
session:M7BRxrz8iQCXEq4b): Connection closed bytes=341/1991 
session=M7BRxrz8iQCXEq4b
...

The last session ended normally. Very strange to me.

I think this is a problem only with that specific user and his iPhone.
On the other hand, the crashes still shouldn't happen.

Thanks
Urban Loesch



 Original-Nachricht 
Betreff: Another Crash in service imap with version 2.2.13 - Debian Wheezy
Datum: Thu, 26 Jun 2014 09:25:27 +0200
Von: Urban Loesch b...@enas.net
Antwort an: Dovecot Mailing List dovecot@dovecot.org
An: Dovecot Mailing List dovecot@dovecot.org

Hi,

yesterday I updated my second server from Debian Squeeze to Debian Wheezy.
Since today I get the following errors in my logs:

Error-Log:
...
Jun 26 09:08:28 mailstore dovecot: imap(u...@domain.net pid:28898 
session:iuMX3Lf8fACXLrFC): Fatal: master: service(imap): child 28898 killed 
with
signal 11 (core dumped)
...

Mail-log
...
Jun 26 09:08:28 mailstore dovecot: imap-login: ID sent: 
x-session-id=iuMX3Lf8fACXLrFC, x-originating-ip=CLIENT_IP, 
x-originating-port=52092,
x-connected-ip=PROXY_IP, x-connected-port=143, x-proxy-ttl=4: user=, 
rip=CLIENT_IP, lip=PROXY_IP, secured, session=iuMX3Lf8fACXLrFC
Jun 26 09:08:28 mailstore dovecot: imap-login: Login: user=u...@domain.net, 
method=PLAIN, rip=CLIENT_IP, lip=PROXY_IP, mpid=28898, secured,
session=iuMX3Lf8fACXLrFC
Jun 26 09:08:28 mailstore dovecot: imap(u...@domain.net pid:28898 
session:iuMX3Lf8fACXLrFC): ID sent: name=iPhone Mail, version=11D201, os=iOS,
os-version=7.1.1 (11D201)
Jun 26 09:08:28 mailstore dovecot: imap(u...@domain.net pid:28898 
session:iuMX3Lf8fACXLrFC): Fatal: master: service(imap): child 28898 killed 
with
signal 11 (core dumped)
...

I made a backtrace:

- start backtrace -
[Thread debugging using libthread_db enabled]
Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.
Core was generated by `dovecot/imap'.
Program terminated with signal 11, Segmentation fault.
#0  0x in ?? ()
(gdb) bt full
#0  0x in ?? ()
No symbol table info available.
#1  0x7fbf26650c44 in mailbox_save_cancel (_ctx=optimized out) at 
mail-storage.c:2116
ctx = 0xa95500
keywords = 0x0
mail = optimized out
#2  0x7fbf2665104f in mailbox_save_begin (ctx=ctx@entry

Crash in service imap with version 2.2.13

2014-06-24 Thread Urban Loesch
 = io_loop_run
#18 0x7f7dd8645153 in master_service_run (service=0x128c5f0, 
callback=0x20fa) at master-service.c:566
No locals.
#19 0x00420f87 in main (argc=1, argv=0x128c3a0) at main.c:410
set_roots = {0x428fc0, 0x0}
login_set = {auth_socket_path = 0x1284050 \210@(\001, 
postlogin_socket_path = 0x0, postlogin_timeout_secs = 60, callback = 0x421180
login_client_connected, failure_callback = 0x421120 login_client_failed,
  request_auth_token = 1}
service_flags = value optimized out
storage_service_flags = MAIL_STORAGE_SERVICE_FLAG_DISALLOW_ROOT
username = 0x0
c = value optimized out

- end backtrace -

Have you any idea how I can solve this?

Many thanks
Urban Loesch

doveconf -n:

...
# 2.2.13 (38cd37cea8b1): /etc/dovecot/dovecot.conf
# OS: Linux 3.4.67-vs2.3.3.9-rol-em64t-efigpt x86_64 Debian 6.0.9 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 80 M
auth_cache_ttl = 1 weeks
auth_mechanisms = plain login
auth_verbose = yes
default_client_limit = 2000
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = INTERNAL_IPS
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_log_prefix = %s(%u pid:%p session:%{session}): 
mail_plugins =  quota mail_log notify zlib
mail_uid = mailstore
mailbox_idle_check_interval = 1 mins
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags
copy include variables body enotify environment mailbox date ihave duplicate 
imapflags notify
mdbox_rotate_size = 10 M
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Items {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
  separator = /
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename 
flag_change save mailbox_create append
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 15
  zlib_save = gz
  zlib_save_level = 9
}
protocols = imap pop3 lmtp sieve
service auth-worker {
  service_count = 0
  vsz_limit = 512 M
}
service auth {
  unix_listener auth-userdb {
group = mailstore
mode = 0660
user = root
  }
}
service imap-login {
  inet_listener imap {
port = 143
  }
  process_limit = 256
  process_min_avail = 50
  service_count = 1
}
service imap {
  process_limit = 2048
  process_min_avail = 50
  service_count = 1
  vsz_limit = 512 M
}
service lmtp {
  inet_listener lmtp {
address = *
port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0666
user = postfix
  }
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service pop3 {
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service quota-warning {
  executable = script /usr/local/rol/dovecot/quota-warning.sh
  unix_listener quota-warning {
user = mailstore
  }
  user = mailstore
}
ssl = no
ssl_cert = /etc/dovecot/ssl/dovecot.pem
ssl_key = /etc/dovecot/ssl/dovecot.key
userdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
protocol lmtp {
  mail_fsync = optimized
  mail_plugins =  quota mail_log notify zlib sieve zlib
}
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  imap_id_log = *
  imap_logout_format = bytes=%i/%o session=%{session}
  mail_max_userip_connections = 40
  mail_plugins =  quota mail_log notify zlib imap_quota imap_zlib
}
protocol pop3 {
  mail_plugins =  quota mail_log notify zlib
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, size=%s 
uidl_hash=%u session=%{session}
}




Re: [Dovecot] All Mail folder as in Gmail

2014-04-27 Thread Urban Loesch

You should start here:
http://wiki2.dovecot.org/Plugins/Virtual
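
As a rough sketch of what that wiki page describes (the namespace prefix and paths below are examples, not required values), a virtual "All" mailbox needs the plugin enabled, a virtual namespace, and a dovecot-virtual file listing the source mailboxes plus a search query:

```
# dovecot.conf (sketch): enable the plugin and add a virtual namespace
mail_plugins = $mail_plugins virtual
namespace {
  type = private
  prefix = virtual/
  separator = /
  location = virtual:/etc/dovecot/virtual:INDEX=~/virtual
}

# /etc/dovecot/virtual/All/dovecot-virtual (sketch):
# mailbox patterns first (one per line), then an indented
# IMAP SEARCH query selecting which messages appear
INBOX
Sent
  all
```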


Am 27.04.2014 07:26, schrieb Dmitry Podkovyrkin:

Hello
 How can I make an All Mail folder that receives the messages from both the
 Inbox and Sent folders?
 The mail client already saves a copy of sent emails to the Sent folder.



Re: [Dovecot] Allowing non-SSL connections only for certain Password Databases

2014-04-23 Thread Urban Loesch



Am 23.04.2014 10:38, schrieb Benjamin Podszun:

On Tuesday, April 22, 2014 3:31:47 PM CEST, Urban Loesch wrote:

Hi,



Is there a way to set disable_plaintext_auth to different values
for different Password Databases? Is there another way to do it?



Why do you not force SSL for all users?

I have no idea how this could be made with different databases. I have
only built a solution for all users stored in MySQL.

I'm able to force SSL for imap and pop3 on a per user basis with e.g.:

...
password_query = SELECT password FROM users WHERE userid = '%u' AND
allow_login = 'y' AND ( force_ssl = 'y' OR '%c' = 'secured');


Wait a second. I might be totally off here, but the way I read that query,
you accept plaintext credentials unsecured and then check the DB, after
which you might say "You're not allowed to log in."


Yes, that is correct, and I knew that when I configured the setup. But I 
can't change the clients' settings.




If that is correct, every user might send their credentials over
unsecured connections?


Yes, that is a disadvantage. As I just said, I can't change that.



In my opinion this doesn't help. Clients cannot know in advance that
they shouldn't try to login.

I guess I'd either

- drop the requirement (best option, hit the users that don't support
TLS or offer them help to upgrade/fix their setup)


Can you help me upgrade/fix 40k users who have no idea how to change the 
settings of a mail client? Send me your phone number and I will redirect 
all those requests to you :-)


You will see very quickly that it's not practical to force all users 
to use SSL at the same time. With this setup I can move users to SSL 
step by step.




- live with the possibility that the system users are potentially
disclosing their credentials.


I have no system users.




Take a step back: A random client connects to dovecot. It didn't log in
yet. How would you change the capabilities to reflect 'login without
starttls is allowed or not', depending on a username that you cannot
know at this point?


I know all usernames because I activate them myself, so I can control which 
users must use SSL and which must not. I can also, for example, control which 
users are forced to use port 587 for sending their email.




My take, ignoring the "There shouldn't be a need for that" quip, is that
this is next to impossible. And not worth the challenge.

Ben


Re: [Dovecot] Allowing non-SSL connections only for certain Password Databases

2014-04-23 Thread Urban Loesch



Am 23.04.2014 10:30, schrieb Reindl Harald:



Am 23.04.2014 05:22, schrieb Dan Pollock:

On Apr 22, 2014, Urban Loesch wrote:

Is there a way to set disable_plaintext_auth to different values for 
different Password Databases? Is there another way to do it?



Why do you not force SSL for all users?


I have some users whose mail clients don't properly support SSL.


which ones in 2014?


You will laugh, but we have some customers who are still using
Windows 3.11. Not many, but they exist...


Re: [Dovecot] Allowing non-SSL connections only for certain Password Databases

2014-04-22 Thread Urban Loesch

Hi,



Is there a way to set disable_plaintext_auth to different values for 
different Password Databases? Is there another way to do it?



Why do you not force SSL for all users?

I have no idea how this could be made with different databases. I have 
only built a solution for all users stored in MySQL.


I'm able to force SSL for imap and pop3 on a per user basis with e.g.:

...
password_query = SELECT password FROM users WHERE userid = '%u' AND 
allow_login = 'y' AND ( force_ssl = 'y' OR '%c' = 'secured');

...

Query adopted from:
http://wiki2.dovecot.org/Authentication/RestrictAccess

For available variables see:
http://wiki2.dovecot.org/Variables

As I just said, this works for me, but only for users stored in mysql.
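
To show how the query plugs into the passdb, here is a hedged sketch of a dovecot-sql config fragment (the connection values and password scheme are placeholders, and the query uses force_ssl = 'n' per the correction sent as a follow-up):

```
# /etc/dovecot/dovecot-sql.conf.ext (sketch; connect values are placeholders)
driver = mysql
connect = host=localhost dbname=mail user=dovecot password=secret
default_pass_scheme = SHA512-CRYPT

# %c expands to "secured" on SSL/TLS connections, so only users with
# force_ssl = 'n' may authenticate over a plaintext connection.
password_query = SELECT password FROM users WHERE userid = '%u' AND allow_login = 'y' AND ( force_ssl = 'n' OR '%c' = 'secured' )
```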

Regards
Urban


Re: [Dovecot] Allowing non-SSL connections only for certain Password Databases

2014-04-22 Thread Urban Loesch

Sorry, there's a typo in the SQL query.

It should be force_ssl = 'n', not 'y'.
My fault.

Best
Urban


Am 22.04.2014 15:31, schrieb Urban Loesch:

Hi,



Is there a way to set disable_plaintext_auth to different values for
different Password Databases? Is there another way to do it?



Why do you not force SSL for all users?

I have no idea how this could be made with different databases. I have
only built a solution for all users stored in MySQL.

I'm able to force SSL for imap and pop3 on a per user basis with e.g.:

...
password_query = SELECT password FROM users WHERE userid = '%u' AND
allow_login = 'y' AND ( force_ssl = 'y' OR '%c' = 'secured');
...

Query adopted from:
http://wiki2.dovecot.org/Authentication/RestrictAccess

For available variables see:
http://wiki2.dovecot.org/Variables

As I just said, this works for me, but only for users stored in mysql.

Regards
Urban


[Dovecot] POP3: Panic: Trying to allocate 0 bytes

2014-04-14 Thread Urban Loesch
Hi,

today I upgraded one of our dovecot servers from 2.1.17 to version 2.2.12 under 
Debian Squeeze.
After the upgrade I got many of the following errors for pop3 users.

My logfile shows:

...
Apr 14 09:28:05 mailstore dovecot: pop3(u...@domain.net pid:39688 
session:o1DTn/v2IABPNmv7): Panic: Trying to allocate 0 bytes
Apr 14 09:28:05 mailstore dovecot: pop3(u...@domain.net pid:39688 
session:o1DTn/v2IABPNmv7): Error: Raw backtrace:
/usr/lib/dovecot/libdovecot.so.0(+0x6bb0a) [0x7f1ae9a18b0a] - 
/usr/lib/dovecot/libdovecot.
so.0(+0x6bb86) [0x7f1ae9a18b86] - /usr/lib/dovecot/libdovecot.so.0(i_error+0) 
[0x7f1ae99d1e8f] - /usr/lib/dovecot/libdovecot.so.0(+0x8148b)
[0x7f1ae9a2e48b] - dovecot/pop3() [0x4077f0] - 
dovecot/pop3(client_command_execute+0x9d) [0x4
07d0d] - dovecot/pop3(client_handle_input+0x80) [0x405810] - 
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x4e) [0x7f1ae9a28d2e] -
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xaf) 
[0x7f1ae9a29e9f] - /usr/lib/do
vecot/libdovecot.so.0(io_loop_handler_run+0x9) [0x7f1ae9a28db9] - 
/usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f1ae9a28e38] -
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f1ae99d6c43] - 
dovecot/pop3(main+0x2
57) [0x404a67] - /lib/libc.so.6(__libc_start_main+0xfd) [0x7f1ae9669c8d] - 
dovecot/pop3() [0x4045b9]
Apr 14 09:28:05 mailstore dovecot: pop3(u...@domain.net pid:39688 
session:o1DTn/v2IABPNmv7): Fatal: master: service(pop3): child 39688 killed 
with
signal 6 (core dumps disabled)




Have you any idea what the error could be?
As this is a production server I switched back to version 2.1.17.

Many thanks
Urban Loesch

doveconf -n:
# 2.2.12 (978871ca81e7): /etc/dovecot/dovecot.conf
# OS: Linux 3.4.67-vs2.3.3.9-rol-em64t-efigpt x86_64 Debian 6.0.9 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 1 weeks
auth_mechanisms = plain login
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = $INTERNAL_IPS
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_log_prefix = %s(%u pid:%p session:%{session}): 
mail_plugins =  quota mail_log notify zlib
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags
copy include variables body enotify environment mailbox date ihave imapflags 
notify
mdbox_rotate_size = 10 M
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Items {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
  separator = /
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 9
}
protocols = imap pop3 lmtp sieve
service auth {
  unix_listener auth-userdb {
group = mailstore
mode = 0660
user = root
  }
}
service imap-login {
  inet_listener imap {
port = 143
  }
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service imap {
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service lmtp {
  inet_listener lmtp {
address = *
port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0666
user = postfix
  }
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service pop3 {
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service quota-warning {
  executable = script /usr/local/rol/dovecot/quota-warning.sh
  unix_listener quota-warning {
user = mailstore
  }
  user = mailstore
}
ssl = no
ssl_cert = /etc/dovecot/certs/dovecot.pem
ssl_key = /etc/dovecot/private/dovecot.pem
userdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
protocol lmtp {
  mail_fsync = optimized
  mail_plugins =  quota mail_log notify zlib sieve zlib
}
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  imap_id_log = *
  imap_logout_format = bytes=%i/%o session=%{session}
  mail_max_userip_connections = 40
  mail_plugins =  quota mail_log notify zlib imap_quota imap_zlib
}
protocol pop3 {
  mail_plugins =  quota mail_log notify zlib
}

Re: [Dovecot] Panic: file mail-index-map.c: line 547 (mail_index_map_lookup_seq_range): assertion failed: (first_uid > 0)

2014-04-08 Thread Urban Loesch
Hi,

today I had the same problem with 2.2.12 on debian squeeze.

Here comes the log:

...
Apr  8 08:40:45 mailstoreul dovecot: imap(u...@domain.net pid:3618 
session:9cAjIG724wDD/uGI): Panic: file mail-index-map.c: line 547
(mail_index_map_lookup_seq_range): assertion failed: (first_uid > 0)
Apr  8 08:40:45 mailstoreul dovecot: imap(u...@domain.net pid:3618 
session:9cAjIG724wDD/uGI): Error: Raw backtrace:
/usr/lib/dovecot/libdovecot.so.0(+0x6b85a) [0x7fb17b16b85a] - 
/usr/lib/dovecot/libdovecot.so.0(+0x6b8d6) [0x7fb17b16b8d6] -
/usr/lib/dovecot/libdovecot.so.0(i_error+0) [0x7fb17b124b9f] - 
/usr/lib/dovecot/libdovecot-storage.so.0(+0xbe7b4) [0x7fb17b48d7b4] -
/usr/lib/dovecot/libdovecot-storage.so.0(mail_index_lookup_seq+0x12) 
[0x7fb17b49f232] - /usr/lib/dovecot/modules/lib20_virtual_plugin.so(+0x9cbd)
[0x7fb17993dcbd] - /usr/lib/dovecot/modules/lib20_virtual_plugin.so(+0xa5eb) 
[0x7fb17993e5eb] -
/usr/lib/dovecot/modules/lib20_virtual_plugin.so(virtual_storage_sync_init+0x5f5)
 [0x7fb17993f4d5] -
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_sync_init+0x31) 
[0x7fb17b450461] - dovecot/imap(imap_sync_init+0x7a) [0x42038a] -
dovecot/imap(cmd_sync_delayed+0x1db) [0x42068b] - 
dovecot/imap(client_handle_input+0x1ed) [0x4176ad] - 
dovecot/imap(client_input+0x6f) [0x41795f] -
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7fb17b17b3e6] - 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xaf) [0x7fb17b17c46f]
- /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fb17b17b358] - 
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fb17b129953]
- dovecot/imap(main+0x2a7) [0x420e67] - 
/lib/libc.so.6(__libc_start_main+0xfd) [0x7fb17adbcc8d] - dovecot/imap() 
[0x40bcc9]
Apr  8 08:40:45 mailstoreul dovecot: imap(u...@domain.net pid:3618 
session:9cAjIG724wDD/uGI): Fatal: master: service(imap): child 3618 killed 
with
signal 6 (core dumps disabled)
...

Now I enabled core dumps. If it happens again I will send it.
My client is Thunderbird 24.2.0. I have no idea which operation triggered the 
error. I moved some mails to multiple different subfolders
under the INBOX.

Many thanks
Urban


On 11.03.2014 21:00, Hardy Flor wrote:
 Version: 2.2.12
 OS: Debian wheezy x86_64
 
 2014 Mar 11 20:06:53 ptb-test imap(flor_hardy): Panic: file mail-index-map.c: 
 line 547 (mail_index_map_lookup_seq_range): assertion failed: (first_uid > 0)
 2014 Mar 11 20:06:53 ptb-test imap(flor_hardy): Fatal: master: service(imap): 
 child 2760 killed with signal 6 (core dumped)
 
 GNU gdb (GDB) 7.4.1-debian
 Copyright (C) 2012 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show copying
 and show warranty for details.
 This GDB was configured as x86_64-linux-gnu.
 For bug reporting instructions, please see:
 http://www.gnu.org/software/gdb/bugs/...
 Reading symbols from /usr/lib/dovecot/imap...Reading symbols from 
 /usr/lib/debug/usr/lib/dovecot/imap...done.
 done.
 [New LWP 2760]
 
 warning: Can't read pathname for load map: Input/output error.
 [Thread debugging using libthread_db enabled]
 Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.
 Core was generated by `dovecot/imap'.
 Program terminated with signal 6, Aborted.
 #0  0x7f32d28b4475 in raise () from /lib/x86_64-linux-gnu/libc.so.6
 (gdb) bt full
 #0  0x7f32d28b4475 in raise () from /lib/x86_64-linux-gnu/libc.so.6
 No symbol table info available.
 #1  0x7f32d28b76f0 in abort () from /lib/x86_64-linux-gnu/libc.so.6
 No symbol table info available.
 #2  0x7f32d2c78345 in default_fatal_finish (type=optimized out, 
 status=status@entry=0) at failures.c:193
 backtrace = 0x186d768 /usr/lib/dovecot/libdovecot.so.0(+0x6b34f) 
 [0x7f32d2c7834f] - /usr/lib/dovecot/libdovecot.so.0(+0x6b3ae)
 [0x7f32d2c783ae] - /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) 
 [0x7f32d2c31e8e] - /usr/lib/d...
 #3  0x7f32d2c783ae in i_internal_fatal_handler (ctx=0x7fff8d12aa30, 
 format=optimized out, args=optimized out) at failures.c:657
 status = 0
 #4  0x7f32d2c31e8e in i_panic (format=format@entry=0x7f32d2fbc098 file 
 %s: line %d (%s): assertion failed: (%s)) at failures.c:267
 ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0}
 args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 
 0x7fff8d12ab20, reg_save_area = 0x7fff8d12aa60}}
 #5  0x7f32d2fa03b2 in mail_index_map_lookup_seq_range (map=optimized 
 out, first_uid=0, last_uid=optimized out,
 first_seq_r=optimized out, last_seq_r=optimized out) at 
 mail-index-map.c:549
 __FUNCTION__ = mail_index_map_lookup_seq_range
 #6  0x7f32d2fa856d in tview_lookup_seq_range (view=0x18a6850, 
 first_uid=0, last_uid=0, first_seq_r=0x18a79e0, last_seq_r=0x18a79e0)
 at mail-index-transaction-view.c:178
 tview = 0x18a6850

[Dovecot] Crash in pop3 with version 2.2.12

2014-03-28 Thread Urban Loesch
Hi,

today I upgraded to version 2.2.12 under Debian Squeeze.
I saw some people on the list who had the same problems with version 2.2.11, 
which should have been fixed in version 2.2.12.

My logfile shows:

..
Mar 28 08:25:01 mailstore dovecot: pop3-login: Login: user=u...@domain.net, 
method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=34568, secured,
session=Jp6RmaX1YAB/AAAB
Mar 28 08:25:06 mailstore dovecot: pop3(u...@domain.net pid:34568 
session:Jp6RmaX1YAB/AAAB): Fatal: master: service(pop3): child 34568 killed 
with
signal 11 (core dumped)
...

I did some more debugging and found that the pop3 process crashes on the UIDL 
command.
Here is the output from my telnet session:

..
root@mailstore: #  telnet localhost 110
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
+OK Dovecot ready.
user u...@domain.net
+OK
pass PASS
+OK Logged in.
list
+OK 5 messages:
1 3492
2 21924
3 3525
4 3472
5 3273
.
uidl
Connection closed by foreign host.
...

Listing or retrieving messages works normally. Only on the UIDL command does 
the service crash.

I made a backtrace.

- start backtrace -
Core was generated by `dovecot/pop3'.
Program terminated with signal 11, Segmentation fault.
#0  0x7f9dd8ca488d in vfprintf () from /lib/libc.so.6
(gdb) bt full
#0  0x7f9dd8ca488d in vfprintf () from /lib/libc.so.6
No symbol table info available.
#1  0x7f9dd8cc6732 in vsnprintf () from /lib/libc.so.6
No symbol table info available.
#2  0x7f9dd904d0db in str_vprintfa (str=0x11aa4f8, fmt=0x409184 %u %s, 
args=0x7ab4fff0) at str.c:155
args2 = {{gp_offset = 16, fp_offset = 48, overflow_arg_area = 
0x7ab500e0, reg_save_area = 0x7ab50010}}
init_size = 4231558
pos = 0
ret = value optimized out
ret2 = value optimized out
__FUNCTION__ = str_vprintfa
#3  0x004055ff in client_send_line (client=0x11d1e50, fmt=value 
optimized out) at pop3-client.c:678
str = 0x11aa4f8
_data_stack_cur_id = 4
va = {{gp_offset = 32, fp_offset = 48, overflow_arg_area = 
0x7ab500e0, reg_save_area = 0x7ab50010}}
ret = value optimized out
__FUNCTION__ = client_send_line
#4  0x004073cd in list_uidls_saved_iter (client=0x11d1e50, 
ctx=0x11da3c0) at pop3-commands.c:666
found = true
#5  list_uids_iter (client=0x11d1e50, ctx=0x11da3c0) at pop3-commands.c:693
str = value optimized out
permanent_uidl = value optimized out
found = value optimized out
failed = value optimized out
#6  0x00407d88 in cmd_uidl (client=0x11d1e50, name=value optimized 
out, args=0x408880 ) at pop3-commands.c:874
ctx = 0x0
seq = value optimized out
#7  client_command_execute (client=0x11d1e50, name=value optimized out, 
args=0x408880 ) at pop3-commands.c:938
No locals.
#8  0x00405870 in client_handle_input (client=0x11d1e50) at 
pop3-client.c:739
_data_stack_cur_id = 3
line = value optimized out
args = 0x408880 
ret = value optimized out
#9  0x7f9dd903c3d6 in io_loop_call_io (io=0x11d2760) at ioloop.c:388
ioloop = 0x11b2740
t_id = 2
#10 0x7f9dd903d45f in io_loop_handler_run (ioloop=value optimized out) at 
ioloop-epoll.c:220
ctx = 0x11b2aa0
event = 0x11b3900
list = 0x11b44d0
io = 0x51
tv = {tv_sec = 9, tv_usec = 999326}
msecs = value optimized out
ret = 1
i = 0
call = false
__FUNCTION__ = io_loop_handler_run
#11 0x7f9dd903c348 in io_loop_run (ioloop=0x11b2740) at ioloop.c:412
__FUNCTION__ = io_loop_run
#12 0x7f9dd8fea953 in master_service_run (service=0x11b25d0, 
callback=0x409186) at master-service.c:566
No locals.
#13 0x00404ac7 in main (argc=1, argv=0x11b2390) at main.c:277
set_roots = {0x4094e0, 0x0}
login_set = {auth_socket_path = 0x11aa050 
/var/run/dovecot/auth-master, postlogin_socket_path = 0x0, 
postlogin_timeout_secs = 60, callback =
0x404ca0 login_client_connected,
  failure_callback = 0x404c50 login_client_failed, request_auth_token 
= 0}
service_flags = value optimized out
storage_service_flags = MAIL_STORAGE_SERVICE_FLAG_DISALLOW_ROOT
username = 0x0
c = value optimized out
(gdb) quit
- end backtrace -

And at least my pop3 configuration part:

- start config -
protocol pop3 {
  mail_plugins =  quota mail_log notify acl zlib stats
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_lock_session = yes
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, size=%s 
uidl_hash=%u session=%{session}
  pop3_reuse_xuidl = yes
}
- end config -

Setting pop3_reuse_xuidl = no has no effect. Always the same crash.

Thanks and regards
Urban Loesch


Re: [Dovecot] Crash in pop3 with version 2.2.12

2014-03-28 Thread Urban Loesch
Hi,

thanks for your fast help.
Now pop3 works again without error.

Thanks
Urban


On 28.03.2014 15:03, Teemu Huovila wrote:
 Thats my bad. This commit should fix it 
 http://hg.dovecot.org/dovecot-2.2/rev/b0359910ec96. Thanks for reporting it.
 
 Teemu Huovila
 


[Dovecot] imap: Error: mmap() failed with file ... dovecot.index.cache: Cannot allocate memory

2014-03-24 Thread Urban Loesch
Hi,

for about 10 days now I have been getting the following error in the mail 
error log many, many times:

...
dovecot: imap(u...@domain.com pid:32769 session:dszL7lX1xADD/uGI): Error: 
mmap() failed with file /home/dovecotindex/domain.com/user/mailboxes/Trash
/dovecot.index.cache: Cannot allocate memory


It's always the same dovecot.index.cache file and only for the same heavily 
used account.
The account is currently used by about 10 different clients via IMAP at the 
same time.

I checked the size of the index cache file and it seems very big:

total 2,7G
-rw--- 1 mailstore mailstore  464 Mär 24 14:36 dovecot.index
-rw--- 1 mailstore mailstore  464 Mär 24 14:36 dovecot.index.backup
-rw--- 1 mailstore mailstore 2,7G Mär 24 14:19 dovecot.index.cache
-rw--- 1 mailstore mailstore  140 Mär 24 14:45 dovecot.index.log
-rw--- 1 mailstore mailstore  89K Mär 24 14:36 dovecot.index.log.2

About 2,7 GB?

To solve the problem temporarily, I removed the index files from the Trash 
folder's index directory
and Dovecot initiated an index rebuild. Now the index files are 
small:

total 28K
-rw--- 1 mailstore mailstore  512 Mär 24 14:47 dovecot.index
-rw--- 1 mailstore mailstore  20K Mär 24 15:28 dovecot.index.cache
-rw--- 1 mailstore mailstore 1,2K Mär 24 15:28 dovecot.index.log


But why could the index cache file be so big?

Many thanks
Urban
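As an aside for the archive: a hedged shell sketch for spotting other runaway cache files before mmap() starts failing. The index root path is only an assumption taken from the INDEX part of mail_location in this thread, and GNU find/coreutils are assumed.

```shell
#!/bin/sh
# List dovecot.index.cache files above a byte threshold, largest first,
# so oversized caches (like the 2.7G one above) can be found early.
find_large_caches() {
    root="$1"
    threshold="${2:-524288000}"   # ~500 MB default
    find "$root" -type f -name 'dovecot.index.cache' -printf '%s %p\n' \
        | awk -v t="$threshold" '$1 + 0 >= t + 0' | sort -nr
}

# Example (assumed index root from this thread):
# find_large_caches /home/dovecotindex
```

Running it from cron and mailing the output would give early warning before the 2 GB mmap trouble shows up again.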


[Dovecot] Force SSL authentication per user basis

2014-02-28 Thread Urban Loesch
Hi,

I'm searching for a way to force encrypted connections for POP3/IMAP on a per 
user basis.

To not break clients which still connect in plaintext (there are
still many of them) I must implement a mechanism to force encrypted connections
on a per user basis.

The users and passwords are stored in a MySQL database, so it would be no 
problem to
extend it with a column like ssl_tls (yes/no).

The question now: how can I get Dovecot to evaluate this field in the right way?
I searched the list and googled, but I couldn't find any solution.

My Dovecot version is 2.2.12.

Many thanks and regards
Urban Loesch


Re: [Dovecot] Force SSL authentication per user basis - SOLVED

2014-02-28 Thread Urban Loesch
Hi,

I found the solution with %c variable.

Thanks
Urban
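For the archive, a minimal sketch of how this can look in an SQL passdb query. On Dovecot of this era, %c expands to a "secured" marker for SSL/TLS (and local) connections and is empty for plaintext ones; the users table and its require_ssl column here are hypothetical, not Urban's actual schema:

```
# dovecot-sql.conf.ext (sketch, hypothetical require_ssl column)
password_query = SELECT username AS user, password \
  FROM users \
  WHERE username = '%n' AND domain = '%d' \
    AND (require_ssl = 'no' OR '%c' = 'secured')
```

Returning no row makes authentication fail for flagged users on unencrypted connections, while plaintext-only clients of other users keep working.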


On 28.02.2014 16:11, Urban Loesch wrote:
 Hi,
 
 I'm searching a way to force encrypted connections for POP3/IMAP on a per 
 user basis.
 
 To not break clients which still connect in plaintext (there are
 still many of it) I must implement a mechanism to force encrypted connections
 on a per user basis.
 
 The users and passwords are stored in a mysql database. So there would be no 
 problem to
 expand the database with a column like ssl_tls - yes/no.
 
 The problem now: how can I get dovecot to process this field in the right way?
 I searched the list and googled about such a way, but I can't find any 
 solution.
 
 My Dovecot version is 2.2.12.
 
 Many thanks and regards
 Urban Loesch
 


Re: [Dovecot] Namespace Mistake

2014-02-04 Thread Urban Loesch
Hi,

if I try a telnet to your IP I get the following:

telnet 37.187.103.194 143
Trying 37.187.103.194...
Connected to 37.187.103.194.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE 
STARTTLS LOGINDISABLED] Dovecot ready.
1 logout
* BYE Logging out
1 OK Logout completed.
Connection closed by foreign host.

You have LOGINDISABLED advertised, which means that you can't log in with a 
plaintext mechanism.
See: http://archiveopteryx.org/imap/logindisabled

Is Sylpheed able to use an auth mechanism other than PLAIN or LOGIN? 
Perhaps you should also use
TLS to encrypt the whole traffic between client and server, as it is supported 
by Dovecot (STARTTLS).

Regards
Urban


On 04.02.2014 16:26, Silvio Siefke wrote:
 Hello,
 
 On Tue, 4 Feb 2014 08:14:20 +0100 (CET) Steffen Kaiser
 skdove...@smail.inf.fh-brs.de wrote:
 
 Those two namespaces should have the same name, because the prefix is
 the same:
 
 Yes that's it. Thank you for help. Can i ask one question then im happy.
 
 I try to connect with email client, but nothing happen. Sylpheed say
 only can not build the box. I has activate the auth_debug but in the
 logs i find no mistake. 
 
 ks3374456 log # cat dovecot-debug.log 
 Feb 04 16:25:40 auth: Debug: Loading modules from directory: 
 /usr/lib64/dovecot/auth
 Feb 04 16:25:40 auth: Debug: Read auth token secret from 
 /var/run/dovecot/auth-token-secret.dat
 Feb 04 16:25:40 auth: Debug: auth client connected (pid=15214)
 
 ks3374456 log # cat dovecot-info.log 
 Feb 04 16:22:58 master: Info: Dovecot v2.2.9 starting up (core dumps disabled)
 Feb 04 16:23:18 imap-login: Info: Aborted login (no auth attempts in 0 secs): 
 user=, rip=176.3.32.140, lip=37.187.103.194, session=MEpKOJbxAgCwAyCM
 Feb 04 16:25:28 master: Info: Dovecot v2.2.9 starting up (core dumps disabled)
 Feb 04 16:25:41 imap-login: Info: Aborted login (no auth attempts in 1 secs): 
 user=, rip=176.3.32.140, lip=37.187.103.194, session=lBTMQJbxCQCwAyCM
 
 ks3374456 log # cat dovecot.log 
 Feb 04 16:25:28 master: Warning: Killed with signal 15 (by pid=15178 uid=0 
 code=kill)
 
 the config:
 ks3374456 log # dovecot -n
 # 2.2.9: /etc/dovecot/dovecot.conf
 # OS: Linux 3.10.23--std-ipv6-64 x86_64 Gentoo Base System release 2.2 
 auth_debug = yes
 debug_log_path = /var/log/dovecot-debug.log
 info_log_path = /var/log/dovecot-info.log
 log_path = /var/log/dovecot.log
 mail_debug = yes
 mail_location = maildir:~/maildir
 mail_plugins = acl
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = fileinto reject envelope encoded-character 
 vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
 copy include variables body enotify environment mailbox date ihave
 namespace {
   list = yes
   location = maildir:/var/vmail/public:LAYOUT=fs:INDEX=~/public
   prefix = Public/
   separator = /
   subscriptions = no
   type = public
 }
 namespace inbox {
   hidden = no
   inbox = yes
   location = 
   mailbox Drafts {
 special_use = \Drafts
   }
   mailbox Junk {
 special_use = \Junk
   }
   mailbox Sent {
 special_use = \Sent
   }
   mailbox Sent Messages {
 special_use = \Sent
   }
   mailbox Trash {
 special_use = \Trash
   }
   prefix = 
   separator = /
   type = private
 }
 passdb {
   args = username_format=%u /var/vmail/auth.d/%d/passwd
   driver = passwd-file
 }
 plugin {
   acl = vfile:/var/vmail/conf.d/%d/acls:cache_secs=300
   sieve = ~/.dovecot.sieve
   sieve_dir = ~/sieve
   sieve_global_dir = /var/vmail/conf.d/%d/sieve
 }
 protocols = imap lmtp
 service auth-worker {
   user = dovecot
 }
 service auth {
   unix_listener /var/spool/postfix/private/auth {
 group = postfix
 mode = 0660
 user = postfix
   }
   user = dovecot
 }
 service imap-login {
   inet_listener imap {
 address = 37.187.103.194
 port = 143
   }
   inet_listener imaps {
 port = 0
   }
 }
 service lmtp {
   unix_listener /var/spool/postfix/private/dovecot-lmtp {
 group = postfix
 mode = 0666
 user = postfix
   }
 }
 ssl_cert = /etc/ssl/dovecot/server.pem
 ssl_key = /etc/ssl/dovecot/server.key
 userdb {
   args = username_format=%u /var/vmail/auth.d/%d/passwd
   driver = passwd-file
 }
 verbose_proctitle = yes
 protocol lmtp {
   mail_plugins = acl sieve
   postmaster_address = webmas...@silviosiefke.com
 }
 protocol lda {
   mail_plugins = sieve
 }
 protocol imap {
   mail_plugins = acl imap_acl mail_log notify
 }
 
 Thank you for help  Nice Day
 Silvio
 


Re: [Dovecot] Architecture for large Dovecot cluster

2014-01-24 Thread Urban Loesch
Hi,

 and some other Dovecot mailing list threads but I am not sure how many users 
 such a setup will handle.  I have a concern about the I/O performance of
 NFS in the suggested architecture above.  One possible option available to us 
 is to split up the mailboxes over multiple clusters with subsets of
 domains.  Is there anyone out there currently running this many users on a 
 Dovecot based mail cluster?  Some suggestions or advice on the best way to
 go would be greatly appreciated.
 

we only run a setup with 35k users (2000 IMAP and 300 POP3 simultaneous 
sessions),
but we split all users and domains across 9 virtual containers. Until now all 
containers run on one bare-metal machine, because
the server is fast enough and quite new.

In front of our backend servers we use two IMAP/POP3 proxies which get their 
static routing information for IMAP/POP3/SMTP/LMTP from
dedicated MySQL databases (master-master mode; multiple slaves are also 
possible). The same goes for the SMTP relay.

This setup allows us to scale out as wide as we need. In theory it's possible to 
use a separate storage backend for each account, scaled out across
multiple servers. Connections between proxies and backends are made over IPv6 on 
layer 2, with no routers in between.
So we have no problems with tight IPv4 space :-)
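As an illustration of the static-routing part: `proxy` and `host` are standard Dovecot passdb extra fields, so a proxy passdb query along these lines would do the job. The routing table and column names here are hypothetical, not the poster's actual schema:

```
# dovecot-sql.conf.ext on the proxy (sketch, hypothetical routing table)
password_query = SELECT username AS user, password, \
  'Y' AS proxy, backend_ip AS host \
  FROM routing WHERE username = '%u'
```

Each user row pins the session to one backend, which is why no director is needed in such a setup.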

Some info on storage backends:
- Mailbox format is mdbox with the zlib plugin. Each file has a max of 10MB.
- Dovecot's internal caches for authentication etc. are doing a good job. Without 
the caches the database becomes busy.
- Central administration functions are implemented in our internal admin 
frontend, e.g. to clear caches, change account passwords, or get/change
user quotas.
- Mail indexes are stored on RAID 1 SSD SLC disks (about 20GB now)
- Mail data is stored on RAID 10 SATA 7.2k rpm disks (10 disks)
- Incoming mail queue and OS for the containers on RAID 1 SAS disks (10k rpm)
- All backends are in HA with a passive machine and DRBD over 10 Gbit cross links

IMAP/POP3/SMTP Proxies are running on 2 dedicated mid range servers (HA):
- IMAP/POP3 proxies are clustered and load balanced with the iptables ClusterIP 
module (poor man's load balancer)
- Same on the SMTP relay servers for outgoing email.
- MX servers for incoming mail are load balanced by DNS priority as usual.

Each setup has its advantages and disadvantages. For example, we have no idea 
how we could use shared folders within one domain if the accounts
are spread across multiple backends. But at the moment we don't need that.
For our needs this setup works very good.

Also thanks to Timo for his great work on dovecot.

Regards
Urban


Re: [Dovecot] Architecture for large Dovecot cluster

2014-01-24 Thread Urban Loesch
On 24.01.2014 16:15, Rick Romero wrote:

 - all Backends are in HA with a passive machine and DRBD with 10GBIT
 Cross Links
  
 
 How do you do backups?
 

The underlying storage is based on LVM, so we can take a daily snapshot on the 
passive server,
mount it read-only, and have no load impact on the active machine during the 
backup window.

Mail data etc. is synced via rsync to a small storage system in a separate 
datacenter over a dedicated 1 Gbit
dark-fiber link. This works very well for us and is within our budget.





Re: [Dovecot] Experience with ALTStorage on NFS mount

2013-11-18 Thread Urban Loesch
Hi Daniel,

thanks for your reply.
I don't need the director because all clients are always proxied to the same 
backend.
I will try it out ASAP.

regards
Urban


On 14.11.2013 19:33, Daniel Parthey wrote:
 Hi Urban
 
 I would recommend you to use NFS version 4 and director instances, especially 
 for such content which is heavily read but seldom written. NFSv4 has way 
 better client-side caching than older NFS versions. You will need to run 
 idmapd on the NFS server and client to map usernames between server and 
 client.
 
 Regards
 Daniel
 


[Dovecot] Experience with ALTStorage on NFS mount

2013-11-14 Thread Urban Loesch
Hi,

we are thinking about moving older mails (e.g. saved more than 6 months ago) to 
an alternative storage mounted via NFS
from an external storage box.

Some technical details:
- the storage will be connected via 1 Gbit Ethernet. Access will be allowed 
from different Dovecot servers, but each one only to its own separate 
directory.
- storage has the option to use RAID 5 or 6. The maximum number of disks is 15.
- the index files and the primary storage are simply split into two different 
directories and are stored locally on SAS disks with RAID 10
- current mail_location config is: 
mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
- future mail_location config should look like: 
mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n:ALT=/home/extstorage/vmail/%d/%n

My question: does any of you already have experience with such a setup on NFS? 
And are there any NFS tuning tips?
We don't have much experience with NFS right now.

Or is there perhaps some better solution?

Many thanks and regards
Urban
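In case it helps the archive: once ALT is configured in mail_location, old mails are typically migrated with doveadm altmove. A hedged sketch matching the "older than 6 months" idea above (check doveadm-search-query(1) on your version for the exact savedbefore interval syntax):

```
# Move all users' mails saved more than ~6 months ago to the ALT storage
doveadm altmove -A mailbox '*' savedbefore 180d
```

Run from cron, this keeps the hot SAS storage small while old mail migrates to the NFS-backed ALT path.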




[Dovecot] Apple IOS 7 Mail APP uses multi body searches by default

2013-09-24 Thread Urban Loesch

Hi,

today we found this blogpost:

http://blog.fastmail.fm/2013/09/17/ios-7-mail-app-uses-multi-folder-body-searches-by-default/

Do you have any idea whether this could impact the performance of Dovecot using 
the mdbox format with a 10MB per-file size and zlib enabled?


Thanks and regards
Urban Loesch


Re: [Dovecot] Where's Dovecot's ports?

2013-09-12 Thread Urban Loesch

What does netstat -tunplo say?



On 12.09.2013 12:44, Mohsen Pahlevanzadeh wrote:

I tested but i got such as nmap localhost
On Thu, 2013-09-12 at 12:20 +0200, Johan Hendriks wrote:

Mohsen Pahlevanzadeh wrote:

On Thu, 2013-09-12 at 08:33 +0200, Daniel Parthey wrote:

Hi Mohsen,

please post the output of doveconf -n

Regards
Daniel

i attached my doveconf -n

maybe Dovecot is not listening on localhost but on the interface 
IP address itself,
so nmap against the IP address would show different results than nmap localhost.

regards
Johan








Re: [Dovecot] Deleting messages with Mac Mail via IMAP

2013-09-03 Thread Urban Loesch

Hi,

if I remember correctly, there is no option in the IMAP protocol to move 
a message per se.


Moving a message to Trash or elsewhere will copy the message first and 
then mark it as deleted in the original folder. If you want it gone 
immediately after being stored in the other folder, your mail client 
must also expunge the original folder.
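On the wire, the copy-then-delete sequence described above looks roughly like this (a sketch; message UID 42 is hypothetical, EXPUNGE removes all \Deleted messages in the selected mailbox, and UID EXPUNGE from the UIDPLUS extension can target single messages):

```
a1 UID COPY 42 Trash
a2 UID STORE 42 +FLAGS (\Deleted)
a3 EXPUNGE
```

The MOVE extension (RFC 6851) folds this into a single command where both server and client support it.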


As I just said, I'm not sure if I remember correctly.

Regards
Urban

On 03.09.2013 19:11, Tim Schneider wrote:

Hello mailing list subscribers!

When I delete a message in Mac Mail 6.5 (OS X 10.8.4),
with the option to move messages to the trash set in Mac Mail account 
preferences,
the message is copied to the trash, and marked as trashed in the cur/inbox 
directory on the server (STa flag in the file name).

My horde webmail then displays this messages correctly as trashed in the inbox.
However, I want the message to be gone from the inbox.

Quitting Mac Mail deletes it in the inbox reliably.

Is there any IMAP setting for Dovecot to get move behavior instead of copy 
and mark as trashed?

Here are my dovecot settings:

# 2.1.7: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-4-amd64 x86_64 Debian 7.1 ext4
auth_mechanisms = plain login
mail_location = maildir:/var/mail/vhosts/%d/%n
mail_privileged_group = mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave
passdb {
   args = /etc/dovecot/dovecot-sql.conf.ext
   driver = sql
}
plugin {
   sieve = ~/.dovecot.sieve
   sieve_dir = ~/sieve
}
postmaster_address = ad...@xxx.xxx
protocols = imap pop3 lmtp sieve
service auth-worker {
   user = vmail
}
service auth {
   unix_listener /var/spool/postfix/private/auth {
 group = postfix
 mode = 0666
 user = postfix
   }
   unix_listener auth-userdb {
 mode = 0600
 user = vmail
   }
   user = dovecot
}
service imap-login {
   inet_listener imap {
 port = 0
   }
   inet_listener imaps {
 port = 993
 ssl = yes
   }
}
service lmtp {
   unix_listener /var/spool/postfix/private/dovecot-lmtp {
 group = postfix
 mode = 0600
 user = postfix
   }
}
service pop3-login {
   inet_listener pop3 {
 port = 0
   }
   inet_listener pop3s {
 port = 995
 ssl = yes
   }
}
ssl = required
ssl_cert = /etc/ssl/certs/mailportabile.pem
ssl_key = /etc/ssl/private/mailportabile.key
submission_host = localhost
userdb {
   args = uid=vmail gid=vmail home=/var/mail/vhosts/%d/%n
   driver = static
}
protocol lda {
   mail_plugins =  sieve
}



Re: [Dovecot] High Load Average on POP/IMAP.

2013-08-21 Thread Urban Loesch

Hi,

if you run the following command while the server has a high load:

# ps -ostat,pid,time,wchan='WCHAN-',cmd ax  |grep D

Do you get back something like this?

STAT   PID TIME WCHAN- CMD
D18713 00:00:00 synchronize_srcu   dovecot/imap
D18736 00:00:00 synchronize_srcu   dovecot/imap
D18775 00:00:05 synchronize_srcu   dovecot/imap
D20330 00:00:00 synchronize_srcu   dovecot/imap
D20357 00:00:00 synchronize_srcu   dovecot/imap
D20422 00:00:00 synchronize_srcu   dovecot/imap
D20687 00:00:00 synchronize_srcu   dovecot/imap
S+   20913 00:00:00 pipe_wait  grep D

If yes, it could be a problem with Inotify in your kernel. You can try to 
disable inotify
in the kernel with:

echo 0 > /proc/sys/fs/inotify/max_user_watches
echo 0 > /proc/sys/fs/inotify/max_user_instances

Full article:
http://thread.gmane.org/gmane.linux.kernel/1315430

For me this resolved the problem. Load went down to < 1.00.
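To make this survive a reboot, the same limits can be set via sysctl (a sketch; the key names mirror the /proc paths above):

```
# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
fs.inotify.max_user_watches = 0
fs.inotify.max_user_instances = 0
```

Apply with `sysctl -p` after editing.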


Regards
Urban




On 21.08.2013 12:37, Kavish Karkera wrote:

Hi,

We have a serious issue on our POP/IMAP servers these days. The load average 
of the servers
spikes up to 400-500 (as reported by uptime) for particular time periods, to 
be specific
mostly around noon and in the evening, but it lasts only a few minutes.

We have 2 servers running Dovecot 1.1.20 behind a load balancer; we have used 
keepalived (1.1.13) for
load balancing.

Server specification.
Operating System : CentOS 5.5 64bit
CPU cores : 16
RAM : 8GB

Mail and Indexes are mounted on NFS (NetApp).

Below is the dovecot -n ... (top results during high spike)


#

# 1.1.20: /usr/local/etc/dovecot.conf
# OS: Linux 2.6.28 x86_64 CentOS release 5.5 (Final)
log_path: /var/log/dovecot-info.log
info_log_path: /var/log/dovecot-info.log
syslog_facility: local1
protocols: imap imaps pop3 pop3s
listen(default): *:143
listen(imap): *:143
listen(pop3): *:110
ssl_listen(default): *:993
ssl_listen(imap): *:993
ssl_listen(pop3): *:995
ssl_cert_file: /usr/local/etc/ssl/certs/dovecot.pem
ssl_key_file: /usr/local/etc/ssl/private/dovecot.pem
disable_plaintext_auth: no
login_dir: /usr/local/var/run/dovecot/login
login_executable(default): /usr/local/libexec/dovecot/imap-login
login_executable(imap): /usr/local/libexec/dovecot/imap-login
login_executable(pop3): /usr/local/libexec/dovecot/pop3-login
login_greeting: Welcome to Popserver.
login_process_per_connection: no
max_mail_processes: 1024
mail_max_userip_connections(default): 100
mail_max_userip_connections(imap): 100
mail_max_userip_connections(pop3): 50
verbose_proctitle: yes
first_valid_uid: 99
first_valid_gid: 99
mail_location: maildir:~/Maildir:INDEX=/indexes/%h:CONTROL=/indexes/%h
mmap_disable: yes
mail_nfs_storage: yes
mail_nfs_index: yes
lock_method: dotlock
mail_executable(default): /usr/local/libexec/dovecot/imap
mail_executable(imap): /usr/local/libexec/dovecot/imap
mail_executable(pop3): /usr/local/libexec/dovecot/pop3
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/local/lib/dovecot/imap
mail_plugin_dir(imap): /usr/local/lib/dovecot/imap
mail_plugin_dir(pop3): /usr/local/lib/dovecot/pop3
pop3_no_flag_updates(default): no
pop3_no_flag_updates(imap): no
pop3_no_flag_updates(pop3): yes
pop3_lock_session(default): no
pop3_lock_session(imap): no
pop3_lock_session(pop3): yes
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls
lda:
   postmaster_address: ad...@research.com
   mail_plugins: cmusieve quota mail_log
   mail_plugin_dir: /usr/local/lib/dovecot/lda
   auth_socket_path: /var/run/dovecot/auth-master
auth default:
   worker_max_count: 15
   passdb:
 driver: sql
 args: /usr/local/etc/dovecot-mysql.conf
   userdb:
 driver: sql
 args: /usr/local/etc/dovecot-mysql.conf
   userdb:
 driver: prefetch
   socket:
 type: listen
 client:
   path: /var/run/dovecot/auth-client
   mode: 432
   user: nobody
   group: nobody
 master:
   path: /var/run/dovecot/auth-master
   mode: 384
   user: nobody
   group: nobody
plugin:
   quota_warning: storage=95%% /usr/local/bin/quota-warning.sh 95 %u
   quota_warning2: storage=80%% /usr/local/bin/quota-warning.sh 80 %u
   quota: maildir:storage=64
##

##

top - 12:08:31 up 206 days, 10:45,  3 users,  load average: 189.88, 82.07, 55.97
Tasks: 771 total,   1 running, 767 sleeping,   1 stopped,   2 zombie
Cpu(s):  8.3%us,  7.6%sy,  0.0%ni,  8.3%id, 75.0%wa,  0.0%hi,  0.8%si,  0.0%st
Mem:  16279824k total, 11913788k used,  4366036k free,   334308k buffers
Swap:  

[Dovecot] Zlib plugin: changing compression save level

2013-06-11 Thread Urban Loesch

Hi,

I have running dovecot 2.1.15 with zlib plugin enabled and zlib_save_level = 
6 since about 2 years now
without any problems.

What happens if I now change the zlib_save_level to 9?
Should that work without any problems, or do the currently saved *.m files
become incompatible or unreadable?

Thanks
Urban





Re: [Dovecot] My old email is not stored

2013-04-10 Thread Urban Loesch

Hi,

perhaps you are using POP3 in your mail client and have enabled the setting
to delete mails on the server after a few months.

Change to IMAP and all messages will be left on the server.

Regards
Urban



On 10.04.2013 11:20, HylkeB wrote:

Hi,

I don't have much experience with dovecot, postfix or debian, which I am
using for my email server.

But the thing is, everything worked just fine until I found out that old
emails were missing; only emails from the last few months exist on my server. I
have no idea if this is a setting in dovecot, postfix or debian, or whether
something weird happened.

But my question is: is there a setting in either dovecot, postfix or debian
that I should check that could cause this behaviour, and if so, how do I
check it? (opening and checking a conf file for example, or executing some
commands in PuTTY)

And if this is somehow default behaviour that cannot be changed, is there
some other way to store all emails ever entered my inbox?

Sincerely,

Hylke Bron






Re: [Dovecot] Please help to make decision

2013-03-30 Thread Urban Loesch

Hi,

we have a similar setup to Thierry's, but not as big: only 40k users and
1.2T of used space, with only 300 concurrent POP3 and 1600 IMAP sessions.
IMAP usage is increasing continuously.


Due to the fact that we have a low budget we implemented the following
small solution.


- 2 static IMAP/POP3 proxies (no director) load balanced with the well 
known CLUSTERIP module from iptables (poor man's load balancing, which works 
only in layer 2 environments; it works great for our needs and would be 
scalable too)

- 2 static SMTP relay servers load balanced the same way as above.
- 4 storage machines in an active/passive setup with DRBD on top of LVM2. 
On each active node 4-5 virtual containers are running (based on 
http://linux-vserver.org). All 40k accounts are spread across these 8 
containers. This has the advantage that we can quickly move a whole container 
from one storage machine to another without dsync if there is not enough 
space on some node.
- 2 MySQL master/master containers to store user information, which is then 
cached by dovecot itself. This extremely reduces database load.
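The CLUSTERIP arrangement described above amounts to one iptables rule per node; a sketch of what such a rule could look like, where the service IP, interface, and multicast MAC are illustrative values, not the poster's actual configuration:

```
# Run on node 1 of 2; node 2 uses the same rule with --local-node 2.
iptables -I INPUT -d 192.0.2.10 -i eth0 -p tcp -m multiport --dports 110,143 \
  -j CLUSTERIP --new --hashmode sourceip \
  --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1
```

Both nodes answer for the shared IP via a multicast MAC and each keeps only its hash buckets, which is why this only works within a single layer 2 segment, as the poster notes.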


All servers (proxies, relay servers, dovecot, mysql) are containers, so we 
can move them around on different hardware without changing any 
configuration. But this happens rarely.


Dovecot uses the mdbox storage format with compression enabled, no problems 
yet. Index and mdbox files are stored on different mount points. This
gives us the chance to move them easily to different spindles if we 
need to. In the future we plan to store indexes on SSDs and mdbox files on 
SATA drives, as in fact the main I/O happens on the index files and the
use of disk space is increasing.

As mentioned above, this is not a big setup, but for our needs it works
very well and is stable. It helps us save money and avoid problems with NFS, 
SANs, etc. And it can be scaled out very easily.


Regards
Urban

On 25.03.2013 19:47, Thierry de Montaudry wrote:

Hi Tigran,

Managing a mail system for 1M odd users, we ran for a few years on some 
high range SAN systems (NetApp, then EMC), but were not happy with the 
performance; whatever double head, fibre, and so on, it just couldn't handle 
the IOs. I should just say that at that time we were not using dovecot.

Then we moved to a completely different structure: 24 storage machines (plain 
CentOS as NFS servers), 7 frontend (webmail through IMAP + POP3 server) and 5 
MXs, and all front end machines running dovecot. That was a major change in 
system performance, but we were still not happy with the 50T total storage we had. 
Having huge traffic between the front end machines and storage, and at this time I 
was not sure the switches were handling the load properly. Not to mention the load 
on the front end machines, which sometimes needed a hard reboot to recover from NFS 
timeouts, even after trying some heavy optimizations all around, particularly 
on NFS.

Then we did look at the Dovecot director, but not sure how it would handle 1M 
users, we moved to the proxy solution: we are now running dovecot on the 24 
storage machines, our webmail system connecting with IMAP to the final storage 
machine, as well as the MXs with LMTP, we only use dovecot proxy for the POP3 
access on the 7 front end machines. And I must say, what a change. Since then 
the system is running smoothly, no more worries about NFS timeouts, and the 
loadavg on all machines is down to almost nothing, as well as the internal 
traffic on the switches and our stress. And most important, the feedback from 
our users told us that we did the right thing.

Only trouble: now and then we have to move users around, as if a machine gets 
full, the only solution is to move data to one that has more space. But this is 
achieved easily with the dsync tool.

This is just my experience, it might not be the best, but with the (limited) 
budget we had, we finally came up with a solution that can handle the load and 
got us away from SAN systems which could never handle the IOs for mail access. 
Just for the sake of it, our storage machines each have only 4 x 1T SATA drives 
in RAID 10 and 16G of mem, which I've been told would never do the job, but it 
just works. Thanks Timo.

Hoping this will help in your decision,

Regards,

Thierry


On 24 Mar 2013, at 18:12, Tigran Petrosyan petr...@gmail.com wrote:


Hi
We are going to implement Dovecot for 1 million users and will use more than
100T of storage space. We are currently examining 2 solutions: NFS or GFS2
(via Fibre Channel storage).
Can someone help us make a decision? What kind of storage solution can we use
to achieve good performance and scalability?




Re: [Dovecot] Dovecot SASL Client support?

2013-01-08 Thread Urban Loesch

On 08.01.2013 14:59, Charles Marcus wrote:

On 2013-01-08 8:46 AM, Reindl Harald h.rei...@thelounge.net wrote:


Am 08.01.2013 14:40, schrieb Charles Marcus:

Hi all,

I seem to recall mention of SASL client support being added, but can't 
remember for sure. The wiki says
nothing about client support (now, or in the future)...

http://wiki2.dovecot.org/Sasl

So - is there support for it now? If not, is it planned for anytime soon?

what exactly are you missing?

* dovecot can be used from postfix to replace cyrus sasl
   you even linked the documentation
   http://wiki2.dovecot.org/HowTo/PostfixAndDovecotSASL
* dovecot supports authentication for POP3/IMAP

so which sort of client support are you missing?


CLIENT support... do you not understand the difference?

http://www.postfix.org/SASL_README.html#client_sasl

So that postfix can use dovecot-sasl for remotely authenticating against 
another SMTP server, ie, for secure relays...



If I understand you right, you would like to make postfix authenticate against 
a remote smtp server and relay
all emails to it.


For this you don't need dovecot on your side.

I use it as follows on my debian lenny.

installed packages: postfix, libsasl2-modules

needed main.cf configuration options:

...
relayhost = [the.relayserver.com]
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_passwd
smtp_use_tls = yes
smtp_sasl_security_options = noanonymous
...

Content of /etc/postfix/relay_passwd (the lookup key must match the relayhost
value exactly, brackets included; run postmap /etc/postfix/relay_passwd after
editing):
[the.relayserver.com] u...@domain.com:PASSWORD

regards
Urban






Re: [Dovecot] Too many imap connections in state idling

2012-12-19 Thread Urban Loesch

Can you see other strange symptoms on that machine?
For example very high system load but not high I/O?

We had similar issues with imap processes hanging in D state some months ago.
The problem was, or is, a bug in the inotify mechanism in the linux kernel. Not 
sure whether the bug has been fixed yet.

For details see here:
http://www.dovecot.org/list/dovecot/2012-May/065884.html

and solution here:
http://www.dovecot.org/list/dovecot/2012-June/066314.html

Regards
Urban



On 18.12.2012 17:48, 3.lis...@adminlinux.com.br wrote:

Thanks Steffen Kaiser! I think not. Currently, 60% of the imap processes on the
server are always in state idling.

IDLE processes are like this:
root@server:/root# ps aux |grep imap
dovemail617  0.0  0.0  23136  2260 ?SDec15   0:01 dovecot/imap 
[Username1 IP1  IDLE]
dovemail677  0.0  0.0  23104  2172 ?SDec15   0:01 dovecot/imap 
[Username2 IP2 IDLE]
...

My idling processes are seen as follows:
root@server:~#  ps aux |grep imap |grep idling
dovemail   1141  0.0  0.0  16836  2148 ?DDec15   0:01 dovecot/imap 
[idling]
dovemail   3375  0.0  0.0  16828  2120 ?D15:48   0:00 dovecot/imap 
[idling]
dovemail   4833  0.0  0.0  16828  2212 ?D15:49   0:00 dovecot/imap 
[idling]
...

Thanks!
--
Thiago Henrique
www.adminlinux.com.br

On 14-12-2012 06:40, Steffen Kaiser wrote:


On Thu, 13 Dec 2012, 3.lis...@adminlinux.com.br wrote:


Is this large amount of connections in state 'idling' normal?


Are they actually using the IDLE command to wait for PUSH mails on many more 
folders than on the other server?

Regards,

- -- Steffen Kaiser





[Dovecot] Documentation of Redis and Memcache Backends

2012-12-06 Thread Urban Loesch

Hi,

in the release notes of 2.1.9 I read that dovecot supports memcached and redis 
backends for
userdb/passdb authentication. This is very interesting for me and should reduce 
queries and load on
our mysql servers.

My idea is to use memcached or redis on our IMAP/POP3 proxies in front of our 
backend servers, so I would like to try out whether it's possible to store 
proxy information for our backends, for example the backend IP address.

But in the wiki I found only a few configuration settings for the redis backend:
http://master.wiki2.dovecot.org/AuthDatabase/Dict

Also the mentioned example config file dovecot-dict-auth.conf.ext with the full 
list of configuration options
does not exist in the source of 2.1.11.

Do you have any idea where I can find the full info or any howtos?

Many thanks and regards
Urban


Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-06-20 Thread Urban Loesch

Hi,

yesterday I disabled inotify as mentioned in the previous post
and it works for me as well. Thanks to all for the hint.

On 20.06.2012 08:35, Jesper Dahl Nyerup wrote:

On Jun 11  23:37, Jesper Dahl Nyerup wrote:

We're still chasing the root cause in the kernel or the VServer patch
set. We'll of course make sure to post our findings here, and I'd very
much appreciate to hear about other people's progress.


We still haven't found a solution, but here's what we've got thus far:

  - The issue is not VServer specific. We're able to reproduce it on
recent vanilla kernels.

  - The issue has a strong correlation with the number of processor cores
in the machine. The behavior is impossible to provoke on a dual core
workstation, but is very widespread on 16 or 24 core machines.


For the records:
I have the problem on 2 different machines with different CPU's
- PE2950 with 2x Intel Xeon X5450 3.00GHz (8) CPUs (the problem happens less 
often than with the PER610)
- PER610 with 2x Intel Xeon X5650 2.67GHz (24) CPUs



One of my colleagues has written a snippet of code that reproduces and
exposes the problem, and we've sent this to the Inotify maintainers and
the kernel mailing list, hoping that someone more familiar with the code
will be quicker to figure out what is broken.

If anyone's interested - either in following the issue or the code
snippet that reproduces it - here's the post:
http://thread.gmane.org/gmane.linux.kernel/1315430


As you described on the kernel mailing list, I can confirm: the higher the
number of CPUs, the worse it gets.



As this is clearly a kernel issue, we're going to try to keep the
discussion there, and I'll probably not follow up here, until the issue
has been resolved.

Jesper.


Thanks
Urban


Re: [Dovecot] director and IPs shown at the backends

2012-06-07 Thread Urban Loesch


Hi,

try the login_trusted_networks option on the backends:

# Space separated list of trusted network ranges. Connections from these
# IPs are allowed to override their IP addresses and ports (for logging and
# for authentication checks). disable_plaintext_auth is also ignored for
# these networks. Typically you'd specify your IMAP proxy servers here.
login_trusted_networks =

But for POP3 this will only work with version 2.1.x.

regards
Urban



On 07.06.2012 13:52, Angel L. Mateo wrote:

Hello,

I am configuring a dovecot imap/pop servers with a dovecot director in front of 
them. Because I am using director proxy, connections in the backends
are show as coming from director IPs. Is there any way to configure director 
(or backends) so the backends know (and report) the original IP instead
of the director IP?



Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-05-20 Thread Urban Loesch



On 19.05.2012 21:05, Timo Sirainen wrote:

On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:


The server was running for about 1 year without any problems. The 15 min load 
was between 0.5 and max 8.
No high IOWAIT. CPU idle time about 98%.

..

#  iostat -k
Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4)   16.05.2012  _x86_64_
(24 CPU)


Did you change the kernel just before it broke? I'd try another version.





The first time it broke with kernel 2.6.38.8-vs2.3.0.37-rc17.
Then I tried it with 3.0.28 and it broke again.
On Friday evening I disabled the cgroup feature completely and until now 
it seems to work normally.
But this could be because it's the weekend and there are currently not many 
connections active, so I have
to wait until Monday. If it happens again I will try version 3.2.17.

On the other side it could be that the server is overloaded, because 
this problem happens only when there are
more than 1000 tasks active. That sounds strange to me, because it has been 
working without problems for 1 year
and we made no changes. Also, there were almost always more than 1000 tasks 
active over the last year and we had no problems.


thanks
Urban


Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-05-20 Thread Urban Loesch

Hi Javier,

thanks for your help.

On 20.05.2012 13:58, Javier Miguel Rodríguez wrote:

I know that you are NOT running RHEL / CentOS, but this problem with >1000
child processes bit us hard. Read this Red Hat kernel bugzilla (Timo has
comments inside):

https://bugzilla.redhat.com/show_bug.cgi?id=681578

Maybe you are hitting the same limit?



yes maybe.
The only strange thing is that I don't see any errors in my dovecot logs,
no errors like "Panic: epoll_ctl" or anything else.

I checked my kernel and the patch mentioned in
https://bugzilla.redhat.com/show_bug.cgi?id=681578 (comment 31) is not applied.
It comes in versions 3.0.30 and 3.2.17.

I will see what happens tomorrow under more load.
If I have the problem again, I'll give 3.2.17 a chance.

thanks
Urban



Regards

Javier

On 20/05/2012 11:59, Urban Loesch wrote:





[Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-05-16 Thread Urban Loesch
ii  debian-dovecot-auto-keyring 2010.01.30   GnuPG archive 
keys of the Automatic Dovecot Debian repository
ii  dovecot-common  2:2.0.13-0~auto+54   secure mail 
server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-imapd   2:2.0.13-0~auto+54   secure IMAP 
server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-lmtpd   2:2.0.13-0~auto+54   secure LMTP 
server for Dovecot
ii  dovecot-managesieved2:2.0.13-0~auto+54   secure 
ManageSieve server for Dovecot
ii  dovecot-mysql   2:2.0.13-0~auto+54   MySQL support 
for Dovecot
ii  dovecot-pop3d   2:2.0.13-0~auto+54   secure POP3 
server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-sieve   2:2.0.13-0~auto+54   sieve filters 
support for Dovecot

Dovecot Packages on both Proxies installed:
ii  debian-dovecot-auto-keyring 2010.01.30   GnuPG archive keys 
of the Automatic Dovecot
ii  dovecot-common  2:2.0.17-0~auto+4Transitional 
package for dovecot
ii  dovecot-core2:2.0.17-0~auto+4secure mail server 
that supports mbox, maild
ii  dovecot-gssapi  2:2.0.17-0~auto+4GSSAPI 
authentication support for Dovecot
ii  dovecot-imapd   2:2.0.17-0~auto+4secure IMAP server 
that supports mbox, maild
ii  dovecot-managesieved2:2.0.17-0~auto+4secure ManageSieve 
server for Dovecot
ii  dovecot-mysql   2:2.0.17-0~auto+4MySQL support for 
Dovecot
ii  dovecot-pgsql   2:2.0.17-0~auto+4PostgreSQL support 
for Dovecot
ii  dovecot-pop3d   2:2.0.17-0~auto+4secure POP3 server 
that supports mbox, maild
ii  dovecot-sieve   2:2.0.17-0~auto+4sieve filters 
support for Dovecot
ii  dovecot-sqlite  2:2.0.17-0~auto+4SQLite support for 
Dovecot


# doveconf -n
# 2.0.13 (02d97fb66047): /etc/dovecot/dovecot.conf
# OS: Linux 3.0.28-vs2.3.2.3-rol-em64t x86_64 Debian 6.0.2 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 12 hours
auth_mechanisms = plain login
auth_username_format = %Lu
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = 195.254.252.207 195.254.252.208 2a02:2e8:0:2:0:143:0:1 
2a02:2e8:0:2:0:143:0:2
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_plugins =  quota zlib
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
copy include variables body enotify environment mailbox date ihave imapflags notify

mdbox_rotate_size = 5 M
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 5
}
protocols = imap pop3 lmtp sieve
service auth-worker {
  process_min_avail = 12
  service_count = 0
  vsz_limit = 512 M
}
service imap-login {
  inet_listener imap {
port = 143
  }
  process_min_avail = 12
  service_count = 0
  vsz_limit = 256 M
}
service imap {
  process_min_avail = 12
  service_count = 0
  vsz_limit = 512 M
}
service lmtp {
  inet_listener lmtp {
address = *
port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0666
user = postfix
  }
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  process_min_avail = 12
  service_count = 0
  vsz_limit = 256 M
}
service pop3 {
  service_count = 0
  vsz_limit = 256 M
}
ssl = no
ssl_cert = /etc/ssl/certs/dovecot.pem
ssl_key = /etc/ssl/private/dovecot.pem
userdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
protocol lmtp {
  mail_plugins =  quota zlib sieve zlib
}
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  mail_max_userip_connections = 40
  mail_plugins =  quota zlib imap_quota imap_zlib
}
protocol pop3 {
  mail_plugins =  quota zlib
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, size=%s 
uidl_hash=%u
}

Thanks and regards
Urban Loesch


Re: [Dovecot] Question about folder creation/delete and logging

2012-03-12 Thread Urban Loesch


Hi,

perhaps the mail_log plugin is what you need.

Regards
Urban

On 12.03.2012 12:56, Maria Arrea wrote:

Hello

  We are working on a web based restore system for our Dovecot users. In this 
web form a user must log in and, after successful login, can restore a deleted 
folder from date X. We will release it under the GPL. I have a couple of 
questions:

  - Is there any way to make Dovecot log when a folder is deleted or created? We 
do not want to increase our normal logging level too much. We use Dovecot 
2.0.18+mdbox+zlib.
  - Does anybody know of any other project to create an easy-restore for 
Dovecot?

  Regards

  Maria



Re: [Dovecot] POP/IMAP on proxy rip issue

2012-02-27 Thread Urban Loesch

Same here on 2.0.x.
But I think this is because it's only implemented for IMAP.

See e-mail from Timo 2 days ago:

...

Subject: Proxying improvements in v2.1.2

I just committed a couple of features that will make life easier for some types 
of proxying setups:

1. IMAP proxying has already for a while supported sending local/remote IP/port to backend server, which can use it for logging and other purposes. 
I've now implemented this for POP3 as well, although only the remote IP/port is forwarded, not local IP/port. I implemented this also for LMTP in v2.2 
tree, but haven't bothered to backport that change. Both POP3 and LMTP uses XCLIENT command that is compatible to Postfix's (XCLIENT ADDR=1.2.3.4 
PORT=110).


2. proxy_maybe=yes + host=host.example.com actually works now. As long as host.example.com DNS lookup returns one IP that belongs to the current 
server the proxying is skipped.


3. auth_proxy_self = 1.2.3.4 setting means that if proxy_maybe=yes and host=1.2.3.4 then Dovecot assumes that this is a local login and won't proxy 
it, even if 1.2.3.4 isn't the actual local IP. This can be helpful if the host field contains load balancer's IP address instead of the server's. You 
can add more than one IP (space separated) and of course everything related to this works just as well with hostnames as with IPs (even when hostname 
expands to multiple IPs).
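For context, host and proxy_maybe are passdb extra fields, so with an SQL passdb the proxying decision can come straight from the database. A hedged sketch of such a query, where the table and column names are illustrative only:

```
# dovecot-sql.conf.ext (illustrative sketch)
password_query = SELECT password, host, 'Y' AS proxy_maybe \
  FROM users WHERE username = '%u'
```

With proxy_maybe, if the returned host resolves to one of the server's own IPs (or matches auth_proxy_self), the login is handled locally; otherwise it is proxied.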




regards
Urban


On 27.02.2012 16:30, Tomislav Mihalicek wrote:


I have a proxy setup for pop/imap. The proxies are defined in
login_trusted_networks = x.x.x.x, and for imap it works fine, but for pop3
connections it displays the IP address of the proxy. Both dovecots are 1.2
from the debian repo deb http://xi.rename-it.nl/debian/
stable-auto/dovecot-1.2 main

thanks


Re: [Dovecot] Proxy login failures

2012-01-10 Thread Urban Loesch



On 09.01.2012 23:39, Timo Sirainen wrote:

On 9.1.2012, at 22.23, Urban Loesch wrote:


I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
Since some days I see many of the following errors in the logs of the two 
proxy-servers:

dovecot: pop3-login: Error: proxy: Remote IPV6-IP:110 disconnected: Connection 
closed: Connection reset by peer (state=0): user=myuser, method=PLAIN, rip=remote-ip, 
lip=localip

When this happens the Client gets the following error from the proxy:
-ERR [IN-USE] Account is temporarily unavailable.

The connection to remote server dies before authentication finishes. The reason 
for why that happens should be logged by the backend server. Sounds like it 
crashes. Check for ANY error messages in backend servers.



I already did that, but I found nothing in the logs.


It's difficult to guess then. At the very least there should be an Info 
message about a new connection at the time when this failure happened. If there's not 
even that, then maybe the problem is network related.


No, there is nothing.




The only thing I can think of is that all 7 backend servers are virtual 
servers (using technology from http://linux-vserver.org) and they all run
on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS; load 
between 0.5 and 2.0, iowait about 1-5%). So they share the same kernel.


For testing, or what's the point in doing that? :) But the load is low enough 
that I doubt it has anything to do with it.


This is because the hardware is fast enough to handle about 40,000 mail accounts 
(both IMAP and POP). That tells me that dovecot is a really good piece 
of software; very performant in my eyes.





Also, all servers are connected to a mysql server running on a different 
machine in the same subnet. Could it be that either the kernel needs some tcp 
tuning, or perhaps the answers from the remote mysql server
could be too slow in some cases?


MySQL server problem would show up with a different error message. TCP tuning 
is also unlikely to help, since the connection probably dies within a second. 
Actually it would be a good idea to log the duration. This patch adds it:
http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6



I installed the patch on my proxies and I got this:

...
Jan 10 09:30:45 imap2 dovecot: pop3-login: Error: proxy: Remote IPV6-IP:110 disconnected: Connection closed: Connection reset by peer (state=0, 
duration=0s): user=myuser, method=PLAIN, rip=remote-ip, lip=local-ip


Jan 10 09:45:21 imap2 dovecot: pop3-login: Error: proxy: Remote IPV6-IP:110 disconnected: Connection closed: Connection reset by peer (state=0, 
duration=1s): user=myuser, method=PLAIN, rip=remote-ip, lip=local-ip

...

As you can see the duration is between 0 and 1 seconds.

During these errors a tcpdump was running on proxy #2 (imap2 in the above 
logs).
In the time range 09:30:45.00 - 09:30:46.00 I saw that the backend server 
reset the connection (RST flag set).
Since dovecot on the backend server writes nothing to the log, I think the 
connection is being reset at a lower level.

Here is what Wireshark tells me about that:

No. SourceTime   Destination   
Protocol Info
 101235 IPv6-Proxy-Server 2012-01-10 09:29:38.015073 IPv6-Backend-Server TCP  35341  pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 
TSV=1925901864 TSER=0 WS=7
 101236 IPv6-Backend-Server 2012-01-10 09:29:38.015157 IPv6-Proxy-Server TCP  pop3  35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 
SACK_PERM=1 TSV=309225565 TSER=1925901864 WS=7

 101248 IPv6-Proxy-Server 2012-01-10 09:29:38.233046 IPv6-Backend-Server POP
  [TCP ACKed lost segment] [TCP Previous segment lost] C: UIDL
 101249 IPv6-Backend-Server 2012-01-10 09:29:38.233312 IPv6-Proxy-Server POP
  S: +OK
 101250 IPv6-Proxy-Server 2012-01-10 09:29:38.233328 IPv6-Backend-Server TCP  35341  pop3 [ACK] Seq=57 Ack=50 Win=14464 Len=0 TSV=1925901886 
TSER=309225587

 101263 IPv6-Proxy-Server 2012-01-10 09:29:38.452210 IPv6-Backend-Server POP
  C: LIST
 101264 IPv6-Backend-Server 2012-01-10 09:29:38.452403 IPv6-Proxy-Server POP
  S: +OK 0 messages:
 101265 IPv6-Proxy-Server 2012-01-10 09:29:38.452426 IPv6-Backend-Server TCP  35341  pop3 [ACK] Seq=63 Ack=70 Win=14464 Len=0 TSV=1925901908 
TSER=309225609

 101324 IPv6-Proxy-Server 2012-01-10 09:29:38.671209 IPv6-Backend-Server POP
  C: QUIT
 101325 IPv6-Backend-Server 2012-01-10 09:29:38.671566 IPv6-Proxy-Server POP
  S: +OK Logging out.
 101326 IPv6-Proxy-Server 2012-01-10 09:29:38.671678 IPv6-Backend-Server TCP  35341  pop3 [FIN, ACK] Seq=69 Ack=89 Win=14464 Len=0 
TSV=1925901930 TSER=309225631
 101327 IPv6-Backend-Server 2012-01-10 09:29:38.671759 IPv6-Proxy-Server TCP  pop3  35341 [ACK] Seq=89 Ack=70 Win=14336 Len=0 TSV=309225631 
TSER=1925901930


 134205 IPv6-Proxy-Server 2012-01-10 09:30:45.477314 IPv6-Backend

[Dovecot] Proxy login failures

2012-01-09 Thread Urban Loesch

Hi,

I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
Since some days I see many of the following errors in the logs of the two 
proxy-servers:

...
dovecot: pop3-login: Error: proxy: Remote IPV6-IP:110 disconnected: Connection closed: Connection reset by peer (state=0): user=myuser, 
method=PLAIN, rip=remote-ip, lip=localip

...
dovecot: imap-login: Error: proxy: Remote IPV6-IP:143 disconnected: Connection closed: Connection reset by peer (state=0): user=myuser, 
method=PLAIN, rip=remote-ip, lip=localip


...


When this happens the Client gets the following error from the proxy:
-ERR [IN-USE] Account is temporarily unavailable.


System-details:
OS: Debian Linux
Proxy: 2.0.5-0~auto+23
Backend: 2.0.13-0~auto+54

Have you any idea what could cause this type of error?

Thanks and regards
Urban Loesch


doveconf -n from one of our backendservers:

# 2.0.13 (02d97fb66047): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.38.8-vs2.3.0.37-rc17-rol-em64t-timerp x86_64 Debian 6.0.2 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 12 hours
auth_mechanisms = plain login
auth_username_format = %Lu
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = our Proxy IP's (v4 and v6)
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_plugins =  quota mail_log notify zlib
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
copy include variables body enotify environment mailbox date ihave imapflags notify

mdbox_rotate_size = 5 M
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 5
}
protocols = imap pop3 lmtp sieve
service imap-login {
  inet_listener imap {
port = 143
  }
  service_count = 0
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
address = *
port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0666
user = postfix
  }
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  service_count = 0
  vsz_limit = 256 M
}
ssl = no
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
protocol lmtp {
  mail_plugins =  quota mail_log notify zlib sieve zlib
}
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  mail_max_userip_connections = 40
  mail_plugins =  quota mail_log notify zlib imap_quota imap_zlib
}
protocol pop3 {
  mail_plugins =  quota mail_log notify zlib
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, size=%s uidl_hash=%u
}



Re: [Dovecot] POP3 problems

2012-01-04 Thread Urban Loesch



Am 04.01.2012 19:11, schrieb sottile...@rfx.it:

On Wed, 4 Jan 2012, Timo Sirainen wrote:


Migrated a 1.0.2 server to 2.0.16 (same old box).
IMAP seems to be working OK.
POP3 gives problems with some clients (Outlook 2010 and Thunderbird
reported).

Seems to be an authentication problem.
Below is my doveconf -n (debug enabled, but no answer to the
problems found)


What do the logs say when a client logs in? The debug logs should 
tell everything.


Yes, but my problem is that this is a production server with a rapidly
growing log, so (given my limited Dovecot skills) I have some
difficulty selecting the interesting rows from it (I hoped this period would be
less busy, but my customers don't share that idea ... ;-) )

Thanks for hints in selecting interesting rows.


Try to run tail -f $MAILLOG | grep $USERNAME until the user logs in 
and tries to fetch his emails.


$MAILLOG = logfile to which dovecot logs all info
$USERNAME = Username of your client which has the problems

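The suggestion above can be sketched end to end as follows; the log path, the account name, and the sample log lines are hypothetical placeholders:

```shell
# Hypothetical log file and account name -- substitute your own.
MAILLOG=/tmp/dovecot.sample.log
USERNAME=problemuser

# Fabricated sample lines so the filter can be demonstrated.
cat > "$MAILLOG" <<'EOF'
dovecot: pop3-login: Login: user=<problemuser>, method=PLAIN, rip=192.0.2.10
dovecot: pop3(problemuser): Disconnected: Logged out top=0/0, retr=3/4096, del=3/3, size=4096
dovecot: imap-login: Login: user=<otheruser>, method=PLAIN, rip=192.0.2.11
EOF

# On a live server you would keep this running while the user fetches
# mail:  tail -f "$MAILLOG" | grep "$USERNAME"
# On the captured sample, plain grep shows the same filtering:
grep "$USERNAME" "$MAILLOG"
```

Only the two lines belonging to the affected account come through; the rest of the busy log is suppressed.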


doveconf: Warning: NOTE: You can get a new clean config file with:
doveconf -n > dovecot-new.conf


You should do this and replace your old dovecot.conf with the newly
generated one.



userdb {
 driver = passwd
}
userdb {
 driver = passwd
}


Also remove the duplicated userdb passwd.



This was an experimental configs manually derived from old 1.0.2 (mix 
of old working and new).


If I replace it with a new config (below), authentication seems OK,
but fetching mail from the client is very slow (compared with the old 1.0.2).


Thanks for your very fast support ;-)

P.



# doveconf -n
# 2.0.16: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final)
auth_mechanisms = plain login
disable_plaintext_auth = no
info_log_path = /var/log/mail/dovecot.info.log
log_path = /var/log/mail/dovecot.log
mail_full_filesystem_access = yes
mail_location = mbox:~/:INBOX=/var/mail/%u
mbox_read_locks = dotlock fcntl
passdb {
  driver = pam
}
protocols = imap pop3
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
userdb {
  driver = passwd
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}



Re: [Dovecot] Mail lost - maybe a bug???

2011-11-17 Thread Urban Loesch

Hi,


On 17.11.2011 17:47, Marco Carcano wrote:

Hello Timo, and thanks for your reply

I waited to reply until got it another time

as I already said, it does not happen very often, for example it happened on 12 
november - the log is at the end of this mail



Enable mail_log plugin to make sure of this.
http://wiki2.dovecot.org/Plugins/MailLog


I already did, but just for a few days: it does not happen very often that we
lose mails, and I'm afraid I could damage the disks of the production
server if I keep logging enabled for too long. It would be a pain; years
ago I had a server damaged because logging was left enabled for too long,
and I do not want to repeat such a painful experience.


I don't think logging is a major cause of disk damage.
I have had the mail_log plugin enabled since March 2011 without problems, and it
helps me very often in cases like this.







Oct 27 11:20:34 srv001 dovecot: lda(user3): 
msgid=e9447410-51fe-45ff-b624-197840b9a...@usstlz-pinfez02.emrsn.org

: saved mail to INBOX


If Dovecot logs this, then the message definitely was saved to INBOX.


That is exactly what I told my colleagues, but believe me, sometimes some mail
does get lost. I suspect, however, that it could be a misconfiguration of mine
somewhere, so that the LDA sometimes writes the email not to the right place but
elsewhere, while still writing the phrase "saved mail to INBOX" in the logs
(though I wonder why only sometimes?!?)



Could it be that some other person is downloading the mail via pop3 and the
client is then deleting it from the server?
This happens to me sometimes when a customer accesses his account with a
new client but forgot to disable the same account on his old PC.
The old PC then downloads all new mails and deletes them, so the
customer never sees new mails on his new client.
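One way to check this theory is to list the distinct client IPs that have logged into the account via pop3. A sketch; the sample log lines, file path, and account name below are hypothetical:

```shell
# Fabricated pop3 login lines; on a real server you would grep your
# actual mail log for the affected account instead.
cat > /tmp/pop3-logins.sample <<'EOF'
dovecot: pop3-login: Login: user=<customer>, method=PLAIN, rip=192.0.2.20, lip=203.0.113.5
dovecot: pop3-login: Login: user=<customer>, method=PLAIN, rip=198.51.100.99, lip=203.0.113.5
dovecot: pop3-login: Login: user=<customer>, method=PLAIN, rip=192.0.2.20, lip=203.0.113.5
EOF

# Extract the client IP (rip=) from each login and deduplicate; two or
# more distinct addresses would support the "old PC is still polling
# and deleting" explanation.
grep 'user=<customer>' /tmp/pop3-logins.sample \
  | sed 's/.*rip=\([^,]*\),.*/\1/' | sort -u
```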

Regards
Urban


I tried to find the missed email in the Maildir, but have not been able to get 
it - the commands used are

cd /home/mailboxstore/theuser/Maildir

grep 629222 */* |grep RE:
grep 629222 .Drafts/* |grep RE:
grep 629222 .Drafts/*/* |grep RE:
grep 629222 .Junk/* |grep RE:
grep 629222 .Posta\ eliminata/* |grep RE:
grep 629222 .Posta\ indesiderata/* |grep RE:
grep 629222 .Posta\ inviata/* |grep RE:
grep 629222 .Sent/* |grep RE:
grep 629222 .Templates/* |grep RE:
grep 629222 .Trash/* |grep RE:

and never got anything
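One thing worth noting about the per-folder grep commands above: every folder has to be listed by hand, so a folder missing from the list is never searched. A recursive grep covers the whole Maildir in one pass. A sketch against a throwaway directory (the folder and file names are made up); on the real server you would run the final command from /home/mailboxstore/theuser/Maildir:

```shell
# Build a throwaway Maildir-like tree just for the demonstration.
MDIR=$(mktemp -d)
mkdir -p "$MDIR/.Junk/cur" "$MDIR/cur"
printf 'Subject: RE: RFQ NO. 629222\n\nbody\n' > "$MDIR/.Junk/cur/1320000000.M1.host"
printf 'Subject: unrelated\n\nbody\n' > "$MDIR/cur/1320000001.M2.host"

# -r descends into every folder (including dot-prefixed ones),
# -l prints only the names of the matching files.
grep -rl '629222' "$MDIR"
```

This prints the path of the one message containing the RFQ number, wherever it sits in the folder tree.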

here is the log instead

Nov 12 08:48:01 srv001 postfix/smtpd[1430]: connect from 
mail.tasnee.com[62.3.52.58]
Nov 12 08:48:02 srv001 postfix/smtpd[1430]: 6C3874E4A9F: 
client=mail.tasnee.com[62.3.52.58]
Nov 12 08:48:03 srv001 postfix/cleanup[1434]: 6C3874E4A9F: warning: header 
Subject: RE: RFQ NO. 629222 - OUR OFFER NO. 2111221 from
mail.tasnee.com[62.3.52.58]; from=sen...@tasnee.com to=theu...@ourdomain.ch 
proto=ESMTP helo=mail.tasnee.com
Nov 12 08:48:03 srv001 postfix/cleanup[1434]: 6C3874E4A9F: 
message-id=899eab831ea7414f994704db43677a140450e...@npicmail.npic.com.sa
Nov 12 08:48:03 srv001 postfix/qmgr[4876]: 6C3874E4A9F: 
from=sen...@tasnee.com, size=9920, nrcpt=4 (queue active)
Nov 12 08:48:06 srv001 postfix/smtpd[1442]: connect from 
localhost.localdomain[127.0.0.1]
Nov 12 08:48:06 srv001 postfix/smtpd[1442]: 244774E4AA2: 
client=localhost.localdomain[127.0.0.1]
Nov 12 08:48:06 srv001 postfix/cleanup[1434]: 244774E4AA2: 
message-id=899eab831ea7414f994704db43677a140450e...@npicmail.npic.com.sa
Nov 12 08:48:06 srv001 postfix/qmgr[4876]: 244774E4AA2: 
from=sen...@tasnee.com, size=10323, nrcpt=4 (queue active)
Nov 12 08:48:06 srv001 postfix/smtpd[1442]: disconnect from 
localhost.localdomain[127.0.0.1]
Nov 12 08:48:06 srv001 amavis[8902]: (08902-05) Passed CLEAN, [62.3.52.58] [62.3.52.58] 
sen...@tasnee.com -
user2@ourdomain.local,theuser@ourdomain.local,user4@ourdomain.local,user3@ourdomain.local,
 Message-ID:
899eab831ea7414f994704db43677a140450e...@npicmail.npic.com.sa, mail_id: 
z4aAgl2gBrfV, Hits: -0.592, size: 9919, queued_as: 244774E4AA2, 2632 ms
Nov 12 08:48:06 srv001 postfix/lmtp[1438]: 6C3874E4A9F: to=user2@ourdomain.local, 
orig_to=us...@ourdomain.ch, relay=127.0.0.1[127.0.0.1]:10024,
delay=3.9, delays=1.2/0.01/0/2.6, dsn=2.0.0, status=sent (250 2.0.0 from 
MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 244774E4AA2)
Nov 12 08:48:06 srv001 postfix/lmtp[1438]: 6C3874E4A9F: to=theuser@ourdomain.local, 
orig_to=theu...@ourdomain.ch,
relay=127.0.0.1[127.0.0.1]:10024, delay=3.9, delays=1.2/0.01/0/2.6, dsn=2.0.0, 
status=sent (250 2.0.0 from MTA([127.0.0.1]:10025): 250 2.0.0 Ok:
queued as 244774E4AA2)
Nov 12 08:48:06 srv001 postfix/lmtp[1438]: 6C3874E4A9F: to=user4@ourdomain.local, 
orig_to=us...@ourdomain.ch, relay=127.0.0.1[127.0.0.1]:10024,
delay=3.9, delays=1.2/0.01/0/2.6, dsn=2.0.0, status=sent (250 2.0.0 from 
MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 244774E4AA2)
Nov 12 08:48:06 srv001 postfix/lmtp[1438]: 6C3874E4A9F: to=user3@ourdomain.local, 
orig_to=us...@ourdomain.ch, relay=127.0.0.1[127.0.0.1]:10024,
delay=3.9, 

[Dovecot] Question about pop3_reuse_xuidl

2011-11-15 Thread Urban Loesch

Hi,

we are in the process of migrating from CommuniGate Pro 5.0.x to Dovecot 2.0.15 with
mdbox.
We have already migrated about 25,000 IMAP accounts from CGP to Dovecot.

In addition, about 2,000 new POP3 accounts have been created directly on Dovecot.

Now we must migrate about 10,000 POP3 accounts from CGP to Dovecot.

At the beginning of our migration we didn't set the pop3_reuse_xuidl 
configuration option to yes.
Do you know what happens when we activate the pop3_reuse_xuidl option on our 
running dovecot?

Will Dovecot change the UIDL value for all existing mails that have X-UIDL
set in their mail headers,
or will it only change the UIDL value for newly received, not-yet-downloaded
mails?


Many thanks and regards
Urban Loesch


Re: [Dovecot] debug user's message retrieval

2011-09-09 Thread Urban Loesch

Hi,

perhaps the mail_log plugin can help you.


# mail_log plugin provides more event logging for mail processes.
plugin {
  # Events to log. Also available: flag_change append
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  # Group events within a transaction to one line.
  mail_log_group_events = no
  # Available fields: uid, box, msgid, from, subject, size, vsize, flags
  # size and vsize are available only for expunge and copy events.
  mail_log_fields = uid box msgid size from
}

...

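As a worked example of what this configuration produces, the resulting log lines can be filtered for deletions. The sample lines below are fabricated; the exact field layout depends on the mail_log_fields setting shown above:

```shell
# Fabricated mail_log output for one account; the fields follow the
# mail_log_fields setting shown above (uid, box, msgid, size, from).
cat > /tmp/maillog.sample <<'EOF'
dovecot: imap(customer): expunge: box=INBOX, uid=4711, msgid=<abc@example.com>, size=2048
dovecot: imap(customer): copy from INBOX: box=Trash, uid=4712, msgid=<def@example.com>
EOF

# Show only message deletions for the account in question.
grep 'imap(customer): expunge' /tmp/maillog.sample
```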
Regards
Urban

Костырев Александр Алексеевич wrote:

I forgot to mention that when I go to the user's directory, there are no
letters at all.

On Fri, 2011-09-09 at 13:30 +1100, Костырев Александр Алексеевич wrote:

Hi there!

Is there any method to log a user's activity with the pop3 service?

I'll try to explain situation:

In the maillog I saw that my Dovecot LMTP saved four letters in the user's
mailbox.
After a while I got a call from that user saying that he had received
nothing.

Is there any method to log that the user RETRieved every single letter,
maybe with the full message IDs or something like that?





Re: [Dovecot] Dovecot 1.2.17, Proxy and forwarding of remote ip

2011-07-19 Thread Urban Loesch

Hi,

try to put your proxy ip's in the login_trusted_networks configuration option.

...
# Space separated list of trusted network ranges. Connections from these
# IPs are allowed to override their IP addresses and ports (for logging and
# for authentication checks). disable_plaintext_auth is also ignored for
# these networks. Typically you'd specify your IMAP proxy servers here.
login_trusted_networks =
...
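As a sketch of what a filled-in value looks like (the addresses below are placeholders, not taken from the original message; substitute your own proxy IPs or CIDR ranges):

```
# Hypothetical proxy addresses -- replace with your own proxies.
login_trusted_networks = 192.0.2.10 192.0.2.11 2001:db8:100::/64
```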

Regards
Urban

Tomislav Mihalicek wrote:

Hi

I have a nicely working proxy setup and a postlogin script that does the
login logging:
echo $(date +%Y.%m.%d-%H:%M:%S), $USER, $IP, ${1} >> /var/log/mail/logins.info 2>&1

Is it possible to receive the remote IP of the user's client on the proxied
internal machine? Currently I get only the proxy's IP, and that data is not
useful to me.

thanx

t.


Re: [Dovecot] Panic: doveadm quota get -A

2011-06-28 Thread Urban Loesch

Many thanks, works.

Regards
Urban

Timo Sirainen wrote:

On Wed, 2011-06-22 at 10:02 +0200, Urban Loesch wrote:


#  doveadm quota get -A
doveadm: Panic: file doveadm-print-table.c: line 58 
(doveadm_calc_header_length): assertion failed: ((value_count % hdr_count) == 0)


Fixed: http://hg.dovecot.org/dovecot-2.0/rev/02d97fb66047







[Dovecot] Panic: doveadm quota get -A

2011-06-22 Thread Urban Loesch

Hi,

I'm new to the list and have been using Dovecot for 2 months. I'm still in the
process of migrating from Stalker (CommuniGate Pro) to Dovecot.

Today I upgraded from Dovecot 2.0.13-0~auto+27 (used from the mirror
xi.rename-it.nl - stable-auto) to 2:2.0.13-0~auto+48.

List of installed packages:
ii  dovecot-common  2:2.0.13-0~auto+48   secure mail 
server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-imapd   2:2.0.13-0~auto+48   secure IMAP 
server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-lmtpd   2:2.0.13-0~auto+48   secure LMTP 
server for Dovecot
ii  dovecot-managesieved2:2.0.13-0~auto+48   secure 
ManageSieve server for Dovecot
ii  dovecot-mysql   2:2.0.13-0~auto+48   MySQL support 
for Dovecot
ii  dovecot-pop3d   2:2.0.13-0~auto+48   secure POP3 
server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-sieve   2:2.0.13-0~auto+48   sieve filters 
support for Dovecot


It seems all is working fine, except doveadm quota get -A. It gives me the 
following error:

#  doveadm quota get -A
doveadm: Panic: file doveadm-print-table.c: line 58 
(doveadm_calc_header_length): assertion failed: ((value_count % hdr_count) == 0)
doveadm: Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x3fd0a) [0x7fb901d0fd0a] ->
/usr/lib/dovecot/libdovecot.so.0(default_fatal_handler+0x32) [0x7fb901d0fdf2] -> /usr/lib/dovecot/libdovecot.so.0(i_error+0) [0x7fb901ce916f] ->
doveadm() [0x416bed] -> doveadm(doveadm_print_flush+0x1f) [0x40f1cf] -> doveadm() [0x40a92d] -> doveadm(doveadm_mail_try_run+0x11c) [0x40acfc] ->
doveadm(main+0x381) [0x410761] -> /lib/libc.so.6(__libc_start_main+0xfd) [0x7fb901581c4d] -> doveadm() [0x409e09]

Aborted

I also tried versions 2.0.13-0~auto+45 - 47 from xi.rename-it.nl. Same
thing.

Do you have any idea how I can fix this?
Downgrading to 2.0.13-0~auto+27 is not possible because I need the fix for this
error: http://hg.dovecot.org/dovecot-2.0/rev/09b8701362a4

Many thanks and regards
Urban Loesch