Re: [Dovecot] Subfolders problem

2013-02-05 Thread Adam Maciejewski
# 2.1.10: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.1
auth_debug = yes
auth_master_user_separator = *
auth_username_format = %Ln
default_vsz_limit = 2 G
dict {
  sieve = mysql:/etc/dovecot/dict-sql-sieve.conf
  sieve_dir = mysql:/etc/dovecot/dict-sql-sieve_dir.conf
}
disable_plaintext_auth = no
dotlock_use_excl = no
dsync_remote_cmd = ssh -l%{login} %{host} doveadm dsync-server -u%u -l5
log_path = /var/log/dovecot.log
log_timestamp = "%Y-%m-%d %H:%M:%S "
login_greeting = xbsd it solutions
mail_access_groups = mail
mail_debug = yes
mail_location = mdbox:~/mdbox:ALT=/var/vmail/alt/%n
mail_plugins = acl
namespace {
  hidden = no
  list = children
  location = maildir:/var/vmail/public/xbsd/Maildir:INDEX=~/mdbox/public/xbsd:CONTROL=~/mdbox/public/xbsd
  prefix = xbsd/
  separator = /
  subscriptions = no
  type = public
}
namespace deleted {
  hidden = yes
  list = no
  location = mdbox:~/mdbox:ALT=/var/vmail/alt/%n:MAILBOXDIR=deleted:SUBSCRIPTIONS=subscriptions-deleted
  prefix = .DELETED/
  separator = /
  type = private
}
namespace deleted_expunged {
  hidden = yes
  list = no
  location = mdbox:~/mdbox:ALT=/var/vmail/alt/%n:MAILBOXDIR=deleted_expunged:SUBSCRIPTIONS=subscriptions-deleted_expunged
  prefix = .DELETED/.EXPUNGED/
  separator = /
  type = private
}
namespace expunged {
  hidden = yes
  list = no
  location = mdbox:~/mdbox:ALT=/var/vmail/alt/%n:MAILBOXDIR=expunged:SUBSCRIPTIONS=subscriptions-expunged
  prefix = .EXPUNGED/
  separator = /
  type = private
}
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
  separator = /
}
passdb {
  args = scheme=CRYPT username_format=%u /etc/dovecot/master.users.%s
  driver = passwd-file
  master = yes
  pass = yes
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
plugin {
  acl = vfile
  lazy_expunge = .EXPUNGED/
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename flag_change append save mailbox_create
  mail_log_fields = uid box msgid size vsize flags from subject
  mail_replica = remote:mail-storage-2.atm
  sieve = dict:proxy::sieve;name=active;bindir=~/.sieve-bin
  sieve_dir = dict:proxy::sieve_dir;bindir=~/.sieve-bin
}
protocols = imap lmtp pop3
service aggregator {
  fifo_listener replication-notify-fifo {
mode = 0666
  }
  unix_listener replication-notify {
mode = 0666
  }
}
service auth {
  executable = /usr/lib/dovecot/auth
  unix_listener auth-client {
mode = 0660
  }
  unix_listener auth-master {
mode = 0600
  }
  unix_listener auth-userdb {
mode = 0666
  }
  user = root
}
service dict {
  unix_listener dict {
mode = 0666
  }
}
service doveadm {
  process_min_avail = 1
  service_count = 1024
}
service imap-login {
  chroot = login
  executable = /usr/lib/dovecot/imap-login
  inet_listener imap {
address = *
port = 143
  }
  service_count = 0
  user = dovecot
}
service imap {
  executable = /usr/lib/dovecot/imap
}
service lmtp {
  inet_listener lmtp {
port = 24
  }
  process_limit = 48
}
service pop3-login {
  chroot = login
  executable = /usr/lib/dovecot/pop3-login
  inet_listener pop3 {
address = *
port = 110
  }
  service_count = 0
  user = dovecot
}
service pop3 {
  executable = /usr/lib/dovecot/pop3
}
service replicator {
  process_min_avail = 1
}
ssl = no
userdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
protocol doveadm {
  namespace deleted {
list = yes
location =
prefix =
  }
  namespace deleted_expunged {
list = yes
location =
prefix =
  }
  namespace expunged {
list = yes
location =
prefix =
  }
}
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  mail_plugins = acl lazy_expunge notify replication
}
protocol lmtp {
  mail_plugins = acl notify replication sieve
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}
protocol lda {
  lda_mailbox_autocreate = yes
  lda_mailbox_autosubscribe = yes
  mail_plugins = notify mail_log sieve acl
}

2013/2/5 Timo Sirainen t...@iki.fi

 On Mon, 2013-02-04 at 09:51 +0100, Adam Maciejewski wrote:
  I have moved from Dovecot 1.x to 2.x and I have a big problem with
  subfolders.
  When I move a subfolder that contains other subfolders, only the main
  subfolder is moved, without its subfolders. Example:

 What v2.x exactly? Looks like a bug. Probably fixed in a newer version.
 If you already have a recent v2.1, show your doveconf -n.

  mail-storage-1 /var/vmail/home/adamskitest/mdbox/mailboxes # find | egrep
  -e "janusz|jarek"
  ./jarek
  ./jarek/dbox-Mails
  ./jarek/dbox-Mails/dovecot.index.log
  ./jarek/jarek2
  ./jarek/jarek2/dbox-Mails
  ./jarek/jarek2/dbox-Mails/dovecot.index.log
  ./jarek/jarek2/jarek3
  ./jarek/jarek2/jarek3/dbox-Mails
  ./jarek/jarek2/jarek3/dbox-Mails/dovecot.index.log
  

Re: [Dovecot] Upgrade from 2.1.13 to 2.1.14 and load doubled

2013-02-05 Thread Alessio Cecchi

On 04/02/2013 16:44, Timo Sirainen wrote:

On 4.2.2013, at 17.38, Alessio Cecchi ales...@skye.it wrote:


Feb 04 14:03:56 imap(x...@.com): Error: read() failed: No such file or 
directory
Feb 04 14:06:55 imap(x...@.com): Error: read() failed: No such file or 
directory
Feb 04 14:09:09 imap(z...@.it): Error: read() failed: No such file or 
directory


Well .. Those errors should go away if you revert this patch:

http://hg.dovecot.org/dovecot-2.1/raw-rev/2b76d357a56a

But does it fix the performance? ..


No, but this should: http://hg.dovecot.org/dovecot-2.1/rev/443ff272317f




Thanks Timo,

tonight I will apply the second patch to 2.1.14 for the performance problem.

Should I also remove the patch 2b76d357a56a, or is it only optional/for testing?


Removing it removes those errors from the logs, but other than that it shouldn't 
make any difference. Instead of removing it, you could also apply these patches 
that should remove them more correctly:

http://hg.dovecot.org/dovecot-2.1/rev/93633121bc9d
http://hg.dovecot.org/dovecot-2.1/rev/b15a98fd8e15



Even after applying these patches, the load situation is the same.


If there are no other solutions I will return to 2.1.13 ASAP.

Thanks
--
Alessio Cecchi is:
@ ILS - http://www.linux.it/~alessice/
on LinkedIn - http://www.linkedin.com/in/alessice
Assistenza Sistemi GNU/Linux - http://www.cecchi.biz/
@ PLUG - ex-Presidente, adesso senatore a vita, http://www.prato.linux.it


Re: [Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?

2013-02-05 Thread Jan-Frode Myklebust
I think there must be some bug I'm hitting here. One of my directors
is still running with client_limit = 1, process_limit = 100 for the
lmtp service, and now it's logging:

   master: Warning: service(lmtp): process_limit (100) reached, client
connections are being dropped

Checking "sudo netstat -anp | grep :24" I see 287 ports in TIME_WAIT,
one in CLOSE_WAIT and the listening 0.0.0.0:24. No active
connections. There are 100 lmtp-processes running. When trying to
connect to the lmtp-port I immediately get dropped:

$ telnet localhost 24
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
Connection closed by foreign host.

Is there maybe some counter that's getting out of sync, or some back
off penalty algorithm that kicks in when it first hit the process
limit ?


  -jf


Re: [Dovecot] Errors with doveadm when using checkpassword

2013-02-05 Thread Andy Dills
On Tue, 5 Feb 2013, Timo Sirainen wrote:

 I think you need to remove doveadm_proxy_port from the backend
 dovecot.conf. Then it doesn't perform the PASS lookup. But you also
 should run doveadm via the proxy instance so that it gets run in the
 correct server (doveadm -c /etc/dovecot/proxy.conf or doveadm -i proxy
 if you've given it a name).

On a separate note, I'm sure a lot of people would benefit from -c/-i being 
mentioned on http://wiki2.dovecot.org/Tools/Doveadm. 

You are one man with only so much time so I tried registering on the wiki 
to propose an edit for you, but I'm not allowed. I think all we need is to 
know that -c and -i exist, and a note about how people in proxy/director 
configurations need to make sure to tell doveadm to communicate with the 
instance that is running director. 

For some reason, my intuition would be that since doveadm is aware of both 
instances, it should know which one's config to use for connecting to the 
director for proxy information. 
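For illustration, the wiki note could be as short as the following (the config path and instance name are the examples already used in this thread; "who" stands in for any doveadm command):

```
# Run doveadm against the proxy/director instance, not the backend:
doveadm -c /etc/dovecot/proxy.conf who
# or, if the instance has been given a name:
doveadm -i proxy who
```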

Thanks,
Andy

---
Andy Dills
Xecunet, Inc.
www.xecu.net
301-682-9972
---


[Dovecot] Header is huge in fts-solr

2013-02-05 Thread Valery V. Sedletski
Hi, Timo and all!

I am trying to index mail in a test mailbox using the fts_solr plugin for
full-text search. On most mailboxes it works fine, but on some big messages
I get warnings like the following, then an Out of memory error from
Solr, and then the indexer-worker process (or doveadm) crashes with an
assertion-failed error and this backtrace:

==
doveadm(valer...@test.afterlogic.com): Warning:
fts-solr(valer...@test.afterlogic.com): Mailbox gmail.com UID=48 header
size is huge
doveadm(valer...@test.afterlogic.com): Warning:
fts-solr(valer...@test.afterlogic.com): Mailbox gmail.com UID=49 header
size is huge
doveadm(valer...@test.afterlogic.com): Panic: file
../../../../src/plugins/fts-solr/solr-connection.c: line 548
(solr_connection_post_more): assertion failed: (maxfd >= 0)
doveadm(valer...@test.afterlogic.com): Error: Raw backtrace:
/usr/mailsuite/lib/dovecot/libdovecot.so.0(+0x58f04) [0x7fe8a908af04] ->
/usr/mailsuite/lib/dovecot/libdovecot.so.0(default_error_handler+0)
[0x7fe8a908af93] -> /usr/mailsuite/lib/dovecot/libdovecot.so.0(i_fatal+0)
[0x7fe8a908b274] ->
/usr/mailsuite/lib/dovecot/lib21_fts_solr_plugin.so(solr_connection_post_more+0x2d2)
[0x7fe8a75fe973] ->
/usr/mailsuite/lib/dovecot/lib21_fts_solr_plugin.so(+0x4d03)
[0x7fe8a75f9d03] ->
/usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(fts_backend_update_build_more+0x77)
[0x7fe8a7c1d401] -> /usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(+0x7fe2)
[0x7fe8a7c1dfe2] -> /usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(+0x80d5)
[0x7fe8a7c1e0d5] -> /usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(+0x89e4)
[0x7fe8a7c1e9e4] ->
/usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(fts_build_mail+0x2b)
[0x7fe8a7c1ebf5] -> /usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(+0xe7cf)
[0x7fe8a7c247cf] -> /usr/mailsuite/lib/dovecot/lib20_fts_plugin.so(+0xe8ba)
[0x7fe8a7c248ba] ->
/usr/mailsuite/lib/dovecot/libdovecot-storage.so.0(mail_precache+0x25)
[0x7fe8a9379bc9] -> /usr/mailsuite/bin/doveadm() [0x4139de] ->
/usr/mailsuite/bin/doveadm() [0x413c17] -> /usr/mailsuite/bin/doveadm()
[0x413f18] -> /usr/mailsuite/bin/doveadm() [0x40fea6] ->
/usr/mailsuite/bin/doveadm(doveadm_mail_single_user+0x154) [0x410069] ->
/usr/mailsuite/bin/doveadm() [0x41090a] ->
/usr/mailsuite/bin/doveadm(doveadm_mail_try_run+0xac) [0x410b81] ->
/usr/mailsuite/bin/doveadm(main+0x28d) [0x41a92c] ->
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7fe8a8cc9ead] ->
/usr/mailsuite/bin/doveadm() [0x40f499]
==

And Solr log, at the same time:
==
2013-02-01 18:03:53.342:INFO::Logging to STDERR via
org.mortbay.log.StdErrLog
2013-02-01 18:03:53.425:INFO::jetty-6.1-SNAPSHOT
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or
JNDI)
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
INFO: new SolrResourceLoader for deduced Solr Home: 'solr/'
01.02.2013 18:03:53 org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or
JNDI)
01.02.2013 18:03:53 org.apache.solr.core.CoreContainer$Initializer
initialize
INFO: looking for solr.xml:
/home/valerius/apache-solr-3.6.2/example/solr/solr.xml
01.02.2013 18:03:53 org.apache.solr.core.CoreContainer load
INFO: Loading CoreContainer using Solr Home: 'solr/'
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
INFO: new SolrResourceLoader for directory: 'solr/'
01.02.2013 18:03:53 org.apache.solr.core.CoreContainer create
INFO: Creating SolrCore '' using instanceDir: solr/.
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
INFO: new SolrResourceLoader for directory: 'solr/./'
01.02.2013 18:03:53 org.apache.solr.core.SolrConfig initLibs
INFO: Adding specified lib dirs to ClassLoader
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
replaceClassLoader
INFO: Adding
'file:/home/valerius/apache-solr-3.6.2/dist/apache-solr-cell-3.6.2.jar' to
classloader
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
replaceClassLoader
INFO: Adding
'file:/home/valerius/apache-solr-3.6.2/contrib/extraction/lib/poi-ooxml-3.8-beta4.jar'
to classloader
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
replaceClassLoader
INFO: Adding
'file:/home/valerius/apache-solr-3.6.2/contrib/extraction/lib/jdom-1.0.jar'
to classloader
01.02.2013 18:03:53 org.apache.solr.core.SolrResourceLoader
replaceClassLoader
INFO: Adding

Re: [Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?

2013-02-05 Thread Timo Sirainen
On 5.2.2013, at 11.57, Jan-Frode Myklebust janfr...@tanso.net wrote:

 I think there must be some bug I'm hitting here. One of my directors
 is still running with client_limit = 1, process_limit = 100 for the
 lmtp service, and now it's logging:
 
   master: Warning: service(lmtp): process_limit (100) reached, client
 connections are being dropped
 
 Checking sudo netstat -anp|grep :24  I see 287 ports in TIME_WAIT,
 one in CLOSE_WAIT and the listening 0.0.0.0:24. No active
 connections. There are 100 lmtp-processes running.

Sounds like the LMTP processes are hanging for some reason.. 
http://hg.dovecot.org/dovecot-2.1/rev/63117ab893dc might show something 
interesting, although I'm pretty sure it will just say that the processes are 
hanging in DATA command.

Other interesting things to check:

gdb -p <pid of lmtp process>
bt full

strace -tt -p <pid of lmtp process> (for a few seconds to see if anything is 
happening)

If lmtp proxy is hanging, it should have a timeout (default 30 secs) and it 
should log about it if it triggers. (Although maybe not to error log.)

 When trying to
 connect to the lmtp-port I immediately get dropped:
 
 $ telnet localhost 24
 Trying 127.0.0.1...
 Connected to localhost.localdomain (127.0.0.1).
 Escape character is '^]'.
 Connection closed by foreign host.

This happens when the master process notices that all the service processes are 
full.

 Is there maybe some counter that's getting out of sync, or some back
 off penalty algorithm that kicks in when it first hit the process
 limit ?

Shouldn't be, but the proctitle patch should make it clearer. Strange anyway, I 
haven't heard of anything like this happening before.

Re: [Dovecot] Upgrade from 2.1.13 to 2.1.14 and load doubled

2013-02-05 Thread Timo Sirainen
On 5.2.2013, at 11.07, Alessio Cecchi ales...@skye.it wrote:

 But does it fix the performance? ..
 
 No, but this should: http://hg.dovecot.org/dovecot-2.1/rev/443ff272317f
 
 Also after the application of these patches the situation of the load is the 
 same.
 
 If there are no other solutions I will return to 2.1.13 ASAP.

You definitely included the above patch? That's the important one. And it 
really should have fixed the performance problems..



Re: [Dovecot] Upgrade from 2.1.13 to 2.1.14 and load doubled

2013-02-05 Thread Alessio Cecchi

On 05/02/2013 16:46, Timo Sirainen wrote:

On 5.2.2013, at 11.07, Alessio Cecchi ales...@skye.it wrote:


But does it fix the performance? ..


No, but this should: http://hg.dovecot.org/dovecot-2.1/rev/443ff272317f


Also after the application of these patches the situation of the load is the 
same.

If there are no other solutions I will return to 2.1.13 ASAP.


You definitely included the above patch? That's the important one. And it 
really should have fixed the performance problems..



Sure Timo, during the afternoon the situation seems better; I'll wait a 
few days of testing.


Thanks
--
Alessio Cecchi is:
@ ILS - http://www.linux.it/~alessice/
on LinkedIn - http://www.linkedin.com/in/alessice
Assistenza Sistemi GNU/Linux - http://www.cecchi.biz/
@ PLUG - ex-Presidente, adesso senatore a vita, http://www.prato.linux.it


[Dovecot] Out of memory after upgrading to dovecot 2.0.21

2013-02-05 Thread Arnaud Abélard

Hello,

I've upgraded our servers and now dovecot seems to be running out of 
memory when accessing my own mailbox (and only mine, which is in a way 
pretty fortunate):


dovecot: imap(abelard-a): Error: mmap() failed with file 
/vmail/a/b/abelard-a/dovecot.index.cache: Cannot allocate memory


imap(abelard-a): Fatal: pool_system_realloc(131072): Out of memory

/vmail/a/b/abelard-a/dovecot.index.cache is 224MB big and my mailbox is 
72% full of 8GB.


any idea of what is happening?

Thanks in advance,

Arnaud


--
Arnaud Abélard
jabber: arnaud.abel...@univ-nantes.fr / twitter: ArnY
Administrateur Système
DSI Université de Nantes
-



Re: [Dovecot] Out of memory after upgrading to dovecot 2.0.21

2013-02-05 Thread Arnaud Abélard
Hrm... exactly when I pressed send, dovecot started giving me more 
information:


master: Error: service(imap): child 22324 returned error 83 (Out of 
memory (service imap { vsz_limit=256 MB }, you may need to increase it))


Is that why the imap processes were running out of memory, or is that 
just another symptom of the same problem? How come we never had that 
problem with the older version of dovecot 2.0? (Sorry, I can't remember 
which one we were running...)


Arnaud




On 02/05/2013 06:33 PM, Arnaud Abélard wrote:

Hello,

I've upgraded our servers and now dovecot seems to be running out of
memory when accessing my own mailbox (and only mine, which is in a way
pretty fortunate):

dovecot: imap(abelard-a): Error: mmap() failed with file
/vmail/a/b/abelard-a/dovecot.index.cache: Cannot allocate memory

imap(abelard-a): Fatal: pool_system_realloc(131072): Out of memory

/vmail/a/b/abelard-a/dovecot.index.cache is 224MB big and my mailbox is
72% full of 8GB.

any idea of what is happening?

Thanks in advance,

Arnaud





--
Arnaud Abélard
jabber: arnaud.abel...@univ-nantes.fr / twitter: ArnY
Administrateur Système
DSI Université de Nantes
-



Re: [Dovecot] Out of memory after upgrading to dovecot 2.0.21

2013-02-05 Thread Reindl Harald
hard to say without config details

shared process or one per connection?
in case of a shared process, 256 MB is really really low

On 05.02.2013 18:38, Arnaud Abélard wrote:
 Hrm... exactly when I pressed sent, dovecot started giving me more 
 information:
 
 master: Error: service(imap): child 22324 returned error 83 (Out of memory 
 (service imap { vsz_limit=256 MB }, you
 may need to increase it))
 
 Is that why the imap processes were running out of memory or is that just 
 another symptom of the same problem? How
 come we never had that problem with the older version of dovecot 2.0 (sorry, 
 i can't remember which one we were
 running...).
 
 Arnaud
 
 On 02/05/2013 06:33 PM, Arnaud Abélard wrote:
 Hello,

 I've upgraded our servers and now dovecot seems to be running out of
 memory when accessing my own mailbox (and only mine, which is in a way
 pretty fortunate):

 dovecot: imap(abelard-a): Error: mmap() failed with file
 /vmail/a/b/abelard-a/dovecot.index.cache: Cannot allocate memory

 imap(abelard-a): Fatal: pool_system_realloc(131072): Out of memory

 /vmail/a/b/abelard-a/dovecot.index.cache is 224MB big and my mailbox is
 72% full of 8GB.

 any idea of what is happening?



signature.asc
Description: OpenPGP digital signature


Re: [Dovecot] Out of memory after upgrading to dovecot 2.0.21

2013-02-05 Thread Arnaud Abélard

On 02/05/2013 06:44 PM, Reindl Harald wrote:

hard to say without config-details

shared prcoess or one per connection?


One per connection, around 1000 connections right now.


in case of shared 256 MB is really really low


I changed the vsz_limit to 512MB and it seems better, but I'm still 
surprised my mailbox actually hit the memory limit, since I doubt it's 
the most used one.


Arnaud





On 05.02.2013 18:38, Arnaud Abélard wrote:

Hrm... exactly when I pressed sent, dovecot started giving me more information:

master: Error: service(imap): child 22324 returned error 83 (Out of memory 
(service imap { vsz_limit=256 MB }, you
may need to increase it))

Is that why the imap processes were running out of memory or is that just 
another symptom of the same problem? How
come we never had that problem with the older version of dovecot 2.0 (sorry, i 
can't remember which one we were
running...).

Arnaud

On 02/05/2013 06:33 PM, Arnaud Abélard wrote:

Hello,

I've upgraded our servers and now dovecot seems to be running out of
memory when accessing my own mailbox (and only mine, which is in a way
pretty fortunate):

dovecot: imap(abelard-a): Error: mmap() failed with file
/vmail/a/b/abelard-a/dovecot.index.cache: Cannot allocate memory

imap(abelard-a): Fatal: pool_system_realloc(131072): Out of memory

/vmail/a/b/abelard-a/dovecot.index.cache is 224MB big and my mailbox is
72% full of 8GB.

any idea of what is happening?





--
Arnaud Abélard
jabber: arnaud.abel...@univ-nantes.fr / twitter: ArnY
Administrateur Système
DSI Université de Nantes
-



[Dovecot] Per user special-use folder names

2013-02-05 Thread Radek Novotný

Hi all,

let me ask a question, please. Is it possible in dovecot to set up 
per-user special-use folder names?


Imagine a situation with two users where the first prefers a different 
language than the second.


mailbox Sent {
special_use = \Sent
}

for english speaking users and

mailbox "Odeslaná pošta" {
special_use = \Sent
}

for czech speaking users.



Thanks for your answers. Radek


Re: [Dovecot] Using Mutt - folder atime/mtime etc

2013-02-05 Thread David Woodfall

On Mon, Feb 04, 2013 at 11:56:53AM +0100, Thomas Leuxner t...@leuxner.net put 
forth the proposition:

* David Woodfall d...@dawoodfall.net 2013.02.04 03:19:


It looks as if the folder_format option of mutt
is only for local folders, and will not work for IMAP.

I was grasping at straws and hoping someone here may know.


As mentioned this may be one for the mutt list. But it should work with IMAP:

set imap_user=u...@domain.tld
set folder=imap://host.domain.tld/
set spoolfile=imap://host.domain.tld/INBOX
set index_format="%4C %Z %2M %[!%Y.%m.%d %H:%M]  %-30.30F (%5c) %s"


After much messing about with mutt and suggestions on the list, it
seems that when mutt displays the imap mailbox list it works, but the
normal folder browser doesn't work with %N.

Thanks for your help


Re: [Dovecot] Out of memory after upgrading to dovecot 2.0.21

2013-02-05 Thread Stan Hoeppner
On 2/5/2013 12:15 PM, Arnaud Abélard wrote:

 I changed the vsz_limit to 512MB and it seems better, but I'm still
 surprised my mailbox actually hit the memory limit since I doubt it's
 most used one.

According to the wiki, vsz_limit only affects login processes, not IMAP
processes, which is odd given the second error you posted, which seems
to indicate a relationship between vsz_limit and imap.

 dovecot: imap(abelard-a): Error: mmap() failed with file
 /vmail/a/b/abelard-a/dovecot.index.cache: Cannot allocate memory

 imap(abelard-a): Fatal: pool_system_realloc(131072): Out of memory

This error suggests your mail_process_size variable is set too low.  If
it is currently set to 128MB, increase it to 512MB.  The bulk of this
memory is used for mmap()ing files, and is virtual, not physical.  Thus
you don't actually need 51GB of RAM to support 100 users.

Also, a dovecot.index.cache file of size 224MB seems rather large.  My
largest list mail folder is 150MB, contains 17,647 msgs, and has a
dovecot.index.cache file of only 16MB.  I use mbox.  This would seem to
suggest you may have a single folder with tens of thousands of msgs in
it.  Or maybe indexes are handled differently for your mailbox
format--I've not used any others.  If the former, I suggest you cull
your large folders to get them down to manageable size.

-- 
Stan



Re: [Dovecot] Per user special-use folder names

2013-02-05 Thread Patrick Ben Koetter
* Radek Novotný rad...@seznam.cz:
 Hi all,
 
 let me ask a question, please. Is it possible in dovecot to set up
 per user special-use folder names?
 
 Imagine situation with two users where first prefere another
 language that second.

You don't need per-user folder SPECIAL-USE names, because the client must take
care of the correct mapping.

If the client runs in a German environment it might present the special_use =
\Sent mailbox as "Gesendete Objekte", and if it is Czech it might call it
"Odeslaná pošta".

That's part of what makes SPECIAL-USE so sexy. It is language independent.
All it does is say "This mailbox is reserved for that particular usage." What
you call it is up to you (the client).
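As an illustration (mailbox names hypothetical), a client discovers the special-use flags via a LIST exchange along these lines and then picks its own display names:

```
a LIST "" "*" RETURN (SPECIAL-USE)
* LIST (\HasNoChildren \Sent) "/" "Sent"
* LIST (\HasNoChildren \Trash) "/" "Trash"
a OK List completed.
```

A German client can then render the \Sent mailbox as "Gesendete Objekte" regardless of the mailbox's actual name on the server.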

p@rick



 
 mailbox Sent {
 special_use = \Sent
 }
 
 for english speaking users and
 
 mailbox "Odeslaná pošta" {
 special_use = \Sent
 }
 
 for czech speaking users.
 
 
 
 Thanks for your answers. Radek

-- 
[*] sys4 AG
 
http://sys4.de, +49 (89) 30 90 46 64
Franziskanerstraße 15, 81669 München
 
Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Axel von der Ohe, Marc Schiffbauer
Aufsichtsratsvorsitzender: Joerg Heidrich
 


Re: [Dovecot] Per user special-use folder names

2013-02-05 Thread Michael M Slusarz

Quoting Patrick Ben Koetter p...@sys4.de:


That's part of what makes SPECIAL-USE so sexy. It is language independent.
All it does is say This mailbox is reserved for that particular usage. How
you call it, is up to you (client).


Well not quite.  The problem comes when you have *multiple* sent  
mailboxes on your server, which is perfectly acceptable and quite  
useful (e.g. an MUA allows multiple identities, and each identity uses  
a separate sent-mail mailbox).  You can't just blindly show the local  
translation of "Sent" for all of the mailboxes, or else you've  
eliminated the user's ability to differentiate between them.
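For instance (mailbox names hypothetical), a server could legitimately return two \Sent mailboxes:

```
a LIST "" "*" RETURN (SPECIAL-USE)
* LIST (\Sent) "/" "Sent"
* LIST (\Sent) "/" "Sent-work"
a OK List completed.
```

A client that displays the localized "Sent" name for both leaves the user unable to tell them apart.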


In practical use, SPECIAL-USE isn't tremendously helpful for  
auto-configuration of an MUA because of these kinds of vagaries.


michael



Re: [Dovecot] Out of memory after upgrading to dovecot 2.0.21

2013-02-05 Thread Arnaud Abélard

On 02/05/2013 09:25 PM, Stan Hoeppner wrote:

On 2/5/2013 12:15 PM, Arnaud Abélard wrote:


I changed the vsz_limit to 512MB and it seems better, but I'm still
surprised my mailbox actually hit the memory limit since I doubt it's
most used one.


According to the wiki, vsz_limit only affects login processes, not IMAP
processes, which is odd given the second error you posted, which seems
to indicate a relationship between vsz_limit and imap.


Actually, service { vsz_limit } replaced mail_process_size (at least 
according to what dovecot said upon restart...), so the service imap { 
vsz_limit=512MB } which I added earlier actually does what you suggested.
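For reference, the v2.x form of that limit is the per-service setting; a minimal config fragment (512 MB chosen only because it is the value used in this thread):

```
service imap {
  vsz_limit = 512 MB
}
```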





dovecot: imap(abelard-a): Error: mmap() failed with file
/vmail/a/b/abelard-a/dovecot.index.cache: Cannot allocate memory

imap(abelard-a): Fatal: pool_system_realloc(131072): Out of memory


This error suggests your mail_process_size variable is set too low.  If
it is currently set to 128MB, increase it to 512MB.  The bulk of this
memory is used for mmap()ing files, and is virtual, not physical.  Thus
you don't actually need 51GB of RAM to support 100 users.

Also, a dovecot.index.cache file of size 224MB seems rather large.  My
largest list mail folder is 150MB, contains 17,647 msgs, and has a
dovecot.index.cache file of only 16MB.  I use mbox.  This would seem to
suggest you may have a single folder with tens of thousands of msgs in
it.  Or maybe indexes are handled differently for your mailbox
format--I've not used any others.  If the former, I suggest you cull
your large folders to get them down to manageable size.


Yep I have a huge INBOX folder right now, I haven't archived my 2011 
mail yet (I keep the previous year in my INBOX).


Thanks,

Arnaud






--
Arnaud Abélard
jabber: arnaud.abel...@univ-nantes.fr / twitter: ArnY
Administrateur Système
DSI Université de Nantes
-



[Dovecot] Dovecot main process killed by KILL signal

2013-02-05 Thread For@ll

Hi,

I have dovecot 2.0.7 installed on Ubuntu 12.10, and often in dmesg I 
see "init: dovecot main process (7104) killed by KILL signal". I must 
restart dovecot, because I can't access the mailboxes.


Server hardware:
2x1TB disks
4GB memory
1xCPU 2,4GHz
HARDWARE RAID 1
Mailbox ~ 1k

dovecot.conf

# 2.1.7: /etc/dovecot/dovecot.conf
# OS: Linux 3.5.0-22-generic x86_64 Ubuntu 12.10 xfs
auth_mechanisms = plain login
auth_verbose = yes
default_client_limit = 1500
info_log_path = /var/log/dovecot.info
log_path = /var/log/dovecot.log
log_timestamp = %Y-%m-%d %H:%M:%S 
mail_location = maildir:/var/mail/virtual/%d/%n/
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave

namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
passdb {
  driver = pam
}
plugin {
  sieve_dir = /var/mail/virtual/%d/%n/sieve
  sieve_global_dir = /var/mail/virtual/sieve
}
protocols = sieve imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = postfix
  }
  unix_listener auth-master {
mode = 0600
user = vmail
  }
  user = root
}
service imap-login {
  inet_listener imap {
port = 0
  }
  process_limit = 512
}
service imap {
  process_limit = 512
  service_count = 0
}
service managesieve-login {
  inet_listener sieve {
port = 33919
  }
}
service pop3-login {
  inet_listener pop3 {
port = 0
  }
}
ssl_cert = /etc/postfix/rssl/ssl.crt
ssl_key = /etc/postfix/rssl/ssl.key
userdb {
  args = uid=1003 gid=1003 home=/var/mail/virtual/%d/%n allow_all_users=yes
  driver = static
}
userdb {
  driver = passwd
}
verbose_proctitle = yes
protocol lda {
  auth_socket_path = /var/run/dovecot/auth-master
  info_log_path = /var/log/dovecot-lda.log
  lda_mailbox_autocreate = yes
  lda_mailbox_autosubscribe = yes
  log_path = /var/log/dovecot-lda-errors.log
  mail_plugins = sieve
}
protocol sieve {
  disable_plaintext_auth = no
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}

Any ideas what's wrong, or where the problem is?



Re: [Dovecot] Dovecot main process killed by KILL signal

2013-02-05 Thread For@ll

On 2013-02-05 22:49, For@ll wrote:

Hi,

I have dovecot 2.0.7 installed on Ubuntu 12.10, and often in dmesg I
see "init: dovecot main process (7104) killed by KILL signal". I must
restart dovecot, because I can't access the mailboxes.

Sorry, I have dovecot 2.1.7.




[Dovecot] Possible sort optimization (?)

2013-02-05 Thread Michael M Slusarz

Maybe this is just noise... but I can reproduce this fairly reliably.

Mailbox with 21,000+ messages

This query:

a UID SORT RETURN (ALL COUNT) (DATE) UTF-8 SUBJECT foo

is always about 10 percent slower than this split query (I've done  
this 4-5 times, and the numbers are similar):


a UID SEARCH RETURN (SAVE) CHARSET UTF-8 SUBJECT foo
b UID SORT RETURN (ALL COUNT) (DATE) UTF-8 UID $

(The particular query I used matched 5 messages out of the 21,000+)


My not-very-scientific benchmarking process:

1.) Stop dovecot process
2.) Delete all dovecot index files for that mailbox
3.) Flush linux paging cache (sync && echo 3 > /proc/sys/vm/drop_caches)
4.) Restart dovecot
5.) Access dovecot via command-line (PREAUTH)
6.) SELECT mailbox
7.) Issue command(s)
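Steps 1–4 above could be scripted roughly as follows. This is a sketch only: the index path is an assumption (point it at your mailbox's real index directory), and with DRY_RUN=1 (the default here) the script only prints what it would run.

```shell
#!/bin/sh
# Dry-run sketch of the benchmark prep above. INDEX_DIR is hypothetical;
# adjust it to the mailbox's actual index location. Unset DRY_RUN to execute.
DRY_RUN=${DRY_RUN-1}
INDEX_DIR="$HOME/mdbox/mailboxes/INBOX"

run() {
  # Echo the command in dry-run mode, otherwise execute it.
  if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi
}

run doveadm stop                               # 1) stop the dovecot process
run rm -f "$INDEX_DIR"/dovecot.index*          # 2) delete the mailbox's index files
run sync                                       # 3) flush dirty pages...
run sh -c 'echo 3 > /proc/sys/vm/drop_caches'  #    ...and drop the page cache
run dovecot                                    # 4) restart dovecot
# Steps 5-7 (PREAUTH login, SELECT, issuing the queries) are interactive.
```

Timing each query run separately (e.g. with `time`) after this reset is what makes the 10% difference reproducible rather than a caching artifact.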


Could be a potential area for performance improvement or could simply  
be lazy benchmarking.


michael



[Dovecot] dsync: Invalid server handshake

2013-02-05 Thread Dusan Zivadinovic
Hi list,

I recently tried to backup mailboxes from an older server machine to
a new one in order to move the service to the new machine.
Both machines are in the same LAN, I used this command:

dsync -R -u username backup ssh -i .ssh/id_rsa username@192.168.1.11 /opt/local/bin/dsync

and I get this error:

dsync-local(dz): Error: Invalid server handshake: dsync-server   2
dsync-remote(dz): Error: Invalid client handshake: dsync-client 1

# local machine: Dovecot 2.0.19apple1 running on Mac OS X 10.8.2 Server
using the standard Apple Dovecot configuration

# remote machine: Dovecot 2.1.12 running on Mac OS X 10.4 using PAM authentication


I can't seem to find any documentation on this error, nor do I find any
dsync entries in the logs of either machine. Does anyone have a hint?


Thank you,

Dusan



Re: [Dovecot] dsync backup questions

2013-02-05 Thread Joe Beaubien
On Mon, Feb 4, 2013 at 9:01 PM, Timo Sirainen t...@iki.fi wrote:

 On Mon, 2013-02-04 at 00:57 +, Ben Morrow wrote:
  I can't give authoritative answers to either of these, but...
 
  At 6PM -0500 on 3/02/13 you (Joe Beaubien) wrote:
  
   I'm currently trying to set up remote backups of my emails but I'm
   running into issues (mdbox format, indexes and storage in the same
   folder hierarchy).
  
   Local backup command: dsync -u my_user backup /backups/my_user
  
   (1) Recently, I noticed that the local backup takes up twice the
   size of the original mail location (8 GB vs. 4 GB). I purged a lot
   of emails from the original location, so the size shrunk, but the
   local backup just keeps on getting bigger. I couldn't find any
   dsync option that would delete the extra emails.
  
   - Question: Why isn't the local backup synced properly to remove
   the extra emails?
 
  Are you running 'doveadm purge' on the backed-up dbox? It looks to me as
  though dsync doesn't do that. I don't know if there's any (simple) way
  to do that without a running Dovecot instance attached to the dbox
  directory: it's not entirely clear to me whether doveadm will run
  locally without contacting a doveadm-server instance running under
  Dovecot, nor how to point 'doveadm purge' at an arbitrary directory.

 Right. doveadm -o mail=mdbox:/backups/my_user purge


This worked (at least it seems to; the source and destination are roughly
the same size).

However, if the original email location has already been purged, does
the backup email location also need to be purged?


  It might be easier to dsync to a Maildir instead. This should preserve
  all the Dovecot-visible metadata, and dsyncing back to the dbox for
  restore should put it all back.

 Better sdbox than maildir.


I'd rather stick to mdbox for my remote backups. I have a single email
account with over 1.5 million emails in it. With one-file-per-message
storage, this would be slow and hellish to run. Unless there is a better way.



   (2) What is the best way to copy this local backup to a remote
   location that does NOT have the possibility to run dsync.
  
   - Question 1: Is rsync safe to use, and will this data work for a
   restore?
  
   - Question 2: Would it be safe to simply rsync the original
   mail_location to the remote server?
 
  AFAICT, if Dovecot is stopped on both sides of the transfer it should be
  safe. If either side has a currently running Dovecot instance (or any
  other Dovecot tools, like a concurrent dsync) using that dbox, it's
  likely rsync will copy an inconsistent snapshot of the data, and the
  result will be corrupted.

 It won't be badly corrupted though. At worst Dovecot will rebuild the
 indexes, which takes a while. And most users probably won't get any
 corruption at all.



I think there was a misunderstanding of the setup: dsync is only running on
the local side; the remote side is a dumb rsync server that I don't fully
control.
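The two-step variant being weighed in this thread (make a consistent local duplicate with dsync, purge it as Timo suggests, then rsync the duplicate out) could be sketched like this. The backup path and remote host are assumptions, and with DRY_RUN=1 (the default) the script only prints the commands:

```shell
#!/bin/sh
# Sketch only: /backups/my_user and backup.example.com are hypothetical.
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

# 1) Make/refresh a consistent local duplicate while Dovecot keeps running.
run dsync -u my_user backup /backups/my_user

# 2) Reclaim storage for messages purged at the source (Timo's command).
run doveadm -o mail=mdbox:/backups/my_user purge

# 3) Ship the duplicate. Nothing writes to it during the copy, so rsync
#    sees a consistent snapshot; --delete drops files removed by the purge.
run rsync -a --delete /backups/my_user/ backup.example.com:/backups/my_user/
```

The point of the intermediate duplicate is that rsync never touches the live mdbox, which sidesteps the inconsistent-snapshot concern raised earlier in the thread.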


Re: [Dovecot] dsync backup questions

2013-02-05 Thread Joe Beaubien
On Tue, Feb 5, 2013 at 10:41 PM, Joe Beaubien joe.beaub...@gmail.com wrote:






What I was trying to ask with my last question is the following:

I'm trying to do remote backups of an mdbox setup. Considering that I
only have dsync running on the local side (not on the remote side), is
it safe to simply rsync the mail_location to the remote server, or
should I first do a dsync (make a local duplicate) and then rsync the
duplicate out to the remote server?

(Wish I had a whiteboard right about now.)

Thanks,

-Joe


Re: [Dovecot] Per user special-use folder names

2013-02-05 Thread Joseph Tam

On Wed, 6 Feb 2013, Michael M Slusarz wrote:


Quoting Patrick Ben Koetter p...@sys4.de:


That's part of what makes SPECIAL-USE so sexy. It is language
independent. All it does is say "This mailbox is reserved for that
particular usage." What you call it is up to you (the client).


Well, not quite.  The problem comes when you have *multiple* sent
mailboxes on your server, which is perfectly acceptable and quite
useful (e.g. an MUA allows multiple identities, and each identity uses
a separate sent-mail mailbox).  You can't just blindly show the local
translation of "Sent" for all of those mailboxes, or else you've
eliminated the user's ability to differentiate between them.


On a related topic, what's the easiest way to alias various common
mailbox names to one physical mailbox?  For example, mapping "Trash",
"Deleted Messages", and "Junk" to the same mailbox?

Would you use SPECIAL-USE, or is there a better way to do this?
Namespace configuration?  The virtual plugin?
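For the prefix-style case, Dovecot's namespace `alias_for` setting maps one namespace prefix onto another. Note it aliases whole prefixes, not individual mailbox names, so folding several differently-named trash folders into one physical mailbox would need something else (e.g. the virtual plugin). A sketch of the commonly documented pattern, assuming a default empty-prefix inbox namespace:

```
# Hidden alias namespace: clients that address mailboxes as "INBOX.foo"
# end up in the same physical mailboxes as the primary (empty-prefix)
# namespace. Sketch only -- adapt separator and prefix to your setup.
namespace inbox {
  prefix =
  separator = .
  inbox = yes
}
namespace alias {
  prefix = INBOX.
  separator = .
  alias_for =
  hidden = yes
  list = no
}
```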

Joseph Tam jtam.h...@gmail.com


[Dovecot] dovecot-2.2: dsync to imapc not working

2013-02-05 Thread Evgeny Basov
Hello.

I have dovecot installation

# dovecot --version
20130205 (03a0af22100d+)

built with imapc backend.

I tried to sync mailboxes from another server after cleaning the mail
directory locally:

# dsync -v -o imapc_user=u...@example.org -o imapc_password=pass \
    -o imapc_host=imap.example.org -o imapc_features=rfc822.size \
    -o mailbox_list_index=no backup -R -f -u u...@example.org imapc:

and get this message

dsync(u...@example.org): Error: Exporting mailbox INBOX failed: Backend
doesn't support GUIDs, sync with header hashes instead

Repeating the command returns this message:

dsync(u...@example.org): Error: Mailbox INBOX sync: mailbox_delete
failed: INBOX can't be deleted.

What's wrong with this build?

Maybe there is another way to do it? For example, create a backup in a local
temporary directory and then synchronize that copy with the working storage.
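The poster's own suggested workaround (pull into a temporary local location first, then sync that into the working storage) might look roughly like the following. This is a sketch under assumptions: user@example.org, the password, and /tmp/mirror are placeholders, the exact dsync syntax for this 2.2 snapshot may differ, and with DRY_RUN=1 (the default) nothing is executed:

```shell
#!/bin/sh
# Hypothetical sketch; user, password, and paths are placeholders.
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

# 1) Pull the remote account into a temporary local mdbox via imapc,
#    keeping the mail_location override away from the working storage.
run dsync -o imapc_user=user@example.org -o imapc_password=pass \
    -o imapc_host=imap.example.org \
    -o mail_location=mdbox:/tmp/mirror \
    backup -R -u user@example.org imapc:

# 2) Sync the temporary copy into the account's working storage
#    (-R reverses direction: from the given location into the account).
run dsync -u user@example.org backup -R mdbox:/tmp/mirror
```

Splitting the transfer this way would also isolate the "Backend doesn't support GUIDs" failure to step 1, making it easier to tell whether the imapc backend or the local storage is at fault.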