Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-06-20 Thread Jesper Dahl Nyerup
On Jun 11  23:37, Jesper Dahl Nyerup wrote:
 We're still chasing the root cause in the kernel or the VServer patch
 set. We'll of course make sure to post our findings here, and I'd very
 much appreciate to hear about other people's progress.

We still haven't found a solution, but here's what we've got thus far:

 - The issue is not VServer specific. We're able to reproduce it on
   recent vanilla kernels.

 - The issue has a strong correlation with the number of processor cores
   in the machine. The behavior is impossible to provoke on a dual core
   workstation, but is very widespread on 16 or 24 core machines.

One of my colleagues has written a snippet of code that reproduces and
exposes the problem, and we've sent this to the Inotify maintainers and
the kernel mailing list, hoping that someone more familiar with the code
will be quicker to figure out what is broken.

If anyone's interested - either in following the issue or the code
snippet that reproduces it - here's the post:
http://thread.gmane.org/gmane.linux.kernel/1315430

As this is clearly a kernel issue, we're going to try to keep the
discussion there, and I'll probably not follow up here, until the issue
has been resolved.

Jesper.


signature.asc
Description: Digital signature


[Dovecot] sieve and namespace

2012-06-20 Thread Николай Клименко

Hi,
I'm trying to set up Sieve so that it files incoming messages into a
Junk folder that is defined via a namespace. Unfortunately the rule
doesn't work and the message ends up in the Inbox. If, in the same rule,
I change the destination to a folder that is not defined via a
namespace, the message is filed into that folder correctly.

Please help.

dovecot 1.2.9

namespace:
  type: private
  prefix: Junk/
  location: maildir:/opt/mail/Junk/INBOX:LAYOUT=fs
  hidden: yes
  list: yes
  subscriptions: yes
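
For comparison, a minimal Sieve rule of the kind described above. The
X-Spam-Flag header test and the folder name are assumptions and must be
adapted; the fileinto target has to match the mailbox name exactly as an
IMAP LIST shows it:

```sieve
require ["fileinto"];

# Hypothetical spam test; file matching mail into the namespaced folder.
if header :contains "X-Spam-Flag" "YES" {
    fileinto "Junk";
    stop;
}
```

With a prefix of Junk/ and LAYOUT=fs, the deliverable mailbox may
actually be Junk/INBOX rather than Junk, which is worth checking when
the rule silently falls back to delivering into INBOX.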


Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-06-20 Thread Urban Loesch

Hi,

yesterday I disabled inotify as mentioned in the previous post, and
that works for me too. Thanks to all for the hint.

On 20.06.2012 08:35, Jesper Dahl Nyerup wrote:

On Jun 11  23:37, Jesper Dahl Nyerup wrote:

We're still chasing the root cause in the kernel or the VServer patch
set. We'll of course make sure to post our findings here, and I'd very
much appreciate to hear about other people's progress.


We still haven't found a solution, but here's what we've got thus far:

  - The issue is not VServer specific. We're able to reproduce it on
recent vanilla kernels.

  - The issue has a strong correlation with the number of processor cores
in the machine. The behavior is impossible to provoke on a dual core
workstation, but is very widespread on 16 or 24 core machines.


For the record:
I see the problem on 2 different machines with different CPUs:
- PE2950 with 2x Intel Xeon X5450 3.00GHz (8 cores); the problem happens
  less often than on the PER610
- PER610 with 2x Intel Xeon X5650 2.67GHz (24 cores)



One of my colleagues has written a snippet of code that reproduces and
exposes the problem, and we've sent this to the Inotify maintainers and
the kernel mailing list, hoping that someone more familiar with the code
will be quicker to figure out what is broken.

If anyone's interested - either in following the issue or the code
snippet that reproduces it - here's the post:
http://thread.gmane.org/gmane.linux.kernel/1315430


As you described on the kernel mailing list, I can confirm: the higher
the number of CPUs, the worse it gets.



As this is clearly a kernel issue, we're going to try to keep the
discussion there, and I'll probably not follow up here, until the issue
has been resolved.

Jesper.


Thanks
Urban


Re: [Dovecot] troncated email

2012-06-20 Thread Charles Marcus

On 2012-06-19 10:28 PM, Claude Gélinas cla...@phyto.qc.ca wrote:

I'm on fc16 with dovecot and Claws Mail version 3.8.0


We are much more interested in the dovecot version (and configuration - 
dovecot -n output is helpful there) than the version of Claws Mail.



All email in my INBOX is truncated as it arrives. I only get the
subject, sender, and date, but no message body.

Could someone guide me to a solution for my problem? I cannot lose my
email.


Since most of our Crystal Balls are broken, you will likely have to be
much more precise in your request for help, providing actual excerpts
from the logs while accessing mail, and you may even have to resort to
enabling debugging...


Start here: http://wiki2.dovecot.org/WhyDoesItNotWork
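
If it helps, these are the usual debug settings for this kind of
problem (the log paths are examples):

```
# dovecot.conf: verbose logging while troubleshooting
mail_debug = yes
auth_verbose = yes
auth_debug = yes
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot-info.log
```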

Otherwise, you may get more help from a Fedora support list.

--

Best regards,

Charles


[Dovecot] Dovecot not liking AD config from wiki??

2012-06-20 Thread Kaya Saman
Hi,

I'm trying to setup Dovecot with MS AD and am using this as my guide:

http://wiki2.dovecot.org/HowTo/ActiveDirectoryNtlm


I can definitely access information on the AD server using wbinfo -g
and wbinfo -u.



Currently my dovecot.conf file looks like this:


# v1.1:
#auth_ntlm_use_winbind = yes
# v1.2+:
auth_use_winbind = yes

auth_winbind_helper_path = /usr/local/bin/ntlm_auth

protocols = imap

# It's nice to have separate log files for Dovecot. You could do this
# by changing syslog configuration also, but this is easier.
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot-info.log

# Disable SSL for now.
ssl = no
disable_plaintext_auth = no

# We're using Maildir format
#mail_location = maildir:~/Maildir
mail_location = mbox:/mail:INBOX=/mail/%u

# If you're using POP3, you'll need this:
#pop3_uidl_format = %g

# Authentication configuration:
auth_verbose = yes
auth_debug = yes
auth_username_format = %n
auth_mechanisms = plain ntlm login
userdb {
  driver = static
  args = uid=501 gid=501 home=/mail/%u
  driver = static
  allow_all_users=yes
}



According to the documentation I should be using:

userdb static {
...
}

which appears to be Dovecot v1.x syntax. The allow_all_users=yes
statement, when added as above, likewise seems to be v1.x syntax, since
Dovecot 2.x won't even start with it in place.


In the meantime, when not using allow_all_users, Dovecot throws these errors:

Jun 20 11:30:40 master: Warning: Killed with signal 15 (by pid=4149
uid=0 code=kill)
Jun 20 11:30:48 auth: Fatal: No passdbs specified in configuration
file. LOGIN mechanism needs one
Jun 20 11:30:48 master: Error: service(auth): command startup failed,
throttling for 2 secs
Jun 20 11:30:59 master: Warning: Killed with signal 15 (by pid=4182
uid=0 code=kill)
Jun 20 11:31:13 auth: Fatal: No passdbs specified in configuration
file. LOGIN mechanism needs one
Jun 20 11:31:13 master: Error: service(auth): command startup failed,
throttling for 2 secs
Jun 20 11:32:38 master: Warning: Killed with signal 15 (by pid=4245
uid=0 code=kill)
Jun 20 11:32:58 imap-login: Warning: Auth connection closed with 1
pending requests (max 0 secs, pid=4265, EOF)
Jun 20 11:32:58 auth: Fatal: master: service(auth): child 4266 killed
with signal 11 (core not dumped - set service auth {
drop_priv_before_exec=yes })

-- this was after adding:

passdb {
  driver = static
}


to the mix.
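
For what it's worth, a sketch of the same userdb in Dovecot 2.x syntax:
the v1 userdb static { ... } block becomes a generic userdb { } block,
and allow_all_users moves into args (uid/gid/home values copied from the
config above):

```
userdb {
  driver = static
  # In v2, static-userdb options are passed inside args:
  args = uid=501 gid=501 home=/mail/%u allow_all_users=yes
}
```

Even with winbind/NTLM handling the actual password check, a passdb is
still required for the LOGIN/PLAIN mechanisms, which is what the "No
passdbs specified" error above is complaining about.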


I'm using Dovecot 2.1.3 on FreeBSD 8.2 RELEASE x64.


Can anyone help me configure Dovecot to authenticate?


Regards,


Kaya


[Dovecot] dovecot 2.1.5 performance

2012-06-20 Thread Angel L. Mateo

Hello,

	I'm migrating from 1.1.16 running on 4 Debian lenny servers virtualized
with XenServer (1 core and 5GB of RAM each) to 2.1.5 running on 4 Ubuntu
12.04 servers with 6 CPU cores and 16GB of RAM virtualized with VMware,
but I'm having lots of performance problems. I don't think the
virtualization platform is the problem, because the new servers show the
same problems running on XenServer as on VMware.


	I have about 7 user accounts, most of them without real activity
(they are students who don't read their email or have their accounts
redirected to another provider). I have about 700-1000 concurrent IMAP
connections.


	I have the storage on NFS (NFSv3; the NFS server is a Celerra), but
indexes are on local filesystems (each server has its own index fs).
Mailboxes are in maildir format.


	The old servers and the current director servers are load balanced with
a Radware AppDirector load balancer (the new backend servers don't need
to be balanced because I'm using a director farm).


	On the old platform I have scenario number 2 described at
http://wiki2.dovecot.org/NFS, but on the new one I have a director proxy
directing all connections from each user to the same server (I don't
specify a server per user; the director selects it according to its hash
algorithm).


Some doubts I have about the recommendations at that URL:

* mmap_disable: both the single- and multi-server configurations have
mmap_disable=yes, but the index file section says you only need it if
your index files are stored on NFS. Mine are stored locally. Do I need
mmap_disable=yes? Which is best?
* dotlock_use_excl: it is set to no in both configurations, but the
comment says it is only needed for NFSv2. Since I have NFSv3, I have set
it to yes.
* mail_nfs_storage: in the single-server setup it is set to no, but in
the multi-server setup to yes. Since I have a director in front of my
backend servers, what is recommended?


	With this configuration, when I have a few connections (about 300-400
IMAP connections) everything works fine, but when I disconnect the old
servers and direct all my users' connections to the new servers I get
lots of errors. Server load climbs to over 300, with very high I/O wait.
With atop I can see that of my 6 cores one is almost 100% waiting for
I/O and the others are almost 100% idle, yet the load on the server is
very, very high.


	With the old servers I have performance problems (access to mail is
slow), but it works. With the new ones it doesn't work at all.


Any idea?

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información   _o)
y las Comunicaciones Aplicadas (ATICA)  / \\
http://www.um.es/atica_(___V
Tfo: 868887590
Fax: 86337



Re: [Dovecot] dovecot 2.1.5 performance

2012-06-20 Thread Angel L. Mateo

On 20/06/12 11:40, Angel L. Mateo wrote:

Hello,

 I'm migrating from 1.1.16 running on 4 Debian lenny servers
virtualized with XenServer (1 core and 5GB of RAM each) to 2.1.5 running
on 4 Ubuntu 12.04 servers with 6 CPU cores and 16GB of RAM virtualized
with VMware, but I'm having lots of performance problems. I don't think
the virtualization platform is the problem, because the new servers show
the same problems running on XenServer as on VMware.

 I have about 7 user accounts, most of them without real
activity (they are students who don't read their email or have their
accounts redirected to another provider). I have about 700-1000
concurrent IMAP connections.

 I have the storage on NFS (NFSv3; the NFS server is a Celerra), but
indexes are on local filesystems (each server has its own index fs).
Mailboxes are in maildir format.

 The old servers and the current director servers are load balanced
with a Radware AppDirector load balancer (the new backend servers don't
need to be balanced because I'm using a director farm).

 On the old platform I have scenario number 2 described at
http://wiki2.dovecot.org/NFS, but on the new one I have a director
proxy directing all connections from each user to the same server (I
don't specify a server per user; the director selects it according to
its hash algorithm).

 Some doubts I have about the recommendations at that URL:

* mmap_disable: both the single- and multi-server configurations have
mmap_disable=yes, but the index file section says you only need it if
your index files are stored on NFS. Mine are stored locally. Do I need
mmap_disable=yes? Which is best?
* dotlock_use_excl: it is set to no in both configurations, but the
comment says it is only needed for NFSv2. Since I have NFSv3, I have
set it to yes.
* mail_nfs_storage: in the single-server setup it is set to no, but in
the multi-server setup to yes. Since I have a director in front of my
backend servers, what is recommended?

 With this configuration, when I have a few connections (about
300-400 IMAP connections) everything works fine, but when I disconnect
the old servers and direct all my users' connections to the new servers
I get lots of errors. Server load climbs to over 300, with very high
I/O wait. With atop I can see that of my 6 cores one is almost 100%
waiting for I/O and the others are almost 100% idle, yet the load on
the server is very, very high.

 With the old servers I have performance problems (access to mail is
slow), but it works. With the new ones it doesn't work at all.

 Any idea?


I forgot to attach my doveconf.

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información   _o)
y las Comunicaciones Aplicadas (ATICA)  / \\
http://www.um.es/atica_(___V
Tfo: 868887590
Fax: 86337


# 2.1.5: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-24-generic x86_64 Ubuntu 12.04 LTS 
auth_cache_size = 20 M
auth_cache_ttl = 1 days
auth_debug = yes
auth_master_user_separator = *
auth_verbose = yes
default_process_limit = 1000
disable_plaintext_auth = no
log_timestamp = %Y-%m-%d %H:%M:%S
login_trusted_networks = 155.54.211.176/28
mail_debug = yes
mail_fsync = always
mail_location = maildir:~/Maildir:INDEX=/var/indexes/%n
mail_nfs_storage = yes
mail_privileged_group = mail
maildir_stat_dirs = yes
mdbox_rotate_size = 20 M
passdb {
  args = /etc/dovecot/master-users
  driver = passwd-file
  master = yes
  pass = yes
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
passdb {
  args = session=yes dovecot
  driver = pam
}
plugin {
  lazy_expunge = .EXPUNGED/ .DELETED/ .DELETED/.EXPUNGED/
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +imapflags
  sieve_max_redirects = 15
  zlib_save = gz
  zlib_save_level = 6
}
postmaster_address = postmas...@um.es
service anvil {
  client_limit = 2003
}
service auth {
  client_limit = 3000
  unix_listener auth-userdb {
mode = 0666
  }
}
service doveadm {
  inet_listener {
port = 24245
  }
}
service imap {
  process_limit = 5120
  process_min_avail = 6
  vsz_limit = 512 M
}
service lmtp {
  inet_listener lmtp {
port = 24
  }
  process_min_avail = 10
  vsz_limit = 512 M
}
service pop3 {
  process_min_avail = 6
}
ssl = no
ssl_cert = /etc/ssl/certs/dovecot.pem
ssl_key = /etc/ssl/private/dovecot.pem
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
protocol lda {
  mail_plugins =  sieve
}
protocol lmtp {
  mail_plugins =  sieve
}
local 155.54.211.160/27 {
  doveadm_password = ]dWhu5kB
}


Re: [Dovecot] dovecot 2.1.5 performance

2012-06-20 Thread Joseba Torre

On 20/06/12 11:46, Angel L. Mateo wrote:

On 20/06/12 11:40, Angel L. Mateo wrote:

Hello,

I'm migrating from 1.1.16 running on 4 Debian lenny servers
virtualized with XenServer (1 core and 5GB of RAM each) to 2.1.5 running
on 4 Ubuntu 12.04 servers with 6 CPU cores and 16GB of RAM virtualized
with VMware, but I'm having lots of performance problems. I don't think
the virtualization platform is the problem, because the new servers show
the same problems running on XenServer as on VMware.

I have about 7 user accounts, most of them without real
activity (they are students who don't read their email or have their
accounts redirected to another provider). I have about 700-1000
concurrent IMAP connections.

I have the storage on NFS (NFSv3; the NFS server is a Celerra), but
indexes are on local filesystems (each server has its own index fs).
Mailboxes are in maildir format.

The old servers and the current director servers are load balanced with
a Radware AppDirector load balancer (the new backend servers don't need
to be balanced because I'm using a director farm).

On the old platform I have scenario number 2 described at
http://wiki2.dovecot.org/NFS, but on the new one I have a director
proxy directing all connections from each user to the same server (I
don't specify a server per user; the director selects it according to
its hash algorithm).

Some doubts I have about the recommendations at that URL:

* mmap_disable: both the single- and multi-server configurations have
mmap_disable=yes, but the index file section says you only need it if
your index files are stored on NFS. Mine are stored locally. Do I need
mmap_disable=yes? Which is best?
* dotlock_use_excl: it is set to no in both configurations, but the
comment says it is only needed for NFSv2. Since I have NFSv3, I have
set it to yes.
* mail_nfs_storage: in the single-server setup it is set to no, but in
the multi-server setup to yes. Since I have a director in front of my
backend servers, what is recommended?



As I see it, the director ensures that only one server is accessing any
given file, so you don't need any special configuration (i.e.
mmap_disable=no and mail_nfs_storage=no).


Re: [Dovecot] dovecot 2.1.5 performance

2012-06-20 Thread Timo Sirainen
On Wed, 2012-06-20 at 11:40 +0200, Angel L. Mateo wrote:

 * mmap_disable: both single and multi server configurations have 
 mmap_disable=yes but in index file section says that you need it if you 
 have your index files stored in nfs. I have it stored locally. Do I need 
 mmap_disable=yes? What it's the best?

mmap_disable is used only for index files, so with local indexes use
"no". (If indexes were on NFS, "no" would probably still work, but I'm
not sure if the performance would be better or worse. Errors would also
trigger SIGBUS crashes.)

 * dotlock_use_excl: it is set to no in both configurations, but the 
 comment says that it is needed only in nfsv2. Since I have nfs3, I have 
 it set it to yes.

"yes" is ok.

 * mail_nfs_storage: In single server is set to no, but in multi server 
 it set to yes. Since I have a director in front of my backend server, 
 what is the recommended?

With director you can set this to "no".
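
Putting the three answers together, a sketch of the relevant settings
for this setup (local indexes, maildirs on NFSv3, director in front);
mail_nfs_index is included as an assumption for completeness, since with
local indexes it should not matter:

```
# Indexes are on local disk:
mmap_disable = no
# The NFSv2-only concern does not apply to NFSv3:
dotlock_use_excl = yes
# Safe because the director pins each user to one backend:
mail_nfs_storage = no
mail_nfs_index = no
```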

   With this configuration, when I have a few connections (about 300-400 
 imap connections) everything is working fine, but when I disconnect the 
 old servers and direct all my users' connections to the new servers I 
 have lot of errors. 

Real errors that show up in Dovecot logs? What kind of errors?

 server loads increments to over 300 points, with a 
 very high io wait. With atop, I could see that of my 6 cores, I have one 
 with almost 100% waiting for i/o and the other with almost 100% idle, 
 but load of the server is very, very high.

Does the server's disk IO usage actually go a lot higher, or is it
simply waiting without doing much of anything? I wonder if this is
related to the inotify problems:
http://dovecot.org/list/dovecot/2012-June/066474.html

Another thought: Since indexes are stored locally, is it possible that
the extra load comes simply from building the indexes on the new
servers, while they already exist on the old ones?

 mail_fsync = always

v1.1 did the equivalent of mail_fsync=optimized. You could see if that
makes a difference.

 maildir_stat_dirs = yes

Do you actually need this? It causes unnecessary disk IO and probably
not needed in your case.

 default_process_limit = 1000

Since you haven't enabled high-performance mode for imap-login processes
and haven't otherwise changed the service imap-login settings, this
means that you can have max. 1000 simultaneous IMAP SSL/TLS connections.
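
For reference, a hedged sketch of the high-performance imap-login mode
mentioned above (the numbers are illustrative, not tuned values):

```
service imap-login {
  # service_count = 0 keeps login processes alive and lets each one
  # handle many connections (high-performance mode):
  service_count = 0
  process_min_avail = 4
  vsz_limit = 1 G
}
```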



Re: [Dovecot] dovecot 2.1.5 performance

2012-06-20 Thread Angel L. Mateo

On 20/06/12 12:05, Timo Sirainen wrote:

On Wed, 2012-06-20 at 11:40 +0200, Angel L. Mateo wrote:


* mmap_disable: both single and multi server configurations have
mmap_disable=yes but in index file section says that you need it if you
have your index files stored in nfs. I have it stored locally. Do I need
mmap_disable=yes? What it's the best?


mmap_disable is used only for index files, so with local indexes use
no. (If indexes were on NFS, no would probably still work but I'm
not sure if the performance would be better or worse. Errors would also
trigger SIGBUS crashes.)


* dotlock_use_excl: it is set to no in both configurations, but the
comment says that it is needed only in nfsv2. Since I have nfs3, I have
it set it to yes.


yes is ok.


* mail_nfs_storage: In single server is set to no, but in multi server
it set to yes. Since I have a director in front of my backend server,
what is the recommended?


With director you can set this to no.


Ok, I'm going to change it.


With this configuration, when I have a few connections (about 300-400
imap connections) everything is working fine, but when I disconnect the
old servers and direct all my users' connections to the new servers I
have lot of errors.


Real errors that show up in Dovecot logs? What kind of errors?


Lots of errors like:

Jun 20 12:42:37 myotis31 dovecot: imap(vlo): Warning: Maildir 
/home/otros/44/016744/Maildir/.INBOX.PRUEBAS: Synchronization took 278 
seconds (0 new msgs, 0 flag change attempts, 0 expunge attempts)
Jun 20 12:42:38 myotis31 dovecot: imap(vlo): Warning: Transaction log 
file /var/indexes/vlo/.INBOX.PRUEBAS/dovecot.index.log was locked for 
279 seconds



and on the relay server, lots of timeout errors delivering to LMTP:

Jun 20 12:38:29 xenon14 postfix/lmtp[12004]: D48D55D4F7: to=d...@um.es, 
relay=pop.um.es[155.54.212.106]:24, delay=150, delays=0.09/0/0/150, 
dsn=4.4.0, status=deferred (host pop.um.es[155.54.212.106] said: 451 
4.4.0 Remote server not answering (timeout while waiting for reply to 
DATA reply) (in reply to end of DATA command))



server loads increments to over 300 points, with a
very high io wait. With atop, I could see that of my 6 cores, I have one
with almost 100% waiting for i/o and the other with almost 100% idle,
but load of the server is very, very high.


Does the server's disk IO usage actually go a lot higher, or is it
simply waiting without doing much of anything? I wonder if this is
related to the inotify problems:
http://dovecot.org/list/dovecot/2012-June/066474.html

	We have now rolled back to the old servers, so I don't know. Next time
we try, I'll check this.



Another thought: Since indexes are stored locally, is it possible that
the extra load comes simply from building the indexes on the new
servers, while they already exist on the old ones?


I don't think so, because:

* On the old servers we have no director-like mechanism. One IP is
always directed to the same server (within a session timeout; today it
could be one server and tomorrow a different one), but mail is delivered
randomly through any of the servers.
* Since last week (when we started the migration) all mail has been
delivered into the mailboxes by the new servers, passing through the
director. So the new servers' indexes should be up to date.



mail_fsync = always


v1.1 did the equivalent of mail_fsync=optimized. You could see if that
makes a difference.


I'll try this.


maildir_stat_dirs = yes


Do you actually need this? It causes unnecessary disk IO and probably
not needed in your case.

	My fault. I understood the explanation completely backwards: I thought
yes did what no actually does. I have fixed it.



default_process_limit = 1000


Since you haven't enabled high-performance mode for imap-login processes
and haven't otherwise changed the service imap-login settings, this
means that you can have max. 1000 simultaneous IMAP SSL/TLS connections.


I know it. I have to tune it.

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información   _o)
y las Comunicaciones Aplicadas (ATICA)  / \\
http://www.um.es/atica_(___V
Tfo: 868887590
Fax: 86337




Re: [Dovecot] dovecot 2.1.5 performance

2012-06-20 Thread Wojciech Puchar



I know it. I have to tune it.

--
He did not only change Dovecot but also the OS. I would bet it is an OS
problem: as he stated, 100% of a single core is used while 6 are
available, which is definitely not Dovecot-dependent.

I would recommend installing exactly the same version of the old Dovecot
on the new OS and testing that.


Re: [Dovecot] Trouble with Trash

2012-06-20 Thread Oscar del Rio

On 06/19/12 08:32 PM, Daniel Parthey wrote:

Dominic Pratt dove...@bestewogibt.de wrote:


As already said... I don't think it's TB:
http://www.imagebanana.com/view/ht4sofoj/thunderbird.jpg

Since you do not seem to have enabled the Trash plugin, Dovecot will not
delete anything by itself.


The only other way I can think of that Dovecot could delete messages 
would be if there is a doveadm expunge cron job running on the server.


Re: [Dovecot] Dovecot Maildir - How to Seperate mail folders

2012-06-20 Thread Guido Weiler
 Date: Mon, 18 Jun 2012 16:53:39 +0300
 From: Timo Sirainen t...@iki.fi
 To: Dovecot Mailing List dovecot@dovecot.org
 Subject: Re: [Dovecot] Dovecot Maildir - How to Seperate mail folders
 Message-ID: a828dce4-9b1d-48ac-b1a1-c77c4d639...@iki.fi
 Content-Type: text/plain; charset=us-ascii

 On 18.6.2012, at 12.17, Guido Weiler wrote:

  01 OK Logged in.
  02 list "" *
  * LIST (\HasNoChildren) "/" "INBOX"
  * LIST (\Noselect \HasChildren) "/" "greetings"
  * LIST (\HasNoChildren) "/" "greetings/INBOX"
  02 OK List completed.
  03 select greetings/INBOX
  03 NO Mailbox doesn't exist: INBOX
  04 select greetings
  04 NO Mailbox doesn't exist: greetings
  
  ---
  
  What is this \Noselect mailbox, and why does the LIST output say
  greetings/INBOX in the third row when in fact there isn't a mailbox
  with this name?
  
  I am very sorry to bother you again, but I don't know what we are
  doing wrong here.
  (Dovecot version is 1.1.16)

 Fixed in newer versions, upgrade.

--

Thank you. Can you tell me whether this bug affects only the LIST
command, or is it generally impossible to SELECT such mailboxes in this
version?

Best Regards,

Guido Weiler


Re: [Dovecot] Dovecot Maildir - How to Seperate mail folders

2012-06-20 Thread Charles Marcus

Guido, when Timo says it's time to upgrade, upgrade.

On 2012-06-20 10:06 AM, Guido Weiler weiler.gu...@bergersysteme.com wrote:

Date: Mon, 18 Jun 2012 16:53:39 +0300
From: Timo Sirainent...@iki.fi
To: Dovecot Mailing Listdovecot@dovecot.org
Subject: Re: [Dovecot] Dovecot Maildir - How to Seperate mail folders
Message-ID:a828dce4-9b1d-48ac-b1a1-c77c4d639...@iki.fi
Content-Type: text/plain; charset=us-ascii

On 18.6.2012, at 12.17, Guido Weiler wrote:


01 OK Logged in.

02 list "" *

* LIST (\HasNoChildren) "/" "INBOX"
* LIST (\Noselect \HasChildren) "/" "greetings"
* LIST (\HasNoChildren) "/" "greetings/INBOX"
02 OK List completed.

03 select greetings/INBOX

03 NO Mailbox doesn't exist: INBOX

04 select greetings

04 NO Mailbox doesn't exist: greetings

---

What is this \Noselect mailbox, and why does the LIST output say
greetings/INBOX in the third row when in fact there isn't a mailbox with
this name?

I am very sorry for having to bother you again, but I don't know what we are 
doing wrong here.
(Dovecot version is 1.1.16)


Fixed in newer versions, upgrade.


--

Thank you. Can you tell me whether this bug affects only the LIST
command, or is it generally impossible to SELECT such mailboxes in this
version?

Best Regards,

Guido Weiler



--

Best regards,

Charles Marcus
I.T. Director
Media Brokers International, Inc.
678.514.6200 x224 | 678.514.6299 fax


[Dovecot] GlusterFS + Dovecot

2012-06-20 Thread Romer Ventura
Hello,

Has anyone used GlusterFS as the storage file system for Dovecot or any
other email system?

It can be presented as NFS, CIFS, or mounted natively with the GlusterFS
client. Technically, using the native client would allow the machine to
read and write to it, so I think Dovecot would not care. Correct?

Has anyone out there used this setup?

Thanks.



Re: [Dovecot] GlusterFS + Dovecot

2012-06-20 Thread Timo Sirainen
On 20.6.2012, at 18.50, Romer Ventura wrote:

 Has anyone used GlusterFS as storage file system for dovecot or any other
 email system..?

I've heard that Dovecot complains about index corruption once in a while
with GlusterFS, even when not in multi-master mode. I wouldn't use it
without some heavy stress testing first (with the imaptest tool).
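
A hedged example of such a run; imaptest takes key=value arguments, and
the host, credentials, and numbers below are placeholders to adapt:

```
imaptest host=10.0.0.1 port=143 user=testuser pass=testpass \
    clients=50 secs=300
```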



[Dovecot] Dovecot list IMAP archives with thunderbird?

2012-06-20 Thread Alex Crow

Hi,

I'm trying to access the IMAP archives with Thunderbird but can't seem 
to get it to work. I have tried an unencrypted connection, SSL and TLS 
but with no success. Any ideas?


Thanks

Alex

--
This message is intended only for the addressee and may contain
confidential information.  Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.

Transact is operated by Integrated Financial Arrangements plc
Domain House, 5-7 Singer Street, London  EC2A 4BQ
Tel: (020) 7608 4900 Fax: (020) 7608 5300
(Registered office: as above; Registered in England and Wales under number: 
3727592)
Authorised and regulated by the Financial Services Authority (entered on the 
FSA Register; number: 190856)



[Dovecot] Problem with Dovecot 2.0/2.1 and MySQL 5.1

2012-06-20 Thread Mark Schmale
Hi everyone, 

for some time now I have had problems with Dovecot and MySQL.
I had the problem with version 2.0.x and upgraded to 2.1.7 to check if
it was gone, but it's not :(
The logs just tell me this: 
dovecot: auth: Error: auth worker: Aborted request: Worker process died
unexpectedly

If I switch to an SQLite setup, everything works fine. 

Here is some information; I hope someone can tell me what's wrong with
my system/setup. I really don't think this is a bug, because someone
else would have hit it before me. 

doveconf -n
  # 2.1.7: /etc/dovecot/dovecot.conf
  # OS: Linux 3.2.2-hardened-r1 x86_64 Gentoo Base System release 2.1 
  auth_verbose = yes
  mail_location = maildir:~/%d/mail/%n
  managesieve_notify_capability = mailto
  managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
  namespace inbox {
    inbox = yes
    location = 
    mailbox Drafts {
      special_use = \Drafts
    }
    mailbox Junk {
      special_use = \Junk
    }
    mailbox Sent {
      special_use = \Sent
    }
    mailbox "Sent Messages" {
      special_use = \Sent
    }
    mailbox Trash {
      special_use = \Trash
    }
    prefix = 
  }
  passdb {
args = /etc/dovecot/dovecot-sql.conf
driver = sql
  }
  plugin {
sieve = ~/.dovecot.sieve
sieve_dir = ~/sieve
  }
  protocols = imap pop3 lmtp sieve
  service auth {
unix_listener /var/spool/postfix/private/auth {
  group = postfix
  mode = 0660
  user = postfix
}
unix_listener auth-userdb {
  group = vmail
  user = vmail
}
  }
  ssl_cert = /etc/ssl/dovecot/server.pem
  ssl_key = /etc/ssl/dovecot/server.key
  userdb {
args = /etc/dovecot/dovecot-sql.conf
driver = sql
  }
  protocol lda {
mail_plugins = sieve
  }

contents of dovecot-sql.conf: 
default_pass_scheme = PLAIN-MD5
driver = mysql
connect = host=localhost dbname=config user=user password=pass

password_query = SELECT CONCAT( u.username,  '@', d.name ) AS user,
password FROM mail_user AS u LEFT JOIN mail_domains AS d ON u.domain =
d.id WHERE u.username = '%n' AND d.name = '%d'

user_query = SELECT home, uid, gid FROM mail_user AS u LEFT JOIN
mail_domains AS d ON u.domain = d.id WHERE u.username = '%n' AND d.name
= '%d'

// linebreaks added by mailclient for readability 

bt full with gdb: 
  [Thread debx7fb891e85977 216be49381eb03d180103cdf6eb90483
      fields_count = 2
      name = 0x0
  #1  sql_query_callback (result=0x7fb891e82f60,
      sql_request=0x7fb891e82c08) at passdb-sql.c:87
      auth_request = 0x7fb891e82a80
      _module = <optimized out>
      module = <optimized out>
      passdb_result = PASSDB_RESULT_INTERNAL_FAILURE
      password = 0x0
      scheme = <optimized out>
      ret = <optimized out>
      __FUNCTION__ = "sql_query_callback"
  #2  0x7fb891c3c940 in driver_sqlpool_query_callback
      (result=0x7fb891e82f60, request=0x7fb891e82e50) at
      driver-sqlpool.c:635
      db = 0x7fb891e66540
      conn = 0x0
      conndb = 0x7fb891e66910
  #3  0x7fb891c3dbe0 in driver_mysql_query (db=<optimized out>,
      query=<optimized out>, callback=0x7fb891c3c8c0
      <driver_sqlpool_query_callback>, context=0x7fb891e82e50) at
      driver-mysql.c:296
      result = 0x7fb891e82f60
  #4  0x7fb891c3cc41 in driver_sqlpool_query (_db=0x7fb891e66540,
      query=0x7fb891e561c8 "SELECT CONCAT( u.username, '@', d.name ) AS
      user, password FROM mail_user AS u LEFT JOIN mail_domains AS d ON
      u.domain = d.id WHERE u.username = 'masch' AND d.name =
      'masch.it'", callback=0x7fb891c31960 <sql_query_callback>,
      context=0x7fb891e82c08) at driver-sqlpool.c:657
      db = 0x7fb891e66540
      request = 0x7fb891e82e50
      conn = 0x7fb891e667c0
  #5  0x7fb891c23b49 in auth_worker_handle_passv (args=0x7fb891e560b8,
      id=1, client=<optimized out>) at auth-worker-client.c:200
      auth_request = 0x7fb891e82a80
      passdb = <optimized out>
      password = 0x7fb891e55ff2 "somepassword"
      passdb_id = 1
  #6  auth_worker_handle_line (line=<optimized out>,
      client=<optimized out>) at auth-worker-client.c:559
      args = 0x7fb891e560a8
      id = 1
      ret = false
  #7  auth_worker_input (client=0x7fb891e80650) at
      auth-worker-client.c:647
      _data_stack_cur_id = 3
      line = <optimized out>
      ret = true
  #8  0x7fb89179f4b6 in io_loop_call_io (io=0x7fb891e80970) at
      ioloop.c:379
      ioloop = 0x7fb891e5e390
      t_id = 2
  #9  0x7fb8917a043f in io_loop_handler_run (ioloop=<optimized out>)
      at ioloop-epoll.c:213
      ctx = 0x7fb891e69100
      events = <optimized out>
      event = 0x7fb891e69170
      list = 0x7fb891e809c0
      io = <optimized out>
      tv = {tv_sec = 59, tv_usec = 999508}
      msecs = <optimized out>
      ret = 1
      i = <optimized out>
      j = <optimized out>
      call = <optimized out>
  #10 0x7fb89179ed50 in io_loop_run (ioloop=0x7fb891e5e390) at
      ioloop.c:398
      No locals.
  #11 0x7fb891786a87 

Re: [Dovecot] troncated email

2012-06-20 Thread Claude Gélinas
On Wed, 20 Jun 2012 05:36:56 -0400,
Charles Marcus cmar...@media-brokers.com wrote:

> On 2012-06-19 10:28 PM, Claude Gélinas cla...@phyto.qc.ca wrote:
>> I'm on fc16 with dovecot and Claws Mail version 3.8.0
>
> We are much more interested in the dovecot version (and configuration
> - dovecot -n output is helpful there) than the version of Claws Mail.
>
>> All email in INBOX is truncated as it arrives. I only get the
>> subject, sender and date, but no message body.
>>
>> Could someone guide me to a solution for my problem? I cannot
>> lose my email.
>
> Since most of our Crystal Balls are broken, you will likely have to
> be much more precise in your request for help, by providing actual
> excerpts from logs while accessing mail, and you may even have to
> resort to enabling debugging...
>
> Start here: http://wiki2.dovecot.org/WhyDoesItNotWork
>
> Otherwise, you may get more help from a Fedora support list.
>

Here is the dovecot -n output:

# 2.0.20: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.7-1.fc16.x86_64 x86_64 Fedora release 16 (Verne) 
disable_plaintext_auth = no
mail_location = maildir:~/mail/INBOX:LAYOUT=fs
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave
mbox_write_locks = fcntl
passdb {
  driver = pam
}
plugin {
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
}
service imap-login {
  inet_listener imap {
address = localhost
  }
}
service pop3-login {
  inet_listener pop3 {
address = localhost
  }
}
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
userdb {
  driver = passwd
}

Each email I receive in the INBOX folder is missing the body of the
message. I can see who sent it and the subject, but the rest is empty.

In other folders the email comes in just fine.

Claude 


[Dovecot] how to use new style namespace for INBOX

2012-06-20 Thread ml

Dear honorable doctor Timo,

Reading the list, I saw a new style appear for writing the INBOX
namespace, namely this example:

mailbox Drafts {
  special_use = \Drafts
}
mailbox Junk {
  special_use = \Junk
}
mailbox Sent {
  special_use = \Sent
}
mailbox "Sent Messages" {
  special_use = \Sent
}
mailbox Trash {
  special_use = \Trash
}
prefix =


I do not know how to use it. Can you help me? Here is my current config:

~]# /usr/sbin/dovecot -n
# 2.1.7: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.34.6--grs-ipv6-32 i686 CentOS release 5.8 (Final)
auth_mechanisms = plain login
base_dir = /var/run/dovecot/
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = [::]
log_path = /var/log/maillog
log_timestamp = %Y-%m-%d %H:%M:%S
login_log_format_elements = user=%u method=%m rip=%r lip=%l %c
mail_debug = yes
mail_location = maildir:~/Maildir
mail_max_userip_connections = 30
mail_plugins =  quota  trash zlib
mailbox_list_index = yes
maildir_broken_filename_sizes = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = comparator-i;octet
comparator-i;ascii-casemap fileinto reject envelope encoded-character
vacation subaddress comparator-i;ascii-numeric relational regex
imap4flags copy include variables body enotify environment mailbox date
spamtest spamtestplus virustest
namespace {
  inbox = yes
  location =
  prefix =
  separator = .
}
passdb {
  driver = pam
}
plugin {
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Sent
  autocreate4 = Drafts
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  deleted_to_trash_folder = Trash
  plugin = $mail_plugins  autocreate managesieve  sieve quota
  quota = maildir:User quota
  quota_exceeded_message = Quota exceeded, please go to
http://www.fakessh.eu/over_quota_help.html for instructions on how to
fix this.
  quota_rule = *:storage=10GB
  quota_rule2 = Trash:storage=+10%
  quota_rule3 = Spam:storage=+20%
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /var/sieve-scripts/roundcube.sieve
  sieve_dir = ~/sieve
  sieve_global_path = whatever
  trash = /etc/dovecot/dovecot-trash.conf.ext
  zlib_save = bz2
  zlib_save_level = 9
}
protocols = sieve imap pop3
service anvil {
  client_limit = 6000
}
service auth {
  client_limit = 6000
  process_limit = 1
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0600
user = postfix
  }
  unix_listener auth-master {
mode = 0666
  }
  unix_listener auth-userdb {
mode = 0666
  }
  vsz_limit = 64 M
}
service imap-login {
  client_limit = 0
  inet_listener imap {
port = 0
  }
  inet_listener imaps {
address = * , [::]
port = 993
  }
  process_limit = 1024
  service_count = 1
  vsz_limit = 64 M
}
service imap {
  process_limit = 1024
  process_min_avail = 0
  service_count = 1
  vsz_limit = 64 M
}
service managesieve-login {
  inet_listener managesieve-login {
address = * , [::]
port = 2000
  }
  process_limit = 1
  vsz_limit = 64 M
}
service pop3-login {
  inet_listener pop3 {
port = 0
  }
  inet_listener pop3s {
address = * , [::]
port = 995
  }
  process_limit = 1
  vsz_limit = 64 M
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
user = mail
  }
  user = dovecot
}
ssl_ca = </etc/pki/tls/certs/class3.crt
ssl_cert = </etc/pki/tls/certs/ks3.kimsufi.com.cert
ssl_key = </etc/pki/tls/private/ks3.kimsufi.com.key
ssl_verify_client_cert = yes
userdb {
  driver = passwd
}
userdb {
  driver = passwd
}
version_ignore = yes
protocol imap {
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
  imap_max_line_length = 64 k
  mail_plugins =  quota  trash zlib   autocreate quota imap_quota
imap_zlib zlib
}
protocol pop3 {
  mail_plugins = autocreate quota quota autocreate deleted_to_trash zlib
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s
}
protocol lda {
  hostname = ks3.kimsufi.com
  info_log_path =
  log_path =
  mail_plugins = autocreate  sieve  quota
  postmaster_address = postmas...@fakessh.eu
  sendmail_path = /usr/lib/sendmail
}
protocol sieve {
  managesieve_implementation_string = dovecot
  managesieve_logout_format = bytes ( in=%i : out=%o )
  managesieve_max_line_length = 65536
}
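Applied to the config above, the new style would mean moving from the
autocreate/autosubscribe plugin settings to mailbox blocks inside the
existing inbox namespace. A minimal sketch only, assuming the
15-mailboxes.conf layout shipped with Dovecot 2.1 and keeping the "."
separator from the config above; the folder names mirror the autocreate
list, and the `auto = subscribe` setting (available since v2.1) replaces
the autocreate plugin for these folders:

```
namespace inbox {
  inbox = yes
  location =
  prefix =
  separator = .
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Junk {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
}
```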
- -- 
  http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xC2626742
  gpg --keyserver pgp.mit.edu --recv-key C2626742

  http://urlshort.eu fakessh @
  http://gplus.to/sshfake
  http://gplus.to/sshswilting
  http://gplus.to/john.swilting
  https://lists.fakessh.eu/mailman/
  This list is moderated by me, but all applications will be accepted
  provided they receive a note of presentation

Re: [Dovecot] migrating sql virtual 1 to 2, namespace configuration error: inbox=yes namespace missing

2012-06-20 Thread Voytek Eymont
Timo,
 thanks

Timo Sirainen t...@iki.fi wrote:

> Easiest fix: remove 15-mailboxes.conf


This didn't seem to fix it, though perhaps I failed to test it properly.

> Alternative fix: modify this namespace to actually work. Probably
> adding inbox=yes inside it is enough to do that.

With some trepidation, I inserted the string where I thought it should go, and, 
bingo, it started working as expected.

I should probably have removed the full path from the SQL query and put it
in the conf file as the docs suggest, but I might leave that for another day.
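For anyone hitting the same "inbox=yes namespace missing" error, the fix
amounts to making sure exactly one namespace carries inbox = yes. A minimal
sketch only, since the actual namespace block isn't quoted here and its
real location/prefix/separator settings will differ:

```
namespace {
  inbox = yes
  prefix =
}
```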

Thank you again, 
Voytek
-- 
Swyped on Motrix with K-9 Mail. 
Please excuse my brevity.