[Dovecot] disable_plaintext_auth = no has no effect on IMAP/POP3 logins

2012-06-14 Thread Mikkel
 =
mode = 0600
user =
  }
  group =
  idle_kill = 4294967295 secs
  privileged_group =
  process_limit = 1
  process_min_avail = 0
  protocol =
  service_count = 0
  type =
  unix_listener stats {
group =
mode = 0600
user =
  }
  user = $default_internal_user
  vsz_limit = 18446744073709551615 B
}
shutdown_clients = yes
ssl = required
ssl_ca =
ssl_cert = /etc/pki/dovecot/certs/dovecot.pem
ssl_cert_username_field = commonName
ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL
ssl_client_cert =
ssl_client_key =
ssl_crypto_device =
ssl_key = /etc/pki/dovecot/private/dovecot.pem
ssl_key_password =
ssl_parameters_regenerate = 1 weeks
ssl_protocols = !SSLv2
ssl_verify_client_cert = no
stats_command_min_time = 1 mins
stats_domain_min_time = 12 hours
stats_ip_min_time = 12 hours
stats_memory_limit = 16 M
stats_session_min_time = 15 mins
stats_user_min_time = 1 hours
submission_host =
syslog_facility = mail
userdb {
  args =
  default_fields =
  driver = prefetch
  override_fields =
}
userdb {
  args = /local/config/dovecot-sql.conf
  default_fields =
  driver = sql
  override_fields =
}
valid_chroot_dirs =
verbose_proctitle = no
verbose_ssl = no
version_ignore = no
protocol lda {
  mail_plugins = quota quota sieve trash
}
protocol imap {
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep tb-lsub-flags
  imap_logout_format = bytes=%i/%o
  mail_plugins = quota quota imap_quota trash
}
protocol pop3 {
  mail_plugins = quota quota
  pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s
  pop3_uidl_format = %08Xu%08Xv
}


Regards, Mikkel


Re: [Dovecot] disable_plaintext_auth = no has no effect on IMAP/POP3 logins

2012-06-14 Thread Mikkel

I just found the solution by coincidence.

It appears there is a configuration file named:
 /etc/dovecot/conf.d/10-ssl.conf

In that file the following line was active: ssl = required
That setting apparently overrides whatever disable_plaintext_auth says.

After commenting out the ssl=required entry everything works as expected :-)
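For reference, the change amounts to something like this (the exact file
contents will vary; the values shown here are only illustrative):

# /etc/dovecot/conf.d/10-ssl.conf
#ssl = required   # commented out: "required" forces TLS before any login
ssl = yes         # still offer TLS, just don't require it

Running doveconf ssl and doveconf disable_plaintext_auth afterwards should
show the effective values.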

Regards, Mikkel

Den 14/06/12 10.14, Mikkel skrev:

Hello

In my installation the disable_plaintext_auth setting does not appear to take
effect.
I can see that the value is correct using doveconf -a, but it doesn't
change anything.

Whenever attempting to log in using IMAP I get this:
* BAD [ALERT] Plaintext authentication not allowed without SSL/TLS, but
your client did it anyway. If anyone was listening, the password was
exposed.
ls NO [PRIVACYREQUIRED] Plaintext authentication disallowed on
non-secure (SSL/TLS) connections.

POP3 login attempts give this error:
-ERR Plaintext authentication disallowed on non-secure (SSL/TLS)
connections

Besides adding disable_plaintext_auth=no to dovecot.conf I also tried
adding it specifically to the imap section.
I also tried to invoke it just for certain networks, like this:

remote 0.0.0.0 {
   disable_plaintext_auth = no
}

But none of this has any effect either. Adding the testing network as a
trusted network does work and removes the error.
But I would rather not add the whole internet to the trusted networks
section just to allow plain-text logins in IMAP.
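
As an aside, a quick way to see which file actually ends up defining these
settings (paths assume the stock CentOS layout and are only illustrative):

grep -rn -e disable_plaintext_auth -e '^ssl' /etc/dovecot/dovecot.conf /etc/dovecot/conf.d/
doveconf -n | grep -i -e ssl -e plaintext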

I'm in the process of migrating from 1.1 to 2.1, so this configuration is
for testing things out and is mainly based on the default configuration
files coming with the CentOS installation.
I should add that everything else in this setup is working fine.


I did many searches for information on this topic but nothing I could
find applies to my case.

I'm sorry to post such a long conf but I'm not sure what parts I could
have safely omitted.
Here goes:


# doveconf -a
# 2.1.1: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-220.17.1.el6.x86_64 x86_64 CentOS release 6.2 (Final)
auth_anonymous_username = anonymous
auth_cache_negative_ttl = 2 mins
auth_cache_size = 0
auth_cache_ttl = 2 mins
auth_debug = no
auth_debug_passwords = no
auth_default_realm = plain
auth_failure_delay = 2 secs
auth_first_valid_uid = 500
auth_gssapi_hostname =
auth_krb5_keytab =
auth_last_valid_uid = 0
auth_master_user_separator =
auth_mechanisms = plain
auth_realms = plain login  digest-md5 cram-md5 apop ntlm
auth_socket_path = auth-userdb
auth_ssl_require_client_cert = no
auth_ssl_username_from_cert = no
auth_use_winbind = no
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@
auth_username_format = %Lu
auth_username_translation =
auth_verbose = no
auth_verbose_passwords = no
auth_winbind_helper_path = /usr/bin/ntlm_auth
auth_worker_max_count = 30
base_dir = /var/run/dovecot
config_cache_size = 1 M
debug_log_path =
default_client_limit = 1000
default_idle_kill = 1 mins
default_internal_user = dovecot
default_login_user = dovenull
default_process_limit = 100
default_vsz_limit = 256 M
deliver_log_format = msgid=%m: %$
dict_db_config =
director_doveadm_port = 0
director_mail_servers =
director_servers =
director_user_expire = 15 mins
disable_plaintext_auth = no
dotlock_use_excl = no
doveadm_allowed_commands =
doveadm_password =
doveadm_proxy_port = 0
doveadm_socket_path = doveadm-server
doveadm_worker_count = 0
dsync_alt_char = _
first_valid_gid = 1
first_valid_uid = 105
hostname = usrmta01.talkactive.net
imap_capability =
imap_client_workarounds =
imap_id_log =
imap_id_send =
imap_idle_notify_interval = 2 mins
imap_logout_format = in=%i out=%o
imap_max_line_length = 64 k
imapc_host =
imapc_master_user =
imapc_password =
imapc_port = 143
imapc_rawlog_dir =
imapc_ssl = no
imapc_ssl_ca_dir =
imapc_ssl_verify = yes
imapc_user = %u
import_environment = TZ
info_log_path = /var/log/dovecot/dovecot.run
instance_name = dovecot
last_valid_gid = 0
last_valid_uid = 0
lda_mailbox_autocreate = no
lda_mailbox_autosubscribe = no
lda_original_recipient_header =
libexec_dir = /usr/libexec/dovecot
listen = *, ::
lmtp_proxy = no
lmtp_save_to_detail_mailbox = no
lock_method = fcntl
log_path = /var/log/dovecot/dovecot.err
log_timestamp = %b %d %H:%M:%S 
login_access_sockets =
login_greeting = Dovecot ready.
login_log_format = %$: %s
login_log_format_elements = user=%u method=%m rip=%r lip=%l mpid=%e %c
login_trusted_networks =
mail_access_groups =
mail_attachment_dir =
mail_attachment_fs = sis posix
mail_attachment_hash = %{sha1}
mail_attachment_min_size = 128 k
mail_cache_fields = flags
mail_cache_min_mail_count = 0
mail_chroot =
mail_debug = no
mail_fsync = always
mail_full_filesystem_access = no
mail_gid =
mail_home =
mail_location =
mail_log_prefix = %s(%u): 
mail_max_keyword_length = 50
mail_max_lock_timeout = 0
mail_max_userip_connections = 10
mail_never_cache_fields = imap.envelope
mail_nfs_index = yes
mail_nfs_storage = yes
mail_plugin_dir = /usr/lib64/dovecot
mail_plugins = quota

Re: [Dovecot] Questions regarding dbox migration

2009-10-15 Thread Mikkel

Timo Sirainen skrev:

On Wed, 2009-10-14 at 23:59 +0200, Mikkel wrote:
In case of mdbox wouldn't you have the very same problem since larger 
files may be fragmented all over the disk just like many small files in 
a directory might?


I guess this depends on filesystem. But the files would typically be
about 2 MB of size. I think filesystems usually copy more data around to
avoid fragmentation.

In any case if there are expunged messages, files containing them would
be recreated (nightly or something). That'll unfragment the files.



It would be nice if this recreation interval (nightly, weekly, monthly?)
were made tunable.
Some users would have mailboxes of several hundred megabytes, and having
to recreate thousands of these every night because a single mail gets
expunged each day could result in a huge performance hit.



And finally one thing I've also been thinking about has been that
perhaps new mails could be created into separate individual files. A
nightly run would then gather them together into a larger file.



I think this could be a great idea in some setups and a pretty bad one 
in others.
In my setup for instance incoming emails account for about half the disk 
write activity and though activity is somewhat lower during the night 
there still is a lot of activity around the clock.


In the proposed design all the incoming emails would have to be written 
to the disk twice (if I got it right?) and at a time when there would 
still be a relatively high activity (because there always is).


So if this should increase performance overall there would have to be a 
somewhat large initial gain in order to justify the double writing.
Also most emails are either read shortly after arriving or not at all so 
the primary access to the mails would happen while the emails are still 
located in single files.


My point is that there could be many reasons why such a design might 
actually lead to poorer performance so this should probably be tested 
extensively before being implemented.


But this could be a really nice solution if implemented in such a way
that the incoming mails are stored in single files on a separate configurable
device (which would optimally be a flash device) and then moved to the
actual storage at night.



So I can definitely see the point in mdbox but I better stay away from 
it, using NFS... :/


What kind of index related errors have you seen in logs? Dovecot can
handle most index corruptions without losing much (if any) data.
Everything related to dovecot.index.cache can at least be ignored.



The errors I get are like these two:

dovecot: Oct 01 15:13:05 Error: POP3(acco...@domain): Transaction log 
file /local/account_homedir/Maildir/dovecot.index.log: marked corrupted


dovecot: Oct 01 15:13:57 Error: IMAP(another_acco...@domain): Corrupted 
transaction log file /local/another_homedir/Maildir/dovecot.index.log 
seq 229: duplicate transaction log sequence (229) (sync_offset=32860)




Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-15 Thread Mikkel



Timo Sirainen skrev:
 On Thu, 2009-10-15 at 10:55 +0200, Mikkel wrote:
 Some users would have mailboxes a several hundred megabytes and having
 to recreate thousands of these every night because of a single mail
 getting expunged a day could result in a huge performance hit.

 It doesn't matter what the user's full mailbox size is. It matters how
 many of these max. ~2 MB dbox files have expunged messages. Typically
 users would expunge only new mails, so typically there would be only a
 single file that needs to be recreated.


That way it seems pretty clever. But if the number of messages in each
mdbox file (we are talking about multi-dbox and not single-dbox, right?)
is configurable via mdbox_rotate_size (ranging from 1 to infinity I
guess), then how can you assume that each file is no more than ~2 MB?


  dovecot: Oct 01 15:13:57 Error: IMAP(another_acco...@domain): 
Corrupted

 transaction log file /local/another_homedir/Maildir/dovecot.index.log
 seq 229: duplicate transaction log sequence (229) (sync_offset=32860)

 Hmm. This means that Dovecot thought that dovecot.index.log file was
 rotated, but when it reopened it it actually had the same sequence
 number in the header. What NFS server are you using? One possible reason
 for this is if file's inode number suddenly changes.

I'm using a NAS appliance (Sun S7410) that is basically just a modified 
Solaris 10 NFS server on x86 hardware (AMD).
The system has had some stability issues and therefore it's difficult to 
tell whether that specific error happened during normal operation or if 
it's just a special case.


The client also runs Solaris 10 but this is the Sparc version.

Solaris uses NFSv4 by default (and I haven't changed this).

Anyway, do you think that dovecot would be able to recover from that kind
of error without loss of information if dbox/mdbox were used instead of
Maildir?



Regards, Mikkel


[Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel
It has been my wish to move to dbox for a long time hoping to reduce the 
number of writes which is really killing us.


Now I wonder what may be the best way of doing so. I'm considering some 
sort of intermediate migration where the existing Maildir users are 
changed to single-dbox and then later upgraded to 2.0 and changed to 
multi-dbox when it becomes stable.
But is this a reasonable task to perform on a production system or at 
all? The alternative is to wait for 2.0 to become completely stable and 
then go all the way at once.
I would however much prefer to take it one step at a time and I think it 
would be safer to start out with single-dbox as it appears less complex.


Now the big question is whether multi-dbox and single-dbox are 
compatible formats.
If a Maildir-to-dbox migration is made on a system running dovecot v. 1.1,
would it then be trivial to change to multi-dbox later, after upgrading to
2.0, or would a completely new migration be needed?
Would this scenario be much different if the system is upgraded to 
version 1.2 before the change to single-dbox?


Kind regards, Mikkel



Re: [Dovecot] quota error dovecot 1.1.13

2009-10-14 Thread Mikkel

Brightblade skrev:

Hi all,

I'm having trouble with dovecot and quota. Sometimes mails to users are bounced
because the quota is full. I'm reading quota_rules from LDAP. A du -h . from the
maildir shows the user is under quota (at roughly 70%, well below 99%). If I delete
maildirsize, dovecot recreates it with the LDAP quota value on the first line and
the current usage on the second line. The current usage is near the quota maximum,
so dovecot executes quota_warning (99%).
Why does dovecot think the quota is at 99% when du -h shows its value is near
70%? Any guess?


You should read the file named maildirsize in the specific user's homedir.
It'll tell you whether this is caused by a wrong interpretation of your
ldap values or whether dovecot didn't count the usage right.



The first line in the maildirsize file shows what values it got from the
database and the subsequent lines count the actual usage.
Comparing the values on the first line to your ldap database is the
first thing you should do.



Getting what dovecot considers the actual usage from the maildirsize
file requires that you know how it works. Once in a while dovecot saves
the current usage to the second line of the maildirsize file. Then
afterwards, every time there is a change (a mail arrives or is deleted),
one line recording that change is appended to the file.
When a certain limit is reached all the lines are summed together and
the second line of the file is updated (while all subsequent lines are
deleted).


This means that just reading the second line of the file will give you a 
pretty good indication of whether dovecot has counted the usage correctly.
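
As an illustration, a maildirsize file typically looks something like this
(the numbers are made up):

10485760S,1000C
5242880 118
 120340 3
 -45000 -1

The first line is the quota definition (bytes with an S suffix, message
count with a C suffix); each following line is a byte delta and a
message-count delta that dovecot periodically collapses back into a single
total line.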


If dovecot has made an error, just delete the maildirsize file, which will
cause dovecot to recount the usage.



Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen skrev:

On Oct 14, 2009, at 7:03 AM, Mikkel wrote:

It has been my wish to move to dbox for a long time hoping to reduce 
the number of writes which is really killing us.


BTW. Have you tried maildir_very_dirty_syncs=yes setting? That should 
reduce disk i/o, and I'm really interested in hearing how much.




I don't think I've tried that one. Earlier on I experimented with 
fsync_disable=yes (which made a huge difference by the way) but that was 
before I started using mail_nfs_storage=yes and mail_nfs_index=yes


I would like to try using maildir_very_dirty_syncs=yes but is it 
advisable in combination with NFS?




Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen skrev:

On Oct 14, 2009, at 7:03 AM, Mikkel wrote:

Now the big question is whether multi-dbox and single-dbox are 
compatible formats.


Kind of, but not practically.

If a Maildir-dbox migration is made on a system running dovecot v. 
1.1, would it then be trivial later changing to multi-dbox after 
upgrading to 2.0 or is a completely new migration then needed?
Would this scenario be much different if the system is upgraded to 
version 1.2 before the change to single-dbox?


Migrating from single-dbox to multi-dbox isn't any easier than maildir
to multi-dbox.



From your comments it appears like dbox and mdbox are quite different 
in many ways. Is mdbox going to replace dbox completely or are you 
expecting to keep both formats?
My point is: what's going to be the difference between dbox and mdbox
with mdbox_rotate_size set small enough to allow only one mail per file?




I'm trying to get v2.0.0 out pretty quickly though. v2.0.beta1 should 
hopefully be out in less than a month. The main problem with it is 
actually how to make it enough backwards compatible that everyone won't 
start hating me.


That is great news. I'm looking forward very much to making the shift to
dbox/mdbox.
A new mail storage format is really, really, really interesting from my
point of view. CPU and RAM are both very cheap by now, but disk I/O
remains as expensive as ever; it really is the only limiting factor
these days and is what's driving up the costs of running any larger mail
system.
The performance gains you mentioned earlier this year would probably
cut the total datacenter costs in half, and that is quite an accomplishment.


How stable do you think dbox and mdbox are at the moment?


Regards, Mikkel




Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen skrev:
  The main difference is that mdbox needs to lock files when saving

messages. That's not especially nice with NFS. Single-dbox currently
locks index files, but it can be made entirely lockless eventually.
There are also some other differences like:

 - in mdbox all messages exist in storage/ directory, while in dbox
messages exist in separate mailbox directories
 - mdbox has a separate storage/dovecot.map.index that needs to be
updated
 - in mdbox message can be copied by doing a couple of small appends to
index files, while with dbox the file needs to be hard linked (and
currently even that's not done)



So basically you prefer mdbox but are maintaining dbox because of its 
almost lockless design which is better for NFS users?


Do you consider it viable to have two different dbox formats, or are
you planning to keep only one of them in the long term?



Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen skrev:

On Wed, 2009-10-14 at 21:14 +0200, Mikkel wrote:
It has been my wish to move to dbox for a long time hoping to reduce 
the number of writes which is really killing us.
BTW. Have you tried maildir_very_dirty_syncs=yes setting? That should 
reduce disk i/o, and I'm really interested in hearing how much.


I don't think I've tried that one. Earlier on I experimented with 
fsync_disable=yes (which made a huge difference by the way) but that was 
before I started using mail_nfs_storage=yes and mail_nfs_index=yes


I would like to try using maildir_very_dirty_syncs=yes but is it 
advisable in combination with NFS?


It should be fine with NFS if indexes are also on NFS. Although I just
fixed a bug related to it:
http://hg.dovecot.org/dovecot-1.2/rev/7956cc1086e1



The system is currently running dovecot version 1.1.19. Would you 
consider it safe to try it on that version as well?



Indexes on NFS are problematic now though if multiple servers can access
the mailbox at the same time. mail_nfs_index=yes is supposed to help
with that, but it's not perfect either. Long term solution would be for
Dovecots in different machines to talk to each others directly instead
of through NFS.


Is it worse now than previously?

I have been running a production setup with two servers accessing the
same Maildir data from NFS without any problems for quite a while now.
Load is spread randomly between the two servers so I can only assume
that by coincidence they sometimes try to access the same mailbox.
This has functioned quite well with many versions of the 1.1.x dovecot
releases, so unless some new issues have been introduced I don't think I
should fear anything in that regard :-)



Regards, Mikkel



Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen wrote:


And you've actually been looking at Dovecot's error log? Good if it
doesn't break, most people seem to complain about random errors.


Well, it does complain once in a while but it has never resulted in data 
being lost in any way. But I guess your point is that this might happen 
with dbox under the same circumstances.

A very good reason to wait for 2.0 I guess...

Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen wrote:

On Wed, 2009-10-14 at 23:41 +0200, Mikkel wrote:

Timo Sirainen wrote:

And you've actually been looking at Dovecot's error log? Good if it
doesn't break, most people seem to complain about random errors.
Well, it does complain once in a while but it has never resulted in data 
being lost in any way. But I guess your point is that this might happen 
with dbox under the same circumstances.

A very good reason to wait for 2.0 I guess...


Well, the NFS caching issues aren't going away in v2.0 yet. v2.1 or so
perhaps..



But it should be able to heal itself using the backup files in version 
2.0, right? How often are they created anyway?


Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen skrev:

On Wed, 2009-10-14 at 23:04 +0200, Mikkel wrote:
So basically you prefer mdbox but are maintaining dbox because of its 
almost lockless design which is better for NFS users?


Do you consider it to be viable having two different dbox formats or are 
you planning to keep only one of them in a long term perspective?


I'm planning on keeping both of them. And it's not necessarily only
because of NFS users. Multi-dbox was done mainly because filesystems
suck (mailbox gets fragmented all around the disk). Maybe if filesystems
in future suck less, single-dbox will be better. Or perhaps SSDs make
the fragmentation problem mostly irrelevant.



You are talking about directories being fragmented right?
In case of mdbox wouldn't you have the very same problem since larger 
files may be fragmented all over the disk just like many small files in 
a directory might?



And note that there are no real world statistics on how much faster
multi-dbox is compared to single-dbox (or maildir). Maybe the difference
isn't all that big after all. Or maybe it's a lot bigger than I thought.
I've no idea.


I think the impact on imap operations and mail delivery would probably
be very small due to the bigger files.


But pop3 users just download everything once in a while and should
benefit tremendously from only having to read one file sequentially as
opposed to reading many small files.




So I can definitely see the point in mdbox but I better stay away from 
it, using NFS... :/


Regards, Mikkel


Re: [Dovecot] Questions regarding dbox migration

2009-10-14 Thread Mikkel

Timo Sirainen wrote:

On Wed, 2009-10-14 at 23:52 +0200, Mikkel wrote:
But it should be able to heal itself using the backup files in version 
2.0, right? 


That's the theory anyway. :)


How often are they created anyway?


Whenever dovecot.index file would normally get recreated, the old one is
first link()ed to dovecot.index.backup. So that depends on various
things. You could compare timestamp differences between dovecot.index
and dovecot.index.log in your current system. Perhaps I should add some
extra code that guarantees that .backup gets created at least once .. a
day? week? .. (assuming there have been changes)


I would like it to happen as often as possible unless you expect a lot 
of I/O to happen on this account.
In that case perhaps it could be made a configurable option so the 
system administrator could decide for himself which risk/performance 
profile would fit in the specific situation?


Regards, Mikkel


Re: [Dovecot] dbox redesign

2009-02-12 Thread Mikkel

Hi Timo

I have a few comments. Please just disregard them if I have 
misunderstood your design.


Regarding your storage plan
I find it very important that users can be stored in different locations 
because:
1. Discount users could be placed on cheap storage while others are 
offered premium service on expensive hardware
2. It's easy to scale if you just add another LUN from your SAN or mount 
from NAS
3. In order to avoid huge directories you can put users into subdirs,
with each subdir containing only, say, 1000 users each
All this is very easy to achieve in 1.1 because you can return 
individual storage dirs for indexes and data from the user db.
I'm not sure from reading your post whether this will still be possible 
but I believe it’s a very important thing.
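
As a rough sketch of what this looks like with an SQL userdb in 1.1 (the
table and column names are purely illustrative):

# dovecot-sql.conf
user_query = SELECT home, uid, gid, CONCAT('maildir:', storage_path, '/Maildir') AS mail FROM users WHERE username = '%u'

Returning a per-user mail field this way is what lets each account live on
whatever mount or subdirectory the database points it to.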



Regarding 7.
I am very much for all the self-healing you describe.
There is nothing worse than huge complex systems that fail just because
of some minor error that could easily be fixed without manual intervention.

But also I'm a little worried in this regard.

Maildir is so robust that nothing can really go wrong. But here you have 
index files and data files located in different places.
Imagine the index file being on one NFS mount whilst the data resides on 
another.
Or if the administrator is purposely loading a different index file or 
data file from a backup.


The worst-case scenario is that the self-healing mistakes a manual operation
for a failure and breaks something.
It should be very resilient to temporarily losing access to all files in
this operation (which could happen very often on NFS mounts).


Also I imagine the self-healing going into loops if it doesn't
understand what's going on.
If the data changes due to manual intervention, or only part of the file
system can be accessed, you could imagine the self-healing process trying
again and again to fix something that isn't its job to fix.

In that case it would be better if it just skipped the apparent failures.


Timo wrote:
I'm also wondering if it's better for each mailbox to have its separate
dovecot.index.cache file or if there should be one cache file for the
map index.
I think you should consider more files as the general choice (not only 
regarding cache files).
Imagine many dovecot servers accessing the same storage simultaneously. 
I figure it would be a lot easier if they weren’t all trying to 
read/update one essential file at the same time (with only one file, 
load can’t be spread across multiple mounts and everything goes down if 
the mount with the essential file is inaccessible).
If there is serious data corruption and you have only one file then all 
operations are paused while the self healing is trying to figure out 
what went wrong (and what happens if different servers decide to do 
self-healing on this one file at the same time?).
With one file per maildir only a small portion of the users are 
affected, the load is spread and really bad file corruption doesn’t 
break everything for thousands of users.


Other than that I’m just really glad that dbox is progressing. I 
consider it the feature.
Dbox is the email administrator’s wet dream. I’m already dreaming of 
completely avoiding the scalability issues of large Maildirs (which is 
the biggest challenge today in my opinion) and reducing the IO. Buying 
more IO is an order of magnitude more expensive than getting more RAM or 
CPU power (and dovecot barely needs any RAM and CPU anyway).


Best wishes, Mikkel



Re: [Dovecot] Backing Up

2008-10-31 Thread mikkel
 Dave McGuire wrote:
 On Oct 29, 2008, at 3:42 PM, Scott Silva wrote:
 What is the best way to do a (server-side) backup of all mail in a
 user's mail?

 I usually just rsync the /home directories to another server. The
 inital sync
 can take a while, but it gets faster after there is a base to work
 from.

   ...and it's much less painful if you're using maildir instead of mbox!

-Dave

 I have to wonder.  I have a mailserver that I do a bootable complete
 image copy of with all files and O/S in two hours to an Ultrium-2 tape,
 95 GB.  When I switch to maildir, I will go from some 25,000 mbox files
 to 2.5 to 3 million files...I can't believe that isn't going to hurt and
 will force me into incrementals.


My thoughts on rsync.

You may want to consider that incremental backups won’t help you much if
you use Maildir. Incremental or full rsync still has to generate a list of
all the files.

Whether it’ll work for you is impossible to say. I guess you’ll just have
to make a test.
But you're right that the large amount of files will be an issue.

Rsync seems to be loading information about each file into memory before
comparing the lists of files and doing the actual transfer.
That may be a lot of memory if you have a lot of files.

I sometimes overcome this by rsync’ing each user or domain one at a time.
That way you will also limit issues of files no longer existing once the
transfer begins (makes rsync generate errors).
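
Something along these lines, one rsync per user (the paths and host are only
examples):

for dir in /home/*/Maildir; do
  user=$(basename "$(dirname "$dir")")
  rsync -a --delete "$dir/" backuphost:/backup/"$user"/Maildir/
done

That keeps each file list small, and a failure only affects one user's
transfer.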

You can estimate the time needed to list all the files.
Try using iostat to get a rough idea of how many IOPS your system
handles under maximum stress load and how many it handles under normal
operation.
The difference is the amount available to you during the backup.
Divide the total number of files by the number of available IOPS.

Say you have 100 IOPS available; then it will take roughly 8 hours
(3,000,000/100/3600 = 8.3 hours) to generate the list of 3,000,000 files.
The transfer afterwards will probably be a lot faster.
I'm not sure whether reading information about one file takes up exactly one
I/O operation, but that way of calculating the time to generate the lists
wasn't much off last time I tried.
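
If you want the raw numbers for that estimate, something like this works
(commands are only examples; adjust the path to your mail store):

find /home -type f | wc -l   # total number of files rsync will have to list
iostat -x 5                  # compare r/s + w/s under peak load vs. normal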


One option that I would prefer, if I were to back up the entire store with
one command, would be generating a snapshot of the file system and then
rsync'ing or cp'ing that snapshot. That way you'll always get a
consistent backup and you won't have to worry about how long the backup
takes to finish.

Regards, Mikkel




Re: [Dovecot] Backing Up

2008-10-31 Thread mikkel
 [EMAIL PROTECTED] wrote:
 One option that I would prefer if I were to backup the entire store with
 one command would be generating a snapshot of the file system.
 And then rsync or cp that snapshot. That way you’ll always get a
 consistent backup and you won’t have to worry about how long the backup
 takes to finish.

 Snapshots seem like an excellent idea to avoid missing files that are
 moving between /cur and /new. However, it should be pointed out that
 this is extra I/O for the server (with LVM at least) whilst the backup is
 running

I only have experience with UFS (FreeBSD) and ZFS (Solaris).
Snapshots on UFS are a horrible thing for large file systems.

Snapshots on ZFS are marvellous (which is what I use). They do not result in any
extra I/O whatsoever due to some clever designing.
If you have the option of using ZFS it's definitely the best way to do it.
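
A minimal sketch of that approach on ZFS (the dataset names and paths are
only examples):

snap=backup-$(date +%Y%m%d)
zfs snapshot tank/mail@$snap
rsync -a /tank/mail/.zfs/snapshot/$snap/ backuphost:/backup/mail/
zfs destroy tank/mail@$snap

Since rsync reads from the frozen snapshot, the live maildirs can keep
changing underneath without affecting the consistency of the copy.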

Regards, Mikkel



Re: [Dovecot] Backing Up

2008-10-30 Thread mikkel
Timo Sirainen:
 One possibility is to just wait for dbox with multiple-messages-per-file
 feature. I can't really say when it'll be ready (or when I'll even start
 implementing it), but I know I want to use it myself and some companies
 have also recently been asking about it.



Have you considered making dbox a major priority for v. 1.2?

I have been holding back on v.1.2 because I don’t really see the big
improvements in it that I saw in v.1.0 and v.1.1.
With 1.0 and 1.1 I hurried to use them in production environments even
while they were still in beta (of course only after proper testing)
because they offered so many advantages (primarily speed and stability) over
other solutions.

Since I’m focused almost entirely on stability and speed, and very little
on fancy functionality, what v.1.0 offers in terms of functionality is
just fine. What drove me towards 1.1 were speed improvements (and
stability on NFS).
I remember you made a post about not many people testing v.1.2.
I think the reason may be that most users feel the same as me. They'd like
to see a major feature that benefits their primary needs, which isn't in
terms of functionality but more in terms of speed improvements.
Dbox could be that feature as I think there isn’t much room for further
developing the Maildir format (and as far as I can see you have gone as
far as possible with regards to optimizing speed while working within the
boundaries of the Maildir standard).

Maildir is nice compared to mbox but it really isn't optimal. In days
where IOPS are the most difficult resource to get into your server (and
dovecot already uses close to nothing in terms of CPU time and memory),
having one file per e-mail is far from optimal, especially when a
large number of users just download the whole mailbox using POP3 (not to
mention backing up Maildirs).

Now don't take this as criticism, I love your software.
I just would really like to see dbox evolve and think it would be a major
driving force for v.1.2 :)

Develop dbox, Do it. Do it naoughw! (preferably pronounced with a
schwarzeneggerish accent like in the last three seconds of this splendid
video http://www.youtube.com/watch?v=adc3MSS5Ydc).

Best regards, Mikkel




Re: [Dovecot] Backing Up

2008-10-30 Thread mikkel
 On Oct 30, 2008, at 2:35 PM, [EMAIL PROTECTED] wrote:
 Maildir is nice compared to mbox but it really isn’t optimal. In days
 where IOPS is the most difficult resource to get into your server (and
 dovecot already using close to nothing in terms of CPU time and
 memory)
 having one file per e-mail is less than sub-optimal especially when a
 large amount of users just downloads the whole mailbox using POP3
 (not to
 mention backing up Maildirs).

It seems to me that a database like Postgres or MySQL would be the
 best bet.


That's a matter of opinion. Moving mail storage to a database would
probably be the last thing I would ever do (I'm not saying it's not the
right thing for some people. I'm just not one of them).
I'm using mysql for storing the users database but that’s another story.

Adding a database is one additional level of complexity. One more program
to govern. In my opinion it's nice to know that as long as the disk is
readable nothing can go completely wrong.

The database in my case would be roughly 400 GB holding some 60 million
records.
Just imagine if one single byte got written to the wrong place. Power
outage, OS crash, software bug or whatever could easily result in this (I
regularly experience mysql tables that crash on their own from heavy use).
Having to run a repair on a table of that size whilst all users are eager
to get to their data must be a nightmare of proportions.

Just imagine backing the thing up, exporting 60.000.000 SQL queries.
Not to mention importing them again if something should go really wrong.
Actually I'm not even sure it would be faster. When the index files grow
to several gigabytes they kind of lose their purpose.


Maildir is very resilient to various errors. It is virtually impossible to
corrupt a maildir (at least I've never experienced anything).
Also you can back up the thing without worrying about anything accessing
it at the same time.
Mbox less so but still a lot better than having one huge database.

Dbox would be the ultimate compromise between crash resilience and a low
number of files (not to mention the enormous potential for speed gains).

Regards, Mikkel




Re: [Dovecot] Backing Up

2008-10-30 Thread mikkel
 [EMAIL PROTECTED] wrote:
 Just imagine backing the thing up, exporting 60.000.000 SQL queries.
 Not to say importing them again if something should go really wrong.
 Actually I'n not even sure it would be faster. When the index files grow
 to several gigabytes they kind of loose their purpose.

 There are many businesses backing up way more data than that, and it
 isn't 60,000,000 queries -- it is one command.  But if you use serious
 hardware backing up isn't really needed.  RAID, redundant/hot-swap
 servers, etc. make backing up /extra redundancy/.  :-)


Why make things complicated and expensive when you can make them cheap and
simple?
Anything is possible if you wanna pay for it (in terms of hardware,
administration and licenses).
I have focused primarily on making it as simple as possible.

And while running a 400 GB database with 60.000.000 records isn't
impossible, it would be if it were to run on the same hardware that now
comprises the system.
Roughly 1000 IOPS is plenty to handle all mail operations.

I seriously doubt that it would be enough to even supply one lookup a
second on that huge db (and even less over NFS, as is now being used).
And I assume that hundreds of lookups a second would be required to
handle the load.

So it would require a lot more resources and still give nothing but
trouble (risk of crashed database and backup issues that now aren't
there).

By the way, even though data is stored in a SAN it still needs to be backed up.
500 GB SATA disks take a day to synchronize if one breaks down and we
can't really take that chance (yes, I will eventually move the data to
smaller 15.000 RPM disks but there is no need to pay for them before it's
necessary). Also there is the risk of data being deleted by mistake,
hacker attacks or software malfunctioning.

But we really are moving off-topic here.

Regards, Mikkel



Re: [Dovecot] Quota handling on NFS Maildir

2008-02-22 Thread mikkel
 On 2/21/2008, [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
 At the moment I'm running 1.1beta14

 I'm sure Timo appreciates any/all help he can get, but if you are going
 to go through the trouble of running the development beta versions,
 don't you think it would make sense to always use the latest - and
 especially, don't report problems on anything but the latest?


Please!

Look at what I'm writing - this appears to have nothing to do with a
specific beta.
Also nothing appears to have been committed to HG relating to this issue since beta14.

... going to go through the trouble of running the development beta
versions ...

No trouble in that regard; most of the 1.1 betas have been stable enough
for production and a lot better than 1.0 if performance is also considered
an important parameter.
But installing the latest one without waiting a little to see if anything
comes up seems like a bad idea.


Regards, Mikkel




Re: [Dovecot] Quota handling on NFS Maildir

2008-02-22 Thread mikkel
 On Thu, 2008-02-21 at 11:03 +0100, [EMAIL PROTECTED] wrote:
 I have been doing some testing and it appears that the updating of the
 file maildirsize is sometimes delayed.

 When do you see this delaying?


When I telnet manually to pop3 and imap and type commands to delete
certain messages, while keeping an eye on the maildirsize file.
This isn't always consistent, but it appears that the updates to
maildirsize are grouped together before being committed.

 I figure this is because Dovecot collects theese changes in the
 transactions or index file and then once in a while writes them to
 maildirsize.

 POP3 is run in a single transaction, but the mails are expunged and
 maildirsize is updated only when QUIT is run. So you shouldn't be able
 to notice a situation where maildirsize doesn't match the maildir
 contents.

So what happens if QUIT is never run? If the connection is broken
before ending properly?
Does the IMAP connection also have to be terminated properly before the
updates are written?

This may be the cause of the issue since some users in a production
environment will always break the connection (loss of internet
connectivity, the client program crashing or just generally badly behaving
e-mail clients).


 Both IMAP and POP3 use the same code to access mailboxes and update
 quota, so I can't really think of anything.

 Although it's of course possible to use different settings for
 POP3/IMAP.

 mail_plugins(imap): quota imap_quota trash
 mail_plugins(pop3): quota

 OK, both use quota plugin..


Yes and I know it works. When executing the pop3 remove commands myself I
can see that changes are actually written to maildirsize.

 plugin:
   quota: maildir
   quota_rule2: Trash:storage=10M:messages=100

 I guess quota_rule comes from userdb? Is it the same for both imap/pop3?
 Although that shouldn't matter since the maildirsize should be updated
 in any case..


The queries are exactly alike for POP3 and IMAP.

Thanks for looking into this.

Regards, Mikkel



Re: [Dovecot] Quota handling on NFS Maildir

2008-02-22 Thread mikkel
 On Feb 22, 2008, at 11:26 AM, [EMAIL PROTECTED] wrote:
 Do you mean that you can see a situation with QUIT or EXPUNGE command
 when maildir's contents don't match maildirsize file? Or if you mean
 only setting the deleted flags, that's normal then because nothing
 really gets deleted yet anyway.

The delay wasn't long enough for me to physically test whether the quota was
out of sync meanwhile.
My estimate is 5-10 seconds and it could be the system waiting for I/O or
something. I just figured that maybe this was due to grouping of the
transactions and thought that if so, then maybe it was possible that this
code could fail in some situations.

 This may be the cause of the issue since some users in a production
 environment will always break the connection (loss of internet
 connectivity, the client program crashing or just generally badly
 behaving
 e-mail clients).

 For POP3 maybe, but they'd probably get duplicate messages downloaded
 then too, unless the client is smart with UIDLs.

I can see your point. If your code doesn't allow for situations where
dovecot gets interrupted in between doing the actual delete and updating the
maildirsize, then I have no idea what happens.
I just figured that maybe there was a weak spot somewhere that could break
in certain situations (like NFS locking troubles or something) :)

 With IMAP could it be just that users have marked messages deleted and
 their client hides them, but the messages are never expunged? Or that
 the messages have been moved to Trash mailbox..

When I du the user's storage I can see that there is a lot less data
than maildirsize claims (so this isn't due to hidden mails or mails in
Trash).

Apparently the large majority of users have no problems whatsoever while a
few specific users experience this every two weeks on average (when they
finally reach the upper quota limit and report the error).
The solution for now is to remove the file maildirsize so the quota will
be recounted.

The way this happens it seems to me that it's not just a random NFS
locking error but actually a bug somewhere and that some users manage to
always trigger the bug (which seems to be triggered sometimes when pop3 is
used) while others do not.
But it's pretty difficult to get close to the cause.

Does Dovecot actually check whether updating the maildirsize is successful
or not after calling the operations (e.g. what happens if the code is
unable to read from or write to maildirsize)?


Regards, Mikkel



Re: [Dovecot] sieve problem

2008-02-22 Thread mikkel
 [EMAIL PROTECTED] wrote:

 Also the errors are better the newer the version of dovecot (upgrading
 may
 help in that regard).


 I'd upgraded to version dovecot-1.0.10-0_66.
 Now I get more logs in dovecot-deliver.log, but things still not work:

 deliver([EMAIL PROTECTED]): Feb 22 13:09:30 Info:
 msgid=[EMAIL PROTECTED]: Couldn't open mailbox
 .Junk:
 Invalid mailbox name
 deliver([EMAIL PROTECTED]): Feb 22 13:09:30 Info: sieve
 runtime
 error: Fileinto: Generic Error
 deliver([EMAIL PROTECTED]): Feb 22 13:09:30 Error:
 sieve_execute_bytecode(/var/spool/mail/sieve-scripts/[EMAIL 
 PROTECTED]//.dovecot.sievec)
 failed

 The .Junk folder exists. It is in the root of the virtual users mail
 directory:
 /var/spool/mail/vhosts/neobytesolutions.com.europa/vazi/.Junk

 Here is the sieve script:

 require [fileinto];
 if header :contains subject [test] {
  fileinto .Junk;
 }


Hi

First of all I think that you should post to the dovecot list instead of
mailing me in private.
The way it is now, I'm the only one to see your post and I'm not sure that I can help.
Also other users may benefit from your experience if it's available online.

When you post to [EMAIL PROTECTED] a copy is sent to me anyway.

I'm not sure that I can help. I think you should check your permissions on
the folder.
And I think you should try with: fileinto Junk;
Instead of: fileinto .Junk;
(without the preceding dot that is)


The preceding dot will automatically be added to the folder name by
dovecot so it may create confusion if you use it too.
Dovecot always uses the dot-prefix on folder names in the file system but
this isn't really part of the IMAP/Sieve folder name that you wanna use.
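
For reference, the script would then look something like this (the folder
name is just taken from the original mail; note that Sieve expects the
strings to be quoted, so the unquoted version above may simply be a
copy/paste artifact):

require ["fileinto"];
if header :contains "subject" "test" {
  fileinto "Junk";
}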


Regards, Mikkel



Re: [Dovecot] Deliver core dump in b13 (hg 20080102)

2008-01-16 Thread mikkel
On Sat, January 12, 2008 6:39 pm, [EMAIL PROTECTED] wrote:
 On Sat, January 12, 2008 10:48 am, Timo Sirainen wrote:
 On Thu, 2008-01-10 at 14:35 +0200, Timo Sirainen wrote:

 Ah, it's because it's the current directory. It would be better if
 home dir wouldn't be the same as mail dir
 (http://wiki.dovecot.org/VirtualUsers#homedirs), but I'll see if I
 can do something about this.

 Fixed: http://hg.dovecot.org/dovecot/rev/5dda06c19eb1

 But isn't this fix masking the real error; namely that either my
 directory layout isn't proper or Dovecot is flushing the wrong directory
 in this specific case when using Sieve redirect (the home directory is then
  flushed instead of the maildir directory that it normally flushes).


Hi Timo

What's your take on this?
Should I change the layout (and what layout should I use then) or is this
a Dovecot bug?
My layout is like the first example shown here (Mail directory under home):
http://wiki.dovecot.org/VirtualUsers#homedirs
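
For context, that layout boils down to roughly this (paths are illustrative;
the home directory comes from the userdb):

# userdb returns e.g. home=/nfs/%d/%n
mail_location = maildir:~/Maildir

i.e. the Maildir sits directly under the home directory rather than being
the home directory itself.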


(The new bug is described here:
http://dovecot.org/list/dovecot/2008-January/028062.html)

Do you need any further information?

Regards, Mikkel



Re: [Dovecot] Deliver core dump in b13 (hg 20080102)

2008-01-16 Thread mikkel
/auth
  mode: 432
  user: postfix
  group: postfix
master:
  path: /var/run/dovecot/auth-master
  mode: 384
  user: vmail
plugin:
  quota: maildir
  quota_rule2: Trash:storage=10M:messages=100
  trash: /local/config/dovecot-trash.conf

Regards, Mikkel



Re: [Dovecot] Deliver core dump in b13 (hg 20080102)

2008-01-13 Thread mikkel
On Sat, January 12, 2008 10:48 am, Timo Sirainen wrote:
 On Thu, 2008-01-10 at 14:35 +0200, Timo Sirainen wrote:

 Ah, it's because it's the current directory. It would be better if home
  dir wouldn't be the same as mail dir
 (http://wiki.dovecot.org/VirtualUsers#homedirs), but I'll see if I can
 do something about this.

 Fixed: http://hg.dovecot.org/dovecot/rev/5dda06c19eb1


It fixed the flushing errors, but it still returns a delivery report
saying e-mail undeliverable and deliver creates a core dump (even though
the e-mail is correctly delivered to both accounts).

The e-mail is delivered, but afterwards Deliver fails and returns the
error (listed in the bottom of this e-mail).
Beats me why.


You wrote in a previous mail:
 msgid=20080107212004.782C817DB8 at mta01.euro123.dk: saved mail to INBOX
 deliver(mikkel at euro123.dk): Jan 07 22:20:14 Panic: file
index-mail.c: line
 1042 (index_mail_close): assertion failed: (!mail-data.destroying_stream)
I'm still unable to reproduce this myself. It most likely requires
something specific in dovecot.index.cache file. Is this with maildir?

I used to get this a lot in b9-b11 (don't know about b12 since it wouldn't
compile on Solaris).
But then it disappeared completely in b13 except in this specific case.



Regards, Mikkel


[EMAIL PROTECTED] tail  -f /local/log/deliver.log | grep @euro123.dk
deliver([EMAIL PROTECTED]): Jan 13 10:44:52 Info:
msgid=[EMAIL PROTECTED]: forwarded to [EMAIL PROTECTED]
deliver([EMAIL PROTECTED]): Jan 13 10:44:53 Info:
msgid=[EMAIL PROTECTED]: saved mail to INBOX
deliver([EMAIL PROTECTED]): Jan 13 10:44:53 Panic: file index-mail.c: line
1042 (index_mail_close): assertion failed: (!mail-data.destroying_stream)
deliver([EMAIL PROTECTED]): Jan 13 10:44:53 Error: Raw backtrace: 0x85b28
- 0x4e1fc - 0x4e690 - 0x554cc - 0x22880 - 0x205b8
deliver([EMAIL PROTECTED]): Jan 13 10:44:54 Info:
msgid=[EMAIL PROTECTED]: saved mail to INBOX




Re: [Dovecot] Deliver core dump in b13 (hg 20080102)

2008-01-12 Thread mikkel
On Sat, January 12, 2008 10:48 am, Timo Sirainen wrote:
 On Thu, 2008-01-10 at 14:35 +0200, Timo Sirainen wrote:
 Ah, it's because it's the current directory. It would be better if home
  dir wouldn't be the same as mail dir
 (http://wiki.dovecot.org/VirtualUsers#homedirs), but I'll see if I can
 do something about this.

 Fixed: http://hg.dovecot.org/dovecot/rev/5dda06c19eb1



Thanks, that's great :)

I'll test it on monday to see if that fixed it completely.

But isn't this fix masking the real error; namely that either my
directory layout isn't proper or Dovecot is flushing the wrong directory
in this specific case when using Sieve redirect (the home directory is then
flushed instead of the maildir directory that it normally flushes).

Also I wonder if there is a potential issue if somebody decides to use /
as the home-directory (probably wouldn't happen). Then I guess changing to
the parent directory isn't possible.
I tried to look at your commit but my C-skills (don't have any) aren't
quite enough to decide whether you took this into consideration already:)


Regards, Mikkel


Re: [Dovecot] Deliver core dump in b13 (hg 20080102)

2008-01-10 Thread mikkel

Just tested hg 20080110 with the same result.

- Mikkel



[Dovecot] Deliver core dump in b13 (hg 20080102)

2008-01-07 Thread mikkel
Hi there


I'm redirecting e-mails from [EMAIL PROTECTED] to another account
([EMAIL PROTECTED]) using Sieve's redirect with sieve 1.1.3. The e-mail
is redirected just fine.
But deliver still creates a core dump and returns an "e-mail
undeliverable" message to the sending account (even though delivery is
successful).


My .dovecot.sieve contains just this:
redirect [EMAIL PROTECTED];
keep;


As stated before the e-mail is both forwarded and delivered locally just
fine.
But the sending account receives this error:
[EMAIL PROTECTED]: Command died with signal 6:
/opt/freeware/dovecot-20080102/libexec/dovecot/deliver


Also deliver.log shows some errors (shown below).

The “nfs_flush_file_handle_cache_dir” error makes me wonder - it seems like
it tries to delete the homedir of the account's maildir storage.

Also it seems to me like a minor error, since the delivery functions as it
should apart from the delivery error it returns and the core dump.

Regards, Mikkel


[EMAIL PROTECTED] tail -f /log/deliver.log | grep @euro123.dk
deliver([EMAIL PROTECTED]): Jan 07 22:20:02 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:02 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:02 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:02 Info:
msgid=[EMAIL PROTECTED]: forwarded to [EMAIL PROTECTED]
deliver([EMAIL PROTECTED]): Jan 07 22:20:03 Info:
msgid=[EMAIL PROTECTED]: saved mail to INBOX
deliver([EMAIL PROTECTED]): Jan 07 22:20:03 Panic: file index-mail.c: line
1042 (index_mail_close): assertion failed: (!mail-data.destroying_stream)
deliver([EMAIL PROTECTED]): Jan 07 22:20:03 Error: Raw backtrace: 0x855c0
- 0x4decc - 0x4e360 - 0x55204 - 0x2266c - 0x2040c
deliver([EMAIL PROTECTED]): Jan 07 22:20:04 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:05 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:06 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:07 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:07 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:07 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:07 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:07 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL PROTECTED]): Jan 07 22:20:07 Error:
nfs_flush_file_handle_cache_dir: rmdir(/nfs/euro123.dk/mikkel) failed:
Invalid argument
deliver([EMAIL

[Dovecot] Compile error introduced in b13 (Solaris 10)

2008-01-03 Thread mikkel
Any news on the b13 compile issue on Solaris 10 (both Sparc and X86)?

Would be nice to know whether it's our own headache or something that's not
supposed to happen and thus expected to be fixed in the next release.

Just to be clear this didn't occur in any other release (I've compiled all
previous v1.1 betas on the same setup including b12).

Ref.:
http://dovecot.org/list/dovecot/2007-December/027702.html
http://dovecot.org/list/dovecot/2007-December/027704.html

Regards, Mikkel


Re: [Dovecot] Compile error introduced in b13 (Solaris 10)

2008-01-03 Thread mikkel
On Fri, January 4, 2008 1:56 am, [EMAIL PROTECTED] wrote:
 Any news on the b13 compile issue on Solaris 10 (both Sparc and X86)?


I found this looking through your commits:
http://hg.dovecot.org/dovecot/rev/d45c3058b91a

Guess that'll fix it, thanks.



[Dovecot] b13 Compile error on Solaris 10 (Sparc)

2007-12-31 Thread mikkel
All betas so far have compiled without problems on my setup, but something
breaks in b13.

This problem occurs with both make and gmake.

Below are outputs from make and gmake.

Regards, Mikkel



make  all-recursive
Making all in src
Making all in lib
make  all-am
if gcc -DHAVE_CONFIG_H -I. -I. -I../..-I/opt/pkgsrc/pkg/include/mysql 
-std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations
-Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -MT
queue.o -MD -MP -MF .deps/queue.Tpo -c -o queue.o queue.c; \
then mv -f .deps/queue.Tpo .deps/queue.Po; else rm -f
.deps/queue.Tpo; exit 1; fi
In file included from queue.c:5:
queue.h:13: error: redefinition of `struct queue'
queue.h: In function `queue_idx':
queue.h:37: error: structure has no member named `tail'
queue.h:37: error: structure has no member named `area_size'
queue.c: In function `queue_init':
queue.c:12: error: structure has no member named `arr'
queue.c:13: error: structure has no member named `area_size'
queue.c:14: error: structure has no member named `arr'
queue.c:14: error: structure has no member named `arr'
queue.c:15: error: structure has no member named `area_size'
queue.c: In function `queue_grow':
queue.c:31: error: structure has no member named `full'
queue.c:31: error: structure has no member named `head'
queue.c:31: error: structure has no member named `tail'
queue.c:33: error: structure has no member named `area_size'
queue.c:34: error: structure has no member named `arr'
queue.c:35: error: structure has no member named `area_size'
queue.c:36: error: structure has no member named `arr'
queue.c:36: error: structure has no member named `arr'
queue.c:37: error: structure has no member named `area_size'
queue.c:39: error: structure has no member named `area_size'
queue.c:39: error: structure has no member named `head'
queue.c:39: error: structure has no member named `area_size'
queue.c:39: error: structure has no member named `head'
queue.c:40: error: structure has no member named `arr'
queue.c:40: error: structure has no member named `arr'
queue.c:41: error: structure has no member named `area_size'
queue.c:42: error: structure has no member named `head'
queue.c:44: error: structure has no member named `arr'
queue.c:44: error: structure has no member named `arr'
queue.c:45: error: structure has no member named `head'
queue.c:46: error: structure has no member named `head'
queue.c:49: error: structure has no member named `head'
queue.c:49: error: structure has no member named `tail'
queue.c:50: error: structure has no member named `full'
queue.c: In function `queue_append':
queue.c:55: error: structure has no member named `full'
queue.c:57: error: structure has no member named `full'
queue.c:60: error: structure has no member named `arr'
queue.c:60: error: structure has no member named `head'
queue.c:61: error: structure has no member named `head'
queue.c:61: error: structure has no member named `head'
queue.c:61: error: structure has no member named `area_size'
queue.c:62: error: structure has no member named `full'
queue.c:62: error: structure has no member named `head'
queue.c:62: error: structure has no member named `tail'
queue.c: In function `queue_delete':
queue.c:71: error: structure has no member named `full'
queue.c:74: error: structure has no member named `tail'
queue.c:74: error: structure has no member named `tail'
queue.c:74: error: structure has no member named `area_size'
queue.c:79: error: structure has no member named `head'
queue.c:79: error: structure has no member named `head'
queue.c:79: error: structure has no member named `area_size'
queue.c:80: error: structure has no member named `area_size'
queue.c:85: error: structure has no member named `head'
queue.c:85: error: structure has no member named `tail'
queue.c:88: error: structure has no member named `arr'
queue.c:88: error: structure has no member named `tail'
queue.c:89: error: structure has no member named `arr'
queue.c:89: error: structure has no member named `tail'
queue.c:90: error: structure has no member named `tail'
queue.c:91: error: structure has no member named `tail'
queue.c:92: error: structure has no member named `tail'
queue.c:92: error: structure has no member named `area_size'
queue.c:96: error: structure has no member named `head'
queue.c:97: error: structure has no member named `arr'
queue.c:98: error: structure has no member named `arr'
queue.c:99: error: structure has no member named `head'
queue.c:100: error: structure has no member named `head'
queue.c:100: error: structure has no member named `head'
queue.c:100: error: structure has no member named `area_size'
queue.c:101: error: structure has no member named `area_size'
queue.c:103: error: structure has no member named `head'
queue.c:103: error: structure has no member named `area_size'
queue.c:103: error: structure has no member named `head'
queue.c:103: error: structure has no member named `tail'
queue.c: In function `queue_clear':
queue.c:113: error

Re: [Dovecot] 1.1b13 build in FreeBSD fails using 'make'; 'gmake' apparently required

2007-12-31 Thread mikkel
On Mon, December 31, 2007 1:16 pm, Gerard wrote:
 On Mon, 31 Dec 2007 12:56:16 +0100
 Riemer Palstra [EMAIL PROTECTED] wrote:


 On Mon, Dec 31, 2007 at 06:49:53AM -0500, Gerard wrote:

 Might I suggest the following:


 1) Contact the Dovecot port maintainer and inform him/her of your
 problem. They may not be aware of it, or it might be something else.

 But isn't the problem that the OP is *not* using the port? Assuming
 that e.g. make will always be GNU make, or that tar will always be GNU tar,
 isn't the best bet on most platforms.

 If he is not using the port, and I fail to see any good reason not
 to, then he is pretty much on his own. Putting a 'WARNING' notice up on
 the 'wiki' probably would not be a bad idea. It might serve to prevent
 other users from wasting this forum's time with this sort of posting.
 Expecting Timo to craft a special niche build is absurd. He
 already is working far too hard keeping Dovecot ahead of the pack.

 There is also the possibility that the OP needs some special function or
 handling of Dovecot that the port does not support. Informing the
 maintainer would be a good way of getting this sort of matter taken care
 of. The OP did not express any special needs, and my crystal ball is off
 for the holidays, so I am not able to fathom what they might or might not
 be.

 By the way, I would have thought that it was obvious that make != gmake.


Calm down please.
There are many good reasons for not using the port.

Dovecot is constantly releasing new betas that should be tested and it
might not be feasible to wait for them to go into ports.

On top of this, the HG development tree will never make it into ports.

Now the point of releasing the betas is that the users should test them.
That may be somewhat difficult if the users are told to bugger off when
they complain that the betas won't compile.

Compile errors are actually introduced into the development tree from time to
time (there appear to be other compile errors with this specific beta
release as well), so reporting them to the list makes perfect sense.


After all, the feedback comes from the users.

Regards (and a happy new year), Mikkel.



Re: [Dovecot] 'pipe' plugin - is anyone interested (or using it)?

2007-12-05 Thread mikkel
On Wed, December 5, 2007 3:29 pm, Nicolas Boullis wrote:
...
 A few months ago, I sent a message to this list asking if people would
 be interested by a pipe plugin
...
 Since then, I have received no feedback...
...
 So, is anyone using this plugin? Is anyone using it?


Haven't tried it, so I can't add any useful comments on its use.

But a pipe plugin is in my opinion a very important feature.

On my postfix/dovecot system I'm currently forwarding mails that need
piping to a qmail server, since that's the only way to do it easily on a
per-user basis.
It seems irrational that dovecot itself doesn't support this, considering
that dovecot is far ahead of qmail in every other respect.
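
For comparison, this is the kind of per-user pipe qmail gives you out of the
box and that a pipe plugin would replace (the script path here is only a
placeholder, not something from my actual setup):

  # ~/.qmail for the recipient (qmail syntax); every delivery for this
  # user is piped to the command following the "|"
  |/usr/local/bin/report-spam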

By the way, I use it for much the same tasks as you do: spam reporting,
but also for certain e-mails that need to be put into another database
besides their mailbox.

Regards, Mikkel




Re: [Dovecot] Dovecot write activity (mostly 1.1.x)

2007-11-05 Thread mikkel
On Sun, November 4, 2007 4:32 pm, Timo Sirainen wrote:

 I didn't know that mail_nfs_index=yes resulted in a forced chown.
 How come that's necessary with NFS but not on local disks?


 It's used to flush NFS attribute cache. Enabling it allows you to use
 multiple servers to access the same maildir at the same time while still
 having attribute cache enabled (v1.0 required actimeo=0). If you don't need
 this (and it's better if you don't), then just set the mail_nfs_* to no
 and it works faster.

 By the way I misinformed you about fsync_disable=yes.
 It was like that before i upgraded to v1.1, but v1.1 requires
 fsync_disable=no when mail_nfs_index=true so I had to disable it.

 So you use ZFS on the NFS server, but Dovecot is using NFS on client
 side? The fsyncs then just mean that the data is sent to NFS server, not a
 real fsync on ZFS.


Thanks a lot for the help - this changed a lot!
Disk writes fell to about a third of what they were before.
I guess the reason is that ZFS can now make use of its caching capabilities.


Deliver's activity is completely random: it's impossible to load-balance a
connection based on the e-mail recipient, since only the IP is known at the
load-balancing point.
Therefore I have fsync_disable=no for deliver.

It's easy to force the clients using imap/pop3 to the same server, since that
can be based on the IP alone.
Therefore I have fsync_disable=yes for imap/pop3.
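
Roughly, the split looks like this in the configuration (Dovecot 1.1 syntax;
this is only a sketch, and it assumes fsync_disable may be overridden inside
a protocol block and that the mail_nfs_* settings end up as Timo suggested):

  # imap/pop3: clients are pinned to one server by IP, so skip fsync
  fsync_disable = yes
  mail_nfs_storage = yes
  mail_nfs_index = no

  protocol lda {
    # deliver can run on any server, so keep fsync enabled here
    fsync_disable = no
  }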


This changed everything. Now there's a real performance gain upgrading
from 1.0.x to 1.1.x: about two to three times less disk activity overall,
for both reads and writes (reads were already improved).
That's pretty neat!


[Dovecot] Dovecot I/O scheduling (all versions)

2007-11-04 Thread mikkel
I have experienced this on all versions of Dovecot that I've used (1.0 -
1.1b6) using IMAP (it's difficult to test whether it's also there with POP3).

What I see is that if there is a peak in disk usage at the time of a
specific request, that request stalls.
The saturation of disk I/O is momentary, but when it's over (maybe after
one or two seconds) Dovecot still waits for its I/O operation instead of
continuing as soon as resources are available.

Now, I would expect Dovecot to only wait until resources are available
again and not stall for a long time (sometimes it stalls for around 10
seconds, and sometimes completely).

This could be OS specific (I’m using ZFS and Solaris 10 on Sparc) but it
may also be due to the way dovecot is programmed.
There’s always plenty of RAM and CPU available so that’s not what’s
causing troubles.

Is anyone else familiar with this issue?

Regards, Mikkel



[Dovecot] Dovecot write activity (mostly 1.1.x)

2007-11-04 Thread mikkel
I’m experiencing write activity that’s somewhat different from my previous
qmail/courier-imap/Maildir setup.
This is more pronounced in v1.1.x than in v1.0.x (I'm using Maildir).

Write activity is about half that of read activity when measuring throughput.
But when measuring operations it’s about 5-7 times as high (measured with
zpool iostat on ZFS).
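
For reference, this is the kind of measurement I mean (the pool name is just
an example):

  # per-pool operations and bandwidth, refreshed every 5 seconds
  zpool iostat tank 5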

I think this might be due to the many small updates to index and cache files.
Anyway, since writing is much more demanding on the disk than reading,
Dovecot ends up being slower (only a little, though) than my old
qmail/courier-imap/Maildir setup was. And the old setup didn't even benefit
from indexes like Dovecot does.
(What I mean by “slower” is that it can service fewer users before it
“hits the wall”.)

Of course there are also lots of benefits to using Dovecot. I'm just wondering
whether this is something that should be focused on in later versions
(maybe writes could be grouped together or something like that).
Dovecot is very cheap on the CPU side, so the only real limit in terms of
scalability is the storage.

Regards, Mikkel



[Dovecot] Sieve error messages (sieve 1.1.2, dovecot 1.1b6)

2007-11-04 Thread mikkel
Sorry for spamming the list today:)

I’m getting a lot of these errors in the deliver log:

deliver([recipient]): [date] Info: sieve runtime error: Fileinto: Generic
Error
deliver([recipient]): [date]:18 Error:
sieve_execute_bytecode([path-to-sieve]) failed
deliver([recipient]): [date] Info: msgid=[msgid] : save failed to
[foldername]: Quota exceeded


In this case the account is over the defined Dovecot quota limit. Dovecot
knows what to do but sieve seems to throw its standard error message.
Is there something wrong with Sieve or is it just that the error is
misleading?
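
For reference, a minimal rule of the kind that hits this path (the folder
name and header are just examples); when the account is over quota, the
fileinto save fails and deliver logs the errors shown above:

  require "fileinto";
  if header :contains "subject" "newsletter" {
    fileinto "Newsletters";
  }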

This has been present in all v1.1.x I think (some changes were made to the
error messages in b3 or b4, but I believe this didn't change).

Regards, Mikkel



Re: [Dovecot] Dovecot write activity (mostly 1.1.x)

2007-11-04 Thread mikkel
On Sun, November 4, 2007 1:51 pm, Timo Sirainen wrote:
 On Sun, 2007-11-04 at 13:02 +0100, [EMAIL PROTECTED] wrote:

 Write activity is about half that of read activity when measuring
 throughput. But when measuring operations it’s about 5-7 times as high
 (measured with
 zpool iostat on ZFS).

 Have you tried with fsync_disable=yes? ZFS's fsyncing performance
 apparently isn't all that great. I'm guessing this is also the reason for
 the I/O stalls in your other mail.


I'm using fsync_disable=yes already.
I know ZFS has issues. In my opinion it was never ready when it was
released, but it has such nice features that I'm trying to cope with its
peculiarities.
I've also disabled flush cache requests in ZFS, since leaving them enabled
made performance horrible.
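
For what it's worth, disabling the flushes was done with a kernel tunable
along these lines in /etc/system; the exact tunable name may depend on the
Solaris release, so treat this only as a sketch:

  * skip the SCSI cache-flush requests that ZFS issues when committing data
  set zfs:zfs_nocacheflush = 1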

If I’m the only one experiencing this then I guess I'll just have to
accept it as yet another ZFS curiosity :|

(Possibly this is also the answer to my other post regarding
stalled/delayed I/O)

- Mikkel



Re: [Dovecot] Dovecot write activity (mostly 1.1.x)

2007-11-04 Thread mikkel
On Sun, November 4, 2007 2:20 pm, Timo Sirainen wrote:
 Well, if you use only clients that don't really need indexes they could
 just slow things down. You could try disabling indexes to see how it works
 then (:INDEX=MEMORY to mail_location).

I tried that earlier and it did result in fewer writes.
It would be a nice-to-have option to be able to tell deliver, imapd and
popd individually whether they should update indexes and cache.
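
For reference, the setting Timo refers to looks roughly like this (the
maildir path is just an example):

  mail_location = maildir:~/Maildir:INDEX=MEMORY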


 Although v1.1 should be writing less to disk than v1.0, because v1.1
 doesn't anymore update dovecot.index as often. v1.1 versions before beta6
 had some problems updating cache file though. They could have updated it
 more often than necessary, and beta5 didn't really even update it at all.

Okay, then I really need to wait and see if things change (it'll probably
take a few days before the majority of e-mail accounts are re-indexed and
cached).

By the way, writes increased noticeably when I upgraded from 1.0 to 1.1.
On the other hand, reads decreased a lot as well. I guess the
fail-to-update-cache bug you mentioned could have a lot to do with that.


 (Possibly this is also the answer to my other post regarding
 stalled/delayed I/O)

 You could truss the hanging process to see what it's doing.


It's not an easy task since the delay is sometimes just a few (5-10)
seconds. And when there is a complete stall the client aborts before I can
find the process. But I'll give it a go.
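
The plan is roughly this on Solaris (the grep pattern and pid are just
placeholders):

  # find a user's imap process and attach to it to see which syscall it
  # is blocked in
  ps -ef | grep '[i]map'
  truss -p <pid>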

Thanks for your input.



Re: [Dovecot] Dovecot write activity (mostly 1.1.x)

2007-11-04 Thread mikkel
On Sun, November 4, 2007 2:54 pm, [EMAIL PROTECTED] wrote:
 You could truss the hanging process to see what it's doing.
 It's not an easy task since the delay is sometimes just a few (5-10)
 seconds. And when there is a complete stall the client aborts before I can
  find the process. But I'll give it a go.


I tried trussing a normally running process. Here's what I see all the time:

stat64([path]/Maildir/new, 0xFFBFF470) = 0
stat64([path]/Maildir/cur, 0xFFBFF4E0) = 0
stat64([path]/Maildir/new, 0xFFBFF2F0) = 0
stat64([path]/Maildir/cur, 0xFFBFF258) = 0
stat64([path]/Maildir/dovecot-uidlist, 0xFFBFF1D0) = 0
chown([path]/Maildir/dovecot-uidlist, 105, -1) = 0
stat64([path]/Maildir/dovecot-uidlist, 0xFFBFF2F0) = 0
stat64([path]/Maildir/dovecot.index.log, 0xFFBFDAE0) = 0
chown([path]/Maildir/dovecot.index.log, 105, -1) = 0
stat64([path]/Maildir/dovecot.index.log, 0xFFBFDBF0) = 0

What I notice is that stat64 is very often called twice in a row on the
same file.
Also I notice that chown() is always run before a file is accessed.

Regarding chown, it looks like dovecot either thinks that the file doesn't
have the ownership it should have, or it just calls chown anyway to be sure.

I'm not a C programmer, so I have no idea whether it's supposed to be like
that. But if it isn't, then perhaps it could explain the many writes
(constant chowning)?

What do you think?


Re: [Dovecot] sync with gmail?

2007-11-02 Thread mikkel
On Fri, November 2, 2007 2:16 am, Neal Becker wrote:
 Now that gmail added imap, I've been trying to see if I can sync
 gmail/imap with my local dovecot maildir.  I've been trying offlineimap.
 So far, I've
 created a loop resulting in a directory with size > 100 entries.

 Anyone try this and have a working setup?



Why not install a simple IMAP server on your local machine and then use
imapsync to transfer the data (or use Dovecot if it's already installed)?
All you need then is to tell your local IMAP server to use Maildir.

Imapsync is very easy to use and also very reliable (my own opinion).
Since it's Perl it's not as fast as it could be, but it should be just fine
for your needs.
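
A rough sketch of the imapsync invocation I have in mind (addresses and
passwords are placeholders; check imapsync's documentation for the exact
options in your version):

  imapsync --host1 imap.gmail.com --port1 993 --ssl1 \
           --user1 yourname@gmail.com --password1 'secret1' \
           --host2 localhost \
           --user2 you --password2 'secret2'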


- Mikkel


[Dovecot] Sieve 1.1.2 - special chars

2007-10-30 Thread mikkel
I'm unable to use local chars in vacation/auto replies with sieve.
.dovecot.sieve.err gives the same error as when there's a parse error.

Example of error:
line 25: string 'æ ø å ö ä ü é á.
':
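
For reference, a minimal script of the kind that triggers it (the reply text
is just an example containing local characters):

  require "vacation";
  vacation :days 7 :subject "Autosvar"
  "æ ø å ö ä ü é á.";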


My dovecot is v. 1.1b5 and sieve is v. 1.1.2.

This issue was also present with dovecot 1.0.x

Is there a way around this?

- Mikkel



Re: [Dovecot] Problem with bodystructure/partial fetch using 1.1*

2007-10-19 Thread mikkel
I somehow solved this by compiling with gmake instead of make.
I think it's related to libiconv and not to Dovecot.

Sorry for the confusion.

- Mikkel

On Wed, October 17, 2007 1:55 pm, [EMAIL PROTECTED] wrote:
 Hi there


 I'm using dovecot-1.1b2/b3 with deliver. All information is stored in an
 SQL database and no errors related to this issue are written to the logs
 (actually I get no errors at all).


 I've run into a problem; FETCH BODYSTRUCTURE is broken after upgrading
 to dovecot 1.1b2 (same issue with 1.1b3). BODYSTRUCTURE only returns
 information from the first section (with incorrect
 content-type/disposition) and discards everything else.

 This problem is somehow related to deliver;
 IMAP returns the correct BODYSTRUCTURE for e-mails received before the
 upgrade, but an incorrect one for those received after the upgrade. This tells
 me that IMAP is working properly, but somehow the MIME sections are being
 corrupted during mail delivery (if I fetch the complete body nothing seems
 to be wrong).

 Also if I fetch a section something is not right - instead of just the
 section I get part of the Content-type header as well (but only half the
 header).

 Obviously this makes various webmail clients go somewhat crazy while
 Outlook/Thunderbirds don’t mind (since they fetch the complete body and do
  the MIME parsing themselves).

 These are my compile options (GCC 3.4.3 on Solaris 10):
 CPPFLAGS=-I/opt/pkgsrc/pkg/include/mysql
 LDFLAGS=-L/opt/pkgsrc/pkg/lib/mysql ./configure
 --prefix=/opt/freeware/dovecot-1.1b2 --with-pop3d --with-deliver
 --with-mysql --with-prefetch-userdb --with-sql --with-gnu-ld --with-ssl=no
  --enable-header-install
 make; make install

 Everything else works flawlessly.


 Has anyone else experienced a problem like this?



 Best wishes, Mikkel



 Configuration:
 /opt/freeware/dovecot-1.1b3/sbin/dovecot -c /local/config/dovecot2.conf -n
  # 1.1.beta3: /local/config/dovecot2.conf
 Warning: fd limit 256 is lower than what Dovecot can use under full load
 (more than 768). Either grow the limit or change login_max_processes_count
 and max_mail_processes settings
 log_path: /local/log/dovecot.run
 info_log_path: /local/log/dovecot.run
 protocols: imap pop3
 ssl_disable: yes
 disable_plaintext_auth: no
 login_dir: /opt/freeware/dovecot-1.1b3/var/run/dovecot/login
 login_executable(default):
 /opt/freeware/dovecot-1.1b3/libexec/dovecot/imap-login
 login_executable(imap):
 /opt/freeware/dovecot-1.1b3/libexec/dovecot/imap-login
 login_executable(pop3):
 /opt/freeware/dovecot-1.1b3/libexec/dovecot/pop3-login
 login_process_per_connection: no
 first_valid_uid: 105
 first_valid_gid: 105
 mmap_disable: yes
 dotlock_use_excl: yes
 mail_nfs_storage: yes
 mail_nfs_index: yes
 mail_executable(default): /opt/freeware/dovecot-1.1b3/libexec/dovecot/imap
  mail_executable(imap): /opt/freeware/dovecot-1.1b3/libexec/dovecot/imap
 mail_executable(pop3): /opt/freeware/dovecot-1.1b3/libexec/dovecot/pop3
 mail_plugins(default): quota imap_quota trash
 mail_plugins(imap): quota imap_quota trash
 mail_plugins(pop3): quota
 mail_plugin_dir(default): /opt/freeware/dovecot-1.1b3/lib/dovecot/imap
 mail_plugin_dir(imap): /opt/freeware/dovecot-1.1b3/lib/dovecot/imap
 mail_plugin_dir(pop3): /opt/freeware/dovecot-1.1b3/lib/dovecot/pop3
 imap_client_workarounds(default): outlook-idle delay-newmail tb-extra-mailbox-sep
 imap_client_workarounds(imap): outlook-idle delay-newmail tb-extra-mailbox-sep
 imap_client_workarounds(pop3):
 pop3_client_workarounds(default):
 pop3_client_workarounds(imap):
 pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
 auth default:
 mechanisms: plain login digest-md5 cram-md5 ntlm rpa apop anonymous
 passdb:
 driver: sql
 args: /local/config/dovecot-sql2.conf
 userdb:
 driver: prefetch
 userdb:
 driver: sql
 args: /local/config/dovecot-sql2.conf
 socket:
 type: listen
 client:
 path: /var/spool/postfix/private/auth
 mode: 432
 user: postfix
 group: postfix
 master:
 path: /var/run/dovecot/auth-master
 mode: 384
 user: vmail
 plugin:
 quota: maildir
 quota_rule: *:storage=102400:messages=5000
 quota_rule2: Trash:storage=10M
 trash: /local/config/dovecot-trash.conf






[Dovecot] Problem with bodystructure/partial fetch using 1.1*

2007-10-17 Thread mikkel
Hi there

I'm using dovecot-1.1b2/b3 with deliver. All information is stored in an
SQL database and no errors related to this issue are written to the logs
(actually I get no errors at all).

I've run into a problem; FETCH BODYSTRUCTURE is broken after upgrading to
dovecot 1.1b2 (same issue with 1.1b3).
BODYSTRUCTURE only returns information from the first section (with
incorrect content-type/disposition) and discards everything else.

This problem is somehow related to deliver;
IMAP returns the correct BODYSTRUCTURE for e-mails received before the
upgrade, but an incorrect one for those received after the upgrade.
This tells me that IMAP is working properly, but somehow the MIME sections
are being corrupted during mail delivery (if I fetch the complete body
nothing seems to be wrong).

Also if I fetch a section something is not right - instead of just the
section I get part of the Content-type header as well (but only half the
header).

Obviously this makes various webmail clients go somewhat crazy while
Outlook/Thunderbirds don’t mind (since they fetch the complete body and do
the MIME parsing themselves).
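
For anyone wanting to reproduce this, the checks boil down to raw IMAP
commands along these lines against an affected mailbox (the message number
and part number are just examples):

  a1 SELECT INBOX
  a2 FETCH 1 (BODYSTRUCTURE)
  a3 FETCH 1 (BODY[2])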

These are my compile options (GCC 3.4.3 on Solaris 10):
CPPFLAGS=-I/opt/pkgsrc/pkg/include/mysql
LDFLAGS=-L/opt/pkgsrc/pkg/lib/mysql ./configure
--prefix=/opt/freeware/dovecot-1.1b2 --with-pop3d --with-deliver
--with-mysql --with-prefetch-userdb --with-sql --with-gnu-ld --with-ssl=no
--enable-header-install
make; make install

Everything else works flawlessly.

Has anyone else experienced a problem like this?


Best wishes, Mikkel


Configuration:
/opt/freeware/dovecot-1.1b3/sbin/dovecot -c /local/config/dovecot2.conf -n
# 1.1.beta3: /local/config/dovecot2.conf
Warning: fd limit 256 is lower than what Dovecot can use under full load
(more than 768). Either grow the limit or change login_max_processes_count
and max_mail_processes settings
log_path: /local/log/dovecot.run
info_log_path: /local/log/dovecot.run
protocols: imap pop3
ssl_disable: yes
disable_plaintext_auth: no
login_dir: /opt/freeware/dovecot-1.1b3/var/run/dovecot/login
login_executable(default):
/opt/freeware/dovecot-1.1b3/libexec/dovecot/imap-login
login_executable(imap):
/opt/freeware/dovecot-1.1b3/libexec/dovecot/imap-login
login_executable(pop3):
/opt/freeware/dovecot-1.1b3/libexec/dovecot/pop3-login
login_process_per_connection: no
first_valid_uid: 105
first_valid_gid: 105
mmap_disable: yes
dotlock_use_excl: yes
mail_nfs_storage: yes
mail_nfs_index: yes
mail_executable(default): /opt/freeware/dovecot-1.1b3/libexec/dovecot/imap
mail_executable(imap): /opt/freeware/dovecot-1.1b3/libexec/dovecot/imap
mail_executable(pop3): /opt/freeware/dovecot-1.1b3/libexec/dovecot/pop3
mail_plugins(default): quota imap_quota trash
mail_plugins(imap): quota imap_quota trash
mail_plugins(pop3): quota
mail_plugin_dir(default): /opt/freeware/dovecot-1.1b3/lib/dovecot/imap
mail_plugin_dir(imap): /opt/freeware/dovecot-1.1b3/lib/dovecot/imap
mail_plugin_dir(pop3): /opt/freeware/dovecot-1.1b3/lib/dovecot/pop3
imap_client_workarounds(default): outlook-idle delay-newmail
tb-extra-mailbox-sep
imap_client_workarounds(imap): outlook-idle delay-newmail
tb-extra-mailbox-sep
imap_client_workarounds(pop3):
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
auth default:
  mechanisms: plain login digest-md5 cram-md5 ntlm rpa apop anonymous
  passdb:
driver: sql
args: /local/config/dovecot-sql2.conf
  userdb:
driver: prefetch
  userdb:
driver: sql
args: /local/config/dovecot-sql2.conf
  socket:
type: listen
client:
  path: /var/spool/postfix/private/auth
  mode: 432
  user: postfix
  group: postfix
master:
  path: /var/run/dovecot/auth-master
  mode: 384
  user: vmail
plugin:
  quota: maildir
  quota_rule: *:storage=102400:messages=5000
  quota_rule2: Trash:storage=10M
  trash: /local/config/dovecot-trash.conf