Postfix and catch all

2007-03-05 Thread Paul van der Vlis
Hello,

When I want to use catch-all with Postfix without virtual hosts, there
is an option luser_relay, but luser_relay works only for the default
Postfix local delivery agent.
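(For reference, luser_relay is a main.cf setting, and per the Postfix documentation it only takes effect when the local(8) agent does the delivery and unknown recipients are not rejected up front. A sketch, with a placeholder mailbox name:

```
# main.cf (sketch -- "catchall" is a placeholder local user)
luser_relay = catchall
# luser_relay is only consulted if unknown local recipients are not
# rejected first, so the recipient check must be disabled:
local_recipient_maps =
```
)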

Is there another way to make a catch-all-mailbox without using virtual
hosts?

With regards,
Paul van der Vlis.


-- 
http://www.vandervlis.nl/


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Postfix and catch all

2007-03-05 Thread lartc
hi paul,

Using LDAP in Postfix, I set up a mail alias:

@domain.com

all mail is going to the user ...
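A minimal sketch of what such an LDAP-backed catch-all alias might look like (the hostname, base DN and attribute names here are placeholders, not Charles's actual setup):

```
# main.cf
virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf

# /etc/postfix/ldap-aliases.cf
server_host = ldap.example.com
search_base = ou=aliases,dc=example,dc=com
query_filter = (mailAlias=%s)
result_attribute = mailDrop
```

With an LDAP entry whose mailAlias is @domain.com, Postfix's virtual alias lookup falls back to the bare @domain form when the full address finds no match, which is what makes the catch-all work.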

cheers

charles

On Mon, 2007-03-05 at 10:13 +0100, Paul van der Vlis wrote:
 Hello,
 
 When I want to use catch-all with Postfix without virtual hosts, there
 is an option luser_relay, but luser_relay works only for the default
 Postfix local delivery agent.
 
 Is there another way to make a catch-all-mailbox without using virtual
 hosts?
 
 With regards,
 Paul van der Vlis.
 






subscribe

2007-03-05 Thread René Kockisch





Re: Postfix and catch all

2007-03-05 Thread Martin Kraus
On Mon, Mar 05, 2007 at 10:13:36AM +0100, Paul van der Vlis wrote:
 Hello,
 
 When I want to use catch-all with Postfix without virtual hosts, there
 is an option luser_relay, but luser_relay works only for the default
 Postfix local delivery agent.
 
 Is there another way to make a catch-all-mailbox without using virtual
 hosts?
 
 With regards,
 Paul van der Vlis.

postfix parameter always_bcc

would this work?
mk
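(Worth noting: always_bcc is a single main.cf line, but it copies every message passing through Postfix, inbound and outbound alike, not just mail for otherwise-unknown recipients, so it acts as an archive rather than a catch-all. A sketch with a placeholder address:

```
# main.cf (sketch -- archive@example.com is a placeholder)
always_bcc = archive@example.com
```
)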



Re: Slow lmtpd

2007-03-05 Thread Andre Nathan
On Sat, 2007-03-03 at 13:14 +0100, Simon Matter wrote:
 1) Try to put different kind of data (spool, meta databases) on
 independant storage (which means independant paths, disks, SAN
 controllers). For the small things like cyrus databases, putting them on
 separate local attached SCSI/SATA disks seems a very good idea. From what
 I know about AoE I think it will always suffer latency problems compared
 to FC or SCSI, simply because ethernet cards are not exactly designed for
 that kind of task.

I'll see what I can do about that, since it's a live system and it's
hard to take it down to add disks now... However, moving deliver.db to a
memory-based FS didn't help much, but it could be that there are other
I/O bottlenecks.

 Sorry it doesn't help you much, I just tried to share some of my experiences.

No problem, I really appreciate all the ideas.

Thanks,
Andre




Re: Shared Seen Flag in Shared Folders

2007-03-05 Thread Tim Bannister
Back in 2005, Senandung Mendonan wrote:
 I'm looking for a way to get shared \Seen flag feature in shared
 folders, instead of the default per-user. One past discussion thread
 ended with this:-
 
 http://marc.theaimsgroup.com/?l=info-cyrus&m=103784712122098&w=2
…
 Anyone got this feature hacked into cyrus (and can/willing to share)?

The University of Manchester now uses Cyrus IMAP for staff email as
well as for students, and we have a minority of users who share folders
with other users. At present we don't offer any public shared folders.

First up, then, I'd like to list ourselves as interested in using this
kind of feature. We aren't currently in a position to take a major part
in the development work required, but we would be interested to hear from
anyone who is working on or considering making this kind of change.

-- 
Tim Bannister
IT Services
The University of Manchester

w: http://www.manchester.ac.uk/itservices



Re: Slow lmtpd

2007-03-05 Thread Andre Nathan
On Sat, 2007-03-03 at 14:23 +1100, Rob Mueller wrote:
 %util - Percentage of CPU time during which I/O requests were issued to the 
 device (bandwidth utilization for the device). Device saturation occurs when 
 this value is close to 100%.

Can values way above 100% be trusted? If so, it's pretty bad (this is
from a situation where there are 200 lmtp processes, which is the
current limit I set):

avg-cpu:  %user   %nice %system %iowait   %idle
           2.53    0.00    5.26   89.98    2.23

Device:      rrqm/s wrqm/s   r/s    w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm    %util
etherd/e0.0    0.00   0.00  5.87 235.02  225.10 2513.77  112.55 1256.88    11.37     0.00  750.32 750.32 18074.51


avg-cpu:  %user   %nice %system %iowait   %idle
           1.72    0.00    3.73   94.45    0.10

Device:      rrqm/s wrqm/s   r/s    w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm    %util
etherd/e0.0    0.00   0.00  4.44 140.73  317.74 1125.00  158.87  562.50     9.94     0.00 2500.46 2500.46 36296.94

 The other thing of interest would be the load on the machine, and processes 
 in D state.

Load average tends to get really high. It starts increasing really fast
after the number of lmtpd processes reaches the limit set in cyrus.conf,
and can easily get to 150 or 200. One of the moments where the problem
becomes significant is when our MTAs run their deferred queues. We have
around a dozen MTAs, and when they all run their queues, there is an
increase in the number of connections to lmtpd. While these connections
are very quick on our other mailbox servers, on this one they take a lot
of time to finish, and most of the time I have to restart cyrus, because
it never reduces the number of processes again, and thus connections start
being refused. The differences between the two kinds of servers are:

- The ones that don't have the problem use local disks instead of AoE
- The ones that don't have the problem are limited to 2000 domains
(around 8000 accounts), while the one using the AoE storage serves 4000
domains (around 2 accounts).

Anyone running cyrus with that many accounts?

 ps auxw | grep -v ' S'

root  1743  0.0  0.0  0 0 ?DMar01   0:05
[xfssyncd]
root  3116  0.0  0.0  0 0 ?DMar01   0:01
[xfssyncd]
cyrus15593  0.0  0.3  36288 13660 ?D11:48   0:00 imapd
cyrus16360  0.0  0.3  37752 14360 ?D11:54   0:00 imapd
cyrus17161  0.0  0.3  36304 13648 ?D11:59   0:00 imapd
cyrus17182  0.0  0.0 120736  3268 ?D12:00   0:00 lmtpd
cyrus17891  0.0  0.0 120872  3108 ?D12:04   0:00 lmtpd
cyrus17897  0.0  0.0 120696  3312 ?D12:04   0:00 lmtpd
cyrus18265  0.0  0.0 120896  3540 ?D12:07   0:00 lmtpd
cyrus18302  0.0  0.0 120760  3432 ?D12:07   0:00 lmtpd
cyrus18336  0.0  0.0 120720  2684 ?D12:07   0:00 lmtpd
cyrus18441  0.0  0.0 120684  2944 ?D12:08   0:00 lmtpd
cyrus18590  0.0  0.0 120920  3156 ?D12:09   0:00 lmtpd
cyrus18591  0.0  0.0 120724  2584 ?D12:09   0:00 lmtpd
cyrus18592  0.0  0.0 121332  2796 ?D12:09   0:00 lmtpd
cyrus18612  0.0  0.0 120716  3224 ?D12:09   0:00 lmtpd
cyrus18613  0.0  0.0 120716  3140 ?D12:09   0:00 lmtpd
cyrus18632  0.0  0.0 120696  3072 ?D12:09   0:00 lmtpd
cyrus18641  0.0  0.0 120676  2864 ?D12:09   0:00 lmtpd
cyrus18643  0.0  0.0 120720  2696 ?D12:09   0:00 lmtpd
cyrus18656  0.0  0.0 120692  3340 ?D12:09   0:00 lmtpd
cyrus18657  0.0  0.0 120676  2996 ?D12:09   0:00 lmtpd
cyrus18658  0.0  0.0 120716  2804 ?D12:09   0:00 lmtpd
cyrus18669  0.0  0.0 120680  2812 ?D12:09   0:00 lmtpd
cyrus18671  0.0  0.0 120716  2712 ?D12:09   0:00 lmtpd
cyrus18939  0.0  0.0 120692  2732 ?D12:11   0:00 lmtpd
cyrus18941  0.0  0.0 120716  3148 ?D12:11   0:00 lmtpd
cyrus18942  0.0  0.0 120752  2924 ?D12:11   0:00 lmtpd
cyrus18944  0.0  0.0 120704  2612 ?D12:11   0:00 lmtpd
cyrus18947  0.0  0.0 120688  2676 ?D12:11   0:00 lmtpd
cyrus18948  0.0  0.0 120688  2336 ?D12:11   0:00 lmtpd
cyrus18950  0.0  0.0 120684  2920 ?D12:11   0:00 lmtpd
cyrus18951  0.0  0.0 124080  2764 ?D12:11   0:00 lmtpd
cyrus18978  0.0  0.0 120712  3304 ?D12:11   0:00 lmtpd
cyrus18979  0.0  0.0 120740  2872 ?D12:11   0:00 lmtpd
cyrus19014  0.0  0.0 120712  2656 ?D12:11   0:00 lmtpd
cyrus19016  0.0  0.0 120708  2880 ?D12:11   0:00 lmtpd
cyrus19089  0.0  0.0 120692  2596 ?D12:12   0:00 lmtpd
cyrus19123  0.0  0.3  36240 13540 ?D12:12   0:00 imapd
cyrus19153  0.0  0.0  38012  3076 ?   

reconstruct deletes messages

2007-03-05 Thread Joseph Brennan


We're running into cases where running reconstruct removes message
files, sometimes all of the messages in a folder, leaving only the
directory and the cyrus.cache, cyrus.header, cyrus.index files.

This makes no sense to me at all.  I thought the only purpose of
reconstruct is to rebuild the index.  Under what circumstances
would it unlink files?

Joseph Brennan
Lead Email Systems Engineer
Columbia University Information Technology





Re: Slow lmtpd

2007-03-05 Thread John Madden
On Mon, 2007-03-05 at 12:19 -0300, Andre Nathan wrote:
 On Sat, 2007-03-03 at 14:23 +1100, Rob Mueller wrote:
  %util - Percentage of CPU time during which I/O requests were issued to the 
  device (bandwidth utilization for the device). Device saturation occurs 
  when 
  this value is close to 100%.
 
 Can values way above 100% be trusted? If so, it's pretty bad (this is
 from a situation where there are 200 lmtp processes, which is the
 current limit I set):

No way -- set your lmtp processes to like 5 and configure your
concurrency in your MTA to the same value (or n-1).  There's no way your
disk system (or any other) is going to be able to handle 200 lmtpd's
writing simultaneously.

Even with our SAN, I only allow *3* lmtpd's to write concurrently.

John



-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
[EMAIL PROTECTED]




Re: DBERROR: skiplist recovery mailboxes.db 0090 - suddenly all is failing!

2007-03-05 Thread Gregor Wenkelewsky
On Mon, 19 Feb 2007 22:26:10 +0100, Andrew Morgan wrote:

 On Mon, 19 Feb 2007, Gregor Wenkelewsky wrote:

 On Thu, 15 Feb 2007 18:15:53 +0100, Andrew Morgan wrote:

 Cyrus has been installed here just a few weeks ago, and after some hard
 days it was working smoothly and very well. Until suddenly, sadly today
 it started to fail completely with this error message in mail.warn,
 mail.error and syslog:

 cyrus/imap[..]: DBERROR: skiplist recovery /var/lib/cyrus/mailboxes.db: 
 0090 should be ADD or DELETE
 cyrus/imap[..]: DBERROR: opening /var/lib/cyrus/mailboxes.db: cyrusdb error

 You'll need to fix the corruption of the mailboxes.db file.  It is a
 skiplist format file in your case, so do a google search for
 skiplist.py.  You'll find a python utility that can do some better
 recovery than the cyrus tools.  The example is for cyrus seen-state files,
 but the same should work on the mailboxes.db as well.

 Fine, I succeeded with that! I did check the auto backup files before too,
 but they were all identical; it was probably too late. Then I used the
 skiplist.py from here: http://oss.netfarm.it/python-cyrus.php

 python ~/skiplist.py mailboxes.db mailboxes.txt
 mv mailboxes.db mailboxes.err
 cvt_cyrusdb /var/lib/cyrus/mailboxes.txt flat /var/lib/cyrus/mailboxes.db skiplist
 chown cyrus mailboxes.db

 Fixed!

 Okay then, now it works, but how often will an error like this occur, and
 can I do something to prevent it? First I thought of 0090 as some sort of
 error code, and I found only two error incidents with /line/ 0090 in
 Google... ;) ...but it is much more common than that.

 The 0090 is a skiplist offset/index within the file, so the error message
 could contain any number depending on where the corruption happened.

 Can this be related to shutting down and rebooting the system? Could be a
 coincidence of course, but just after rebooting the error was there.

 Possibly, if Cyrus was stopped (kill -9?) in the middle of a skiplist
 operation.

I don't really know about that. Here is an excerpt from the log during
another controlled shutdown and reboot; of course I had to make sure that
my mailboxes.db error would not occur on every reboot. (It did not occur
again.) These are the last lines, with no sign of a kill -9 signal:

Feb 28 15:20:05 Server cyrus/master[3869]: exiting on SIGTERM/SIGINT
Feb 28 15:20:13 Server postfix/master[4103]: terminating on signal 15
Feb 28 15:20:15 Server exiting on signal 15

When the error happened, a squatter run had completed about half an
hour before, and ctl_cyrusdb had checkpointed the cyrus databases exactly
4 mins 27 secs before. And then, the last lines were:

Feb 15 08:10:27 Server cyrus/master[3795]: exiting on SIGTERM/SIGINT
Feb 15 08:10:35 Server postfix/master[4104]: terminating on signal 15

Server exiting is missing!?!??!

 Can it be related to Squatter? By default, Squatter was not set, but some
 days before the error I set Squatter to an hourly nice run. Now I turned
 it off again.

 I don't think squatter would have any relation, but I'm not running
 squatter here myself.

As far as I understand, squatter is only necessary if the IMAP function
to search in messages is being used. But then it helps to speed up the
search a lot. I guess we don't need squatter here either.

 You should also setup a cronjob to dump the mailboxes.db file to plaintext
 periodically (so it can be backed up).  Something like this works here:

 58 * * * * cyrus /usr/local/cyrus/bin/ctl_mboxlist -d > /var/spool/cyrus/config/mailboxes.db.dump

 Yes, I'll do that, though it's more like keeping the plaster ready instead
 of preventing the injury.

 This has to be written to /etc/crontab like Squatter, correct? How often
 should it be running? Maybe it's only necessary when new IMAP users and/or
 folders have been created?

 Yes, that command above is exactly what I have in my crontab file.  I
 can't remember why I have it run at 58 minutes after the hour.  :)

Actually I erred: squatter runs are defined in /etc/cyrus.conf.
But anyway, that is less important.
I'll put the cyrus dump into crontab.
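(For completeness, a dump produced this way can later be loaded back with ctl_mboxlist -u; a sketch, assuming the same paths as in the crontab line above:

```
# dump mailboxes.db to plaintext (as in the crontab line above)
/usr/local/cyrus/bin/ctl_mboxlist -d > /var/spool/cyrus/config/mailboxes.db.dump

# later, to rebuild mailboxes.db from that dump (with cyrus stopped):
/usr/local/cyrus/bin/ctl_mboxlist -u < /var/spool/cyrus/config/mailboxes.db.dump
```
)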

 I feel queasy with an error that has no apparent reason. I wanted to
 build a system that can run without administration for months and, maybe,
 would sustain even a rare power failure. But there was no power failure
 and no sign of a disc error either... :-(

 We've been running Cyrus for a couple years now with skiplist for
 mailboxes.db.  So far we've never had a single corruption of mailboxes.db.
 Very rarely we'll get a corrupted username.seen file, which can be fixed
 using skiplist.py.

How do you recognize a corruption? I think it would be useful to have
an automated e-mail sent as soon as some error occurs, so that
I can get to the system and fix it.
Last time Cyrus just kept trying and failing to open the db endlessly,
thereby writing tons of messages to the log files until stopped. Hence
the malfunction would not be obvious if no one wants to use e-mail
during a few days (that is likely here) and no one 

cyrus cuts away the realm on the admin user

2007-03-05 Thread Marten Lehmann

Hello,

I want to authenticate the admin user against ldap as all other users in 
our setup. Our admin user is something like [EMAIL PROTECTED] whereby 
server is set as defaultdomain in imapd.conf.


When I login with a usual account it looks like this:

Mar  5 22:09:58 vmx saslauthd[27772]: do_auth : auth failure: 
[user=test] [service=imap] [realm=test.com] [mech=ldap] [reason=Unknown]


But when I'm using the admin account (which I need to do with cyradm), 
then the realm disappears, no matter if I'm using [EMAIL PROTECTED] as the 
login or just admin:


Mar  5 22:09:43 vmx saslauthd[27771]: do_auth : auth failure: 
[user=admin] [service=imap] [realm=] [mech=ldap] [reason=Unknown]


But without the realm the verification against ldap fails. How can I 
tell cyrus to pass the realm?


Regards
Marten



Re: Slow lmtpd

2007-03-05 Thread Rob Mueller



Can values way above 100% be trusted? If so, it's pretty bad (this is
from a situation where there are 200 lmtp processes, which is the
current limit I set):


I've never seen over 100%, and it doesn't seem to make sense, so I'm 
guessing it's a bogus value.



avg-cpu:  %user   %nice %system %iowait   %idle
           2.53    0.00    5.26   89.98    2.23


However this shows that the system is mainly waiting on IO as we expected.


Device:      rrqm/s wrqm/s   r/s    w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm    %util
etherd/e0.0    0.00   0.00  5.87 235.02  225.10 2513.77  112.55 1256.88    11.37     0.00  750.32 750.32 18074.51


Ugg, if you line those up, await = 750.32

await - The  average  time  (in  milliseconds)  for I/O requests issued to 
the device to be served. This includes the time spent by the requests in 
queue and the time spent servicing them.


So it's taking 0.75 seconds on average to service an IO request, that's 
really bad.
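As a rough cross-check, Little's law (requests in flight = arrival rate × time in system) ties these iostat columns together; a small sketch using the figures quoted above:

```python
# Little's law: average requests in flight = arrival rate * avg time in system.
# Figures taken from the iostat output quoted above.
r_per_s = 5.87        # read requests per second (r/s)
w_per_s = 235.02      # write requests per second (w/s)
await_ms = 750.32     # average time per request, in milliseconds (await)

in_flight = (r_per_s + w_per_s) * (await_ms / 1000.0)
print(round(in_flight, 1))  # about 180 requests queued or in service
```

Roughly 180 outstanding requests on average, which lines up with the ~200 lmtpd processes stuck in D state reported earlier in the thread.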



Load average tends to get really high. It starts increasing really fast
after the number of lmtpd processes reaches the limit set in cyrus.conf,
and can easily get to 150 or 200. One of the moments where the problem


Makes sense. There are 200 lmtpd processes waiting on IO, and in Linux at 
least, the load average is basically calculated as the number of processes 
not in sleep state.


Really you never want that many lmtpd processes; if they're all in use, it's 
clear you've got an IO problem. Limiting it to 10 or so is probably a 
reasonable number to avoid complete IO saturation and IO service delays.
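(Capping lmtpd this way is done with the maxchild option in cyrus.conf; a sketch, where the socket path is a placeholder that varies by installation:

```
# /etc/cyrus.conf -- SERVICES section sketch; maxchild caps concurrent lmtpds
SERVICES {
  lmtpunix  cmd="lmtpd"  listen="/var/imap/socket/lmtp"  prefork=1  maxchild=10
}
```
)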



- The ones that don't have the problem use local disks instead of AoE
- The ones that don't have the problem are limited to 2000 domains
(around 8000 accounts), while the one using the AoE storage serves 4000
domains (around 2 accounts).

Anyone running cyrus with that many accounts?


Yes, no problem, though using local disks.

I think the problem is probably the latency that AoE introduces into the 
disk path. A couple of questions:


1. How many disks in the AoE array?
2. Are they all one RAID array, or multiple RAID arrays? What type?
3. Are they one volume, or multiple volumes?

Because of the latency for system -> drive IO, the thing you want to try 
and do is allow the OS to send more outstanding requests in parallel. The 
problem is I don't know where in the FS -> RAID -> AoE path the 
serialising bits are, so I'm not sure what the best things to do to increase 
parallelism are, but the usual things to try are more RAID arrays with fewer 
drives per array, and more volumes per RAID array. This gives more places 
for parallelism to occur, assuming there's not something holding some 
internal lock somewhere.


Some of our machines have 4 RAID arrays divided up into 40 separate 
filesystems/volumes.


Rob




Re: Slow lmtpd

2007-03-05 Thread Andre Nathan
On Tue, 2007-03-06 at 09:13 +1100, Rob Mueller wrote:
 I've never seen over 100%, and it doesn't seem to make sense, so I'm 
 guessing it's a bogus value.

Yeah, I talked to the Coraid guys and they told me iostat reports
incorrect values for AoE.

  avg-cpu:  %user   %nice %system %iowait   %idle
             2.53    0.00    5.26   89.98    2.23
 
 However this shows that the system is mainly waiting on IO as we expected.

Yep, although I'd say it's a bit more than expected...

 Really you never want that many lmtpd processes; if they're all in use, it's 
 clear you've got an IO problem. Limiting it to 10 or so is probably a 
 reasonable number to avoid complete IO saturation and IO service delays.

The problem in limiting them to a lower value is that once the MTAs
start running their queues, their connections will start being refused,
since all lmtpd's will be in use, and the messages will go back to the
queue.

Maybe there is a number that will allow the system to react quickly
enough to avoid new connections being refused, but I tried with 50 and
it behaved as described above.

I had to reduce the default value of 
lmtp_destination_concurrency_limit in postfix to 10 (the default is
20), and change the value of queue_run_delay on some servers to avoid
having them all run their queues at the same time, because that ends up
causing the lmtpd process limit to be reached.
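The two Postfix knobs mentioned here live in main.cf; a sketch with the values described (the queue_run_delay value is illustrative and would be staggered per server):

```
# main.cf sketch
lmtp_destination_concurrency_limit = 10
# stagger this across MTAs so their deferred queues don't all run at once
queue_run_delay = 600s
```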

 1. How many disks in the AoE array?
 2. Are they all one RAID array, or multiple RAID arrays? What type?
 3. Are they one volume, or multiple volumes?

There is only one RAID-10 array using 8 disks. The whole system is
installed on this array, although directories like /var/lib/imap
and /var/spool/imap are mounted on different LVM volumes.

 Because of the latency for system -> drive IO, the thing you want to try 
 and do is allow the OS to send more outstanding requests in parallel. The 
 problem is I don't know where in the FS -> RAID -> AoE path the 
 serialising bits are, so I'm not sure what the best things to do to increase 
 parallelism are, but the usual things to try are more RAID arrays with fewer 
 drives per array, and more volumes per RAID array. This gives more places 
 for parallelism to occur, assuming there's not something holding some 
 internal lock somewhere.

The Coraid people suggested a larger array, using 14 disks to
increase the throughput through the use of more striping elements. I can
try this for the next servers to go into production, but changing the
current one will be harder.

Thanks a lot,
Andre




Re: Slow lmtpd

2007-03-05 Thread Rob Mueller

I had to reduce the default value of
lmtp_destination_concurrency_limit in postfix to 10 (the default is
20), and change the value of queue_run_delay on some servers to avoid
having them all run their queues at the same time, because that ends up
causing the lmtpd process limit to be reached.


Yep, there's obviously a two-sided limit here.

Too few lmtpds and postfix won't be able to deliver incoming mail fast 
enough, and thus the mail queue on the postfix side will build up.


Too many lmtpds and you'll IO overload the backends if you have to flush a 
large mail queue from the postfix side.


So it sounds like you had too many there, so you want to lower the 
destination concurrency limit, but don't make it so low that postfix can't 
deliver fast enough.


I guess the question then is, in normal operating conditions when you're 
not flushing a postfix queue:

1. Is the cyrus server overloaded?
2. Does the postfix queue build up at all, or is delivering to lmtp fast 
enough?


If cyrus isn't overloaded, and postfix is delivering, and the problem only 
occurs when you're flushing a built up mail queue, then changing 
lmtp_destination_concurrency_limit to limit your lmtp connections is what 
you want.



The Coraid people suggested me a larger array, using 14 disks to
increase the throughput through the use of more striping elements. I can
try this for the next servers to go into production, but changing the
current one will be harder.


Sure that will help, the question is how much...

Rob

