Re: Recommendations for 15000 Cyrus Mailboxes (Thanks)

2007-04-11 Thread lartc
Hi Nestor,

Would love to see the imapd.conf and cyrus.conf if you would be so kind as
to share them ...


Cheers

Charles


On Wed, 2007-04-11 at 01:15 -0500, Nestor A. Diaz wrote:
 Thank you to all of you who gave me those precious tips. I am going to put
 them into practice, and I will let you know when the system is finished
 in order to share my experience with you.
 
 Regards.




Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Recommendations for 15000 Cyrus Mailboxes

2007-04-11 Thread Greg A. Woods
At Tue, 10 Apr 2007 06:56:43 -0500, Nestor A. Diaz wrote:
Subject: Recommendations for 15000 Cyrus Mailboxes
 
 These are the plans: (comments on every number will be appreciated)
 
1. Linux LVM over a 600 GB RAID 10 ( 4 x 300 GB)

I would only ever be able to recommend NetBSD or FreeBSD, and I would
strongly recommend some form of external RAID controller, either SCSI or
Fibre Channel attached.

2. Which filesystem seems to be better? ext3? xfs? reiserfs?

FFS, or maybe FFSv2  :-)

3. Which options to format the filesystem? (according to the chosen
   filesystem)

2K fragments on FFS would be OK.
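
For what it's worth, on the BSDs the fragment size is chosen at newfs time.
A minimal sketch, assuming a 16K block size and a NetBSD-style raw device
name (adjust both for your system; see newfs(8)):

  # 16K blocks with 2K fragments on an FFS partition
  newfs -b 16384 -f 2048 /dev/rwd0e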

4. Which pop3 / imap proxy to use ?

Proxy?  What for?

5. Single instance or multiple instances of Cyrus? Bear in mind
   that there should be the option to recover a mailbox or some mail
   of a mailbox without having to shut down the whole Cyrus system.

Single instance for Cyrus, but with a separate MX host that can be
(quickly and easily) replicated and which does delivery via LMTP/TCP.
That's far too small a system to justify the complexity of having to
manage multiple systems.

Recovery of an individual mailbox can usually be done safely without
shutdown I think, though it might be good to be able to pause delivery
for that specific mailbox, and of course make sure the user isn't signed
on to access it either.  I'm not sure how to pause delivery for an
individual mailbox with Postfix, but one idea would be to simply have
the MX host return a temporary SMTP error (e.g. 451), and make sure
there's nothing for that mailbox in the incoming queue either.
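
A minimal sketch of that 451 trick with Postfix, using the standard
check_recipient_access restriction (the map path, the address and the
wording are placeholders):

  # main.cf (fragment)
  smtpd_recipient_restrictions =
      check_recipient_access hash:/etc/postfix/paused_recipients,
      permit_mynetworks,
      reject_unauth_destination

  # /etc/postfix/paused_recipients (run postmap on it after editing)
  someuser@example.com    451 Mailbox under maintenance, please try again later

Remove the entry and run postmap again once the restore is done.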

6. Best way to perform backups ? LVM snapshots ? shutting down some
   cyrus partitions ? RAID10 hot swap ?

If you use FFSv2 snapshots then you'd probably be OK doing an rsync of
the snapsot to another (preferably off-site) backup host.  Perhaps you
would want to use rdiff-backup, though a plain rsync should be
sufficient for mailboxes.
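
A hedged example of such an rsync (paths and hostname are placeholders;
-H matters because Cyrus can hard-link identical message files):

  # mirror a mounted snapshot to an off-site backup host
  rsync -aH --delete /snapshot/imap/ backup.example.org:/backups/imap-mirror/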

-- 
Greg A. Woods

H:+1 416 218-0098 W:+1 416 489-5852 x122 VE3TCP RoboHack [EMAIL PROTECTED]
Planix, Inc. [EMAIL PROTECTED]   Secrets of the Weird [EMAIL PROTECTED]

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


huge mail hangs lmtpd

2007-04-11 Thread Michael Menge

Hi,

I again had problems with a huge mail and lmtpd.  I believe this is
caused by the sieve regex filtering of the huge mail.
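
(For context, the kind of rule involved is a sieve regex test; the one below
is only a hypothetical illustration, not the actual filter:)

  require ["regex", "fileinto"];
  # file messages whose subject carries a list tag
  if header :regex "Subject" "^\\[list-[0-9]+\\]" {
      fileinto "INBOX.lists";
  }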


The lmtpd uses about twice the size of the email in memory and all the CPU time
it can get.  Then Postfix gets a timeout, but the lmtpd keeps running
and does not finish its work.  At the next delivery attempt from Postfix a new
lmtpd grows huge and takes as much CPU time as it can get, while the first
lmtpd is still running, and so on.


We run Cyrus 2.3.8 and Postfix 2.2.9 on a SLES10 system (pcre-6.4).
I tried the pcre patch from FastMail to solve this problem, but delivering the
same mail to the patched Cyrus still showed the same behavior.  Has
anyone else had this problem and had success with the patch from
FastMail?


We used the same regex with procmail and had no problems like this.
I would like to help solve this problem.

Regards

 Michael Menge


M.Menge Tel.: (49) 7071/29-70316
Universitaet Tuebingen  Fax.: (49) 7071/29-5912
Zentrum fuer Datenverarbeitung  mail:  
[EMAIL PROTECTED]

Waechterstrasse 76
72074 Tuebingen


smime.p7s
Description: S/MIME cryptographic signature

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html

Re: Recommendations for 15000 Cyrus Mailboxes

2007-04-11 Thread Bron Gondwana
On Tue, Apr 10, 2007 at 04:58:23PM +0200, Timo Schoeler wrote:
 On Tue, 10 Apr 2007 06:56:43 -0500
 1. Linux LVM over a 600 GB RAID 10 ( 4 x 300 GB)
 2. Which filesystem seems to be better? ext3? xfs? reiserfs?
 
 NEVER use XFS on GNU/Linux. (C)XFS is a brilliant FS on SGI's IRIX
 machines; I never lost even a single file in more than ten years.
 
 On GNU/Linux the implementation totally sucks. I'll stop my rant on
 GNU/Linux now ;)
 
 My guess: ext3. ReiserFS has some very annoying weaknesses that may
 affect you.

We had the opposite experience.  Turn off tails on Reiserfs and you'll
still get better storage rates than ext3, and the difference in heavily
loaded performance is amazing.  We have machines that have been humming
along just fine for months with significantly more users on them than
they had with the abortive ext3 build.

We also apply one patch to reiserfs.  It's a one-liner, using
generic_write rather than the Hans Reiser special (5% faster, but can
deadlock under heavy load).

We were in the process of working with Namesys people to get that one
resolved back into the kernel when priorities got a little refocussed
for them.

Also, split meta is really valuable, with much faster disks for the meta
partition.  We have data on giant SATA RAID5 arrays (+ replication,
+ backups) and meta on 10kRPM SATA in RAID1.
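
In Cyrus 2.3 terms that split is done with the metapartition options in
imapd.conf; a sketch with placeholder paths:

  # big, slow SATA RAID5 for message data
  partition-default: /var/spool/imap
  # small, fast RAID1 for the hot metadata files
  metapartition-default: /var/imapmeta
  metapartition_files: header index cache expunge squat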

Here's 300 seconds worth of IO on one of our servers:

Device:   rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         0.05   11.17   5.35   5.95    55.94   136.92    27.97    68.46     17.07      0.00    0.42   0.22   0.24
sdb         0.01  128.52  43.72  50.74   687.77  1434.09   343.88   717.05     22.46      2.41   25.54   2.86  27.02
sdc         0.01  156.61  83.14  73.47  1219.23  1840.68   609.61   920.34     19.54      3.91   24.98   2.47  38.64
sdd         0.00   52.79   7.49  14.55   100.58   538.95    50.29   269.48     29.01      0.58   26.26   5.52  12.17
sde         4.55   52.16  17.98  16.58   218.33   549.94   109.17   274.97     22.24      1.36   39.47   5.11  17.67
sdf         0.33  255.53  99.87  36.73  3669.49  2334.67  1834.74  1167.34     43.95      1.62   11.85   2.76  37.64
sdg         0.00   60.41  31.85  31.18   552.26   732.64   276.13   366.32     20.38      1.63   25.92   4.40  27.76
sdh         2.28   51.95  23.36  11.57  1662.08   508.31   831.04   254.16     62.14      0.47   13.58   7.84  27.38
sdi         2.12   46.52  17.52  13.10  1457.10   476.69   728.55   238.34     63.14      0.45   14.68   6.44  19.71

sda is the system drive.  The pairs are:

meta:   sdb  sdc  |  sdf  sdg
data:   sdd  sde  |  sdh  sdi

Each group of four is an external drive unit with 4 fast small
drives in RAID1 and 8 big slow drives in RAID5.

Within the drives the layout is:
  [*-*-] [--*-]  [*-v-v] [-v*v-]

(bigger drives in the second unit)

Where 'v' is a separate low-IO data storage partition that's not
used for IMAP, '*' is a master and '-' is a replica.

As you can imagine, the stats are a bit random there, but this is
a pretty typical low-load time data set.  It's only really daytime
in Europe and parts of Asia at the moment, and we don't have as
many users there as in the US.
 
 Best: ZFS on Solaris ;)

Have they fixed the mmap problems yet?  Otherwise, yeah - it looks
pretty funky.  I like the concepts.

I'm also interested in reiser4 if/when it stabilises.  It seems to
have been designed with our workloads specifically in mind!  There's
also dualfs which was mentioned on the lkml recently that I'd love to
play with if it's ever ported forward to the 2.6 series.

 3. Which options to format the filesystem? (according to the chosen
    filesystem)

No options.  We mount:

rw,noatime,nodiratime,notail,data=journal

I'm not sure that notail is needed with more recent kernels, because
there were patches that supposedly fixed the issue with that, but why
mess with what works!
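
For reference, that corresponds to an fstab line roughly like this (device
and mount point are placeholders):

  /dev/sdb1  /var/spool/imap  reiserfs  rw,noatime,nodiratime,notail,data=journal  0 0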

 4. Which pop3 / imap proxy to use ?

nginx.  Without a doubt.  Not only is it amazingly blindingly efficient
with epoll (and probably kqueue if you went FreeBSD), but it has a very
responsive and active author.  Don't read the code, though; it's very
well written and tidy, but it will break your brain.  Here's someone who
ENJOYS writing state machines in C.
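
A minimal sketch of the nginx mail-proxy configuration, assuming a build with
the mail module and an auth_http service of your own (the URL is a placeholder):

  mail {
      # nginx asks this HTTP service which backend to proxy each login to
      auth_http  127.0.0.1:8080/auth;

      server {
          listen    110;
          protocol  pop3;
      }
      server {
          listen    143;
          protocol  imap;
      }
  }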

 5. Single instance or multiple instances of Cyrus? Bear in mind that
    there should be the option to recover a mailbox or some mail of a
    mailbox without having to shut down the whole Cyrus system.

I like small.  It keeps the mailboxes.db small, and hence easier to scan
in a hurry.  If nothing else, it will improve IMAP LIST performance.
That said, all users who share mailboxes will need to be in the same
instance.

 6. Best way to perform backups ? LVM snapshots ? shutting down some
cyrus partitions ? RAID10 hot swap ?

I'm in the middle of rewriting ours.  It used to just be files, which was
really easy because they never change(tm).  It turns out not to be strictly
true if someone deletes 

limiting unsuccessful login attempts?

2007-04-11 Thread Per olof Ljungmark

Cyrus 2.2.12
saslauthd with OpenLDAP 2.3 directory
FreeBSD 5.5

Does anyone know a good way to limit the number of unsuccessful login 
attempts?


Thanks,

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: limiting unsuccessful login attempts?

2007-04-11 Thread Per olof Ljungmark

Dmitriy Kirhlarov wrote:

On Wed, Apr 11, 2007 at 02:15:52PM +0200, Per olof Ljungmark wrote:

Cyrus 2.2.12
saslauthd with OpenLDAP 2.3 directory
FreeBSD 5.5

Does anyone know a good way to limit the number of unsuccessful login attempts?


slapo-ppolicy(5)
pwdMaxFailure
?


Yes, looks like that would do it, thanks!
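
A hedged sketch of what that looks like with the ppolicy overlay (DNs, limits
and the moduleload line depend on your build; the values below are examples):

  # slapd.conf
  moduleload ppolicy.la
  overlay ppolicy
  ppolicy_default "cn=default,ou=policies,dc=example,dc=com"

  # password policy entry (LDIF)
  dn: cn=default,ou=policies,dc=example,dc=com
  objectClass: person
  objectClass: pwdPolicy
  cn: default
  sn: default
  pwdAttribute: userPassword
  pwdMaxFailure: 5
  pwdLockout: TRUE
  pwdLockoutDuration: 900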


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: limiting unsuccessful login attempts?

2007-04-11 Thread Dmitriy Kirhlarov
On Wed, Apr 11, 2007 at 04:39:41PM +0200, Per olof Ljungmark wrote:
 Dmitriy Kirhlarov wrote:
 On Wed, Apr 11, 2007 at 02:15:52PM +0200, Per olof Ljungmark wrote:
 Cyrus 2.2.12
 saslauthd with OpenLDAP 2.3 directory
 FreeBSD 5.5
 
 Does anyone know a good way to limit the number of unsuccessful login 
 attempts?
 slapo-ppolicy(5)
 pwdMaxFailure
 ?
 
 Yes, looks like that would do it, thanks!

Just keep in mind that caching for saslauthd must be properly
configured.
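
Roughly, that means paying attention to saslauthd's credential-cache flags;
the values below are only illustrative (see saslauthd(8)):

  # -c enables the cache, -t the entry timeout in seconds, -s the cache size in kB
  saslauthd -a ldap -c -t 600 -s 256

Bear in mind that logins answered from the cache never reach LDAP at all,
which is presumably the interaction being pointed at here.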

WBR.
Dmitriy

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Recommendations for 15000 Cyrus Mailboxes

2007-04-11 Thread Rudy Gevaert

Nestor A. Diaz wrote:


These are the plans: (comments on every number will be appreciated)


I (at Ghent university) use:



  1. Linux LVM over a 600 GB RAID 10 ( 4 x 300 GB)


An EMC CX400 and CX500, with Fibre Channel and ATA disks.  12 LUNs are 
400 GB RAID 5, 6 of them on FC disks (primary) and 6 of them on ATA 
disks (replica).  Another LUN is 50 GB (more later) on FC disks.  Each 
LUN currently uses 50% of its disk space.



  2. Which filesystem seems to be better? ext3? xfs? reiserfs?


We are using xfs.


  3. Which options to format the filesystem? (according to the chosen
     filesystem)


Nothing special.


  4. Which pop3 / imap proxy to use ?


We use perdition.


  5. Single instance or multiple instances of Cyrus? Bear in mind
     that there should be the option to recover a mailbox or some mail
     of a mailbox without having to shut down the whole Cyrus system.


We currently have 7 backends, three on site A and four on site B
(backend 7 is special).  6 backends are replicated, so each
physical machine is running a primary backend and a replica backend.

For now backend 7 isn't replicated.


  6. Best way to perform backups ? LVM snapshots ? shutting down some
 cyrus partitions ? RAID10 hot swap ?


We run delayed expunge and backup to tape.
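
For reference, in Cyrus 2.3 delayed expunge is the expunge_mode option plus a
periodic cyr_expire run; a sketch with an example retention period:

  # imapd.conf
  expunge_mode: delayed

  # cyrus.conf, EVENTS section: purge messages expunged more than 7 days ago
  delprune  cmd="cyr_expire -E 3 -X 7" at=0400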


  7. Any other suggestion will be welcome.


We use Postfix to deliver via LMTP over TCP to the backends.  All 
information is stored in an OpenLDAP directory server.  Each mail relay 
and physical mailstore machine (that has a primary and a replica) runs an 
OpenLDAP replica.  We use saslauthd to authenticate the users.  We run 
Cyrus with virtual domain support.
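
The Postfix side of LMTP-over-TCP delivery is usually just a transport
setting; a minimal sketch (hostname and port are placeholders):

  # main.cf: hand local mailboxes to the Cyrus backend over LMTP/TCP
  mailbox_transport = lmtp:inet:backend1.example.org:2003

  # or per domain, via transport_maps:
  #   example.org    lmtp:inet:backend1.example.org:2003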


I'm currently setting up backend 7, which will run a patched saslauthd
that supports multiple passwords per user.


The users are divided over the backends like this:

+-----------------+----------+
| count(mailhost) | mailhost |
+-----------------+----------+
|           20857 | mail1    |
|            1292 | mail2    |
|            3237 | mail3    |
|           20926 | mail4    |
|            1339 | mail5    |
|            3296 | mail6    |
+-----------------+----------+


As you can see, mail1 and mail4 have many more accounts on them, but 
these mailboxes are much smaller.  Most of them are students.  Mail2 and 
mail5 hold the users with very big mailboxes.  In total we have 50947 
accounts on mail1-6, or about 1.2 TB of data.




I'm using 2.3.7 and am eager to update, but will have to test that first.

Rudy

--
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Rudy Gevaert  [EMAIL PROTECTED]  tel:+32 9 264 4734
Directie ICT, afd. Infrastructuur  Direction ICT, Infrastructure dept.
Groep Systemen Systems group
Universiteit Gent  Ghent University
Krijgslaan 281, gebouw S9, 9000 Gent, Belgie   www.UGent.be
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Recommendations for 15000 Cyrus Mailboxes

2007-04-11 Thread John Madden
2. Which filesystem seems to be better? ext3? xfs? reiserfs?
 
 We are using xfs.

I tested xfs before going back to reiserfs.  I found the performance to
be acceptable aside from quite miserable backup performance once backups
kicked off.  (Although reiserfs is now pretty miserable, struggling to
pull 1MB/s off of fibre channel.)  How does your experience compare?

John




-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
[EMAIL PROTECTED]


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Replication design, configuration, and bandwidth

2007-04-11 Thread Anthony Chavez
Hello, info-cyrus!

I administer a network consisting of 8 nodes that are physically
separated over long distances.  Each of the 7 slave nodes is connected
to the master node via a VPN tunnel that flows over the global
Internet.

We are investigating mechanisms to provide high availability IMAP on
the slave nodes, and found Cyrus imapd's rolling replication to be a
good fit for this, based solely on the claims made in Cyrus'
documentation (we have not experimented yet).  Our primary goal is the
ability for users on slave nodes to access their mailstores if the
master node is down, with a secondary goal of failover for inbound
SMTP.

The design we have seems straightforward: enable inbound SMTP to be
delivered to the local (replicated) backend on each node (which is
then propagated to the master and/or other replicas).

One alternative to this design would be to only replicate the backend
on a subset of the nodes and just deploy frontends on the rest.

We are certainly open to other designs, but this is all that we've
come up with so far, and we're interested to know if anyone has
successfully implemented either of these designs.

We're also interested in knowing exactly how to configure replication
for multiple backends.  So far, we've only been able to come up with
{cyrus,imapd}.conf settings for 2 node replication.
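
For what it's worth, the basic master-to-replica rolling-replication settings
in 2.3 look roughly like this (names and the secret are placeholders); how
best to extend it to several replicas is exactly the open question:

  # master imapd.conf
  sync_log: 1
  sync_host: replica.example.org
  sync_authname: repluser
  sync_password: secret

  # master cyrus.conf, START section: rolling sync_client
  syncclient  cmd="sync_client -r"

  # replica cyrus.conf, SERVICES section (repluser must be an admin there)
  syncserver  cmd="sync_server" listen="csync"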

We are operating under the assumption that replication in such a
scenario would require k*(n-1) times the bandwidth that the original
delivery required, where k is a compression factor and n is the total
number of replicas.  We've only scratched the surface on verifying
this, but I thought I'd take the opportunity to see if anyone more
familiar with this sort of deployment (or the source code) could
verify this.  Bandwidth is a big concern, since most of these nodes
are using T1s.
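
As a purely illustrative calculation under that k*(n-1) assumption (the
figures are made up): with n = 8, k = 0.5 and an average 50 kB message, one
delivery would generate about 0.5 * 7 * 50 kB = 175 kB of replication traffic
out of the master, i.e. roughly 1.4 Mbit, or close to a full second of a T1
link per message.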

FWIW, we are aware of the potential problems with backup MXes, and to
combat this, spam filtering would be enforced on these backup MXes,
and since our LDAP DIT is replicated to each node, rejecting mail to
nonexistent recipients would be a possibility.

Thanks! ;-)

-- 
Anthony Chavez http://anthonychavez.org/
mailto:[EMAIL PROTECTED] jabber:[EMAIL PROTECTED]


pgpYQCZsjKVZo.pgp
Description: PGP signature

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html

Re: Recommendations for 15000 Cyrus Mailboxes

2007-04-11 Thread Adam Tauno Williams
 2. Which filesystem seems to be better? ext3? xfs? reiserfs?
  We are using xfs.
 I tested xfs before going back to reiserfs.  I found the performance to
 be acceptable aside from quite miserable backup performance once backups
 kicked off.  (Although reiserfs is now pretty miserable, struggling to
 pull 1MB/s off of fibre channel.)  How does your experience compare?

We used XFS for a very long time,  but in the last server upgrade
switched to indexed ext3.  Not because of any specific problems with XFS
but because development and bug-fixing seems/seemed to have flat-lined and
I didn't want to get stuck with what was basically an unsupported
filesystem on a critical production system.  There are advantages to
being in the mainstream. :)

But I never had any performance problems with XFS, certainly not
regarding backup.  Throughput was much higher than 1MB/s [which seems
ridiculously slow; I'd suspect something else was in play].

XFS is still my favorite filesystem, with good performance, real tools,
and excellent EA support;  but I can't justify using it in production
anymore.  Actually I think XFS is the only filesystem with really
enterprise-grade tools, with the ability to defrag, freeze, and grow on
the fly.
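
(The tools being alluded to are presumably these; mount points are examples:)

  xfs_fsr    /var/spool/imap     # online defragmentation
  xfs_freeze -f /var/spool/imap  # quiesce for a snapshot (-u to thaw)
  xfs_growfs /var/spool/imap     # grow after enlarging the underlying volume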


signature.asc
Description: This is a digitally signed message part

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html

Re: Recommendations for 15000 Cyrus Mailboxes

2007-04-11 Thread Bron Gondwana

On Wed, 11 Apr 2007 07:51:57 -0700, David R Bosso [EMAIL PROTECTED] said:
 --On April 11, 2007 8:26:37 PM +1000 Bron Gondwana [EMAIL PROTECTED] 
 wrote:
 
 
  We also apply one patch to reiserfs.  It's a one liner, using
  generic_write rather than the Hans Reiser special (5% faster, can
  deadlock under heavy load)
 
 Would you mind sending this to me?  We've been very happy with reiserfs
 for years, but would love anything that can improve reliability.
 
 -David

I may as well send them to the list!  I think we got this patch from
someone at SuSE (Chris Mason?) a long while back.

There are two patches attached - the 2.6.19.2 patch was made in response
to a posting on LKML about whether this patch was still needed for 2.6.19:

http://lkml.org/lkml/2007/1/31/70

in which Vladimir V. Saveliev said yes, you still need it, but you have to
use do_sync_write now.  I think that response may have been a private mail.


PLEASE NOTE: we are still using 2.6.16 everywhere because, while the 2.6.19
kernel worked fine, we noticed seriously worse performance on the one machine
we tried it on.  This may have been due to the Areca driver, or something
unrelated, but my personal suspicion is that it's the delayed bitmap loading.
Unfortunately, we didn't have time to test it, so we just backed out to the
2.6.16 series kernels.


Here's some information about tails and MMAP, which should be fixed in current
kernels:

http://lkml.org/lkml/2007/2/1/91

But we never had that issue because we use notail everywhere!
-- 
  Bron Gondwana
  [EMAIL PROTECTED]

diff -ur linux-2.6.14.1/fs/reiserfs/file.c linux-2.6.14.1-reiserfix-big/fs/reiserfs/file.c
--- linux-2.6.14.1/fs/reiserfs/file.c	Thu Oct 27 20:02:08 2005
+++ linux-2.6.14.1-reiserfix-big/fs/reiserfs/file.c	Thu Nov 10 18:37:16 2005
@@ -1336,6 +1336,8 @@
 		return result;
 	}
 
+  return generic_file_write(file, buf, count, ppos);
+
 	if (unlikely((ssize_t) count < 0))
 		return -EINVAL;
 
--- linux-2.6.19.2/fs/reiserfs/file.c	2006-11-29 16:57:37.0 -0500
+++ linux-2.6.19.2-syncwrite/fs/reiserfs/file.c	2007-02-02 01:01:36.0 -0500
@@ -1358,6 +1358,8 @@
 		return result;
 	}
 
+	return do_sync_write(file, buf, count, ppos);
+
 	if (unlikely((ssize_t) count < 0))
 		return -EINVAL;
 

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html