Re: Fwd: Re: still in the woods (tuning for postfix hub)

2000-11-22 Thread Arjan de Vet

In article <[EMAIL PROTECTED]> you write:

>Somebody here must know who is Mr. Hub Admin, no?

[EMAIL PROTECTED] I guess?

>Can somebody tell me who admins FreeBSD/postifx mail hubs?
>
>I need some help opening up FreeBSD for postfix and 200K msgs/day.

Below is an old posting of Peter Wemm to the postfix mailing list about
the FreeBSD mailing list server. It gives you some of the information
you are asking for.

Arjan

-- 
Arjan de Vet, Eindhoven, The Netherlands  <[EMAIL PROTECTED]>
URL: http://www.iae.nl/users/devet/   for PGP key: finger [EMAIL PROTECTED]

From: [EMAIL PROTECTED] (Peter Wemm)
Subject: Re: huge mailout tuning?
Date: 9 Jul 1999 07:16:38 +0200
Message-ID: <[EMAIL PROTECTED]>
References: <[EMAIL PROTECTED]>

"Dunstan, Andrew" wrote:
> 
> 
> > -Original Message-
> > From:   [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]]
> > 
> > 2M deliveries in 12 hours is almost 50 a second. Now that is going
> > to hit your disk really hard. Each queue file involves synchronous
> > disk accesses for creating, writing the message to file and for
> > unlinking.  Remember, the issue is disk seeks, not I/O bandwidth.
> > 
>   [Andrew>]  Yes. Would using some sort of raid striping help?
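
A quick shell check of the "almost 50 a second" arithmetic (added for reference, not part of the original mail):

```shell
# 2M deliveries spread over 12 hours, in messages per second
echo $(( 2000000 / (12 * 3600) ))   # integer result: 46, i.e. "almost 50 a second"
```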

If you are prepared to lose mail on a crash, mount the filesystem holding
the queue directory async, or use something like the BSD softupdates code
to safely avoid synchronous directory writes (safe from the perspective of
filesystem recovery, not from the perspective of lost mail).  If it's a
bulk mailout then perhaps this is acceptable.
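
On FreeBSD, the two options above would look roughly like this (a sketch only; the device and mount point names are placeholders for your own):

```shell
# Option 1: mount the queue filesystem async
# (fast, but queue files can be lost or corrupted on a crash)
mount -o async /dev/da0s1e /var/spool/postfix

# Option 2: enable softupdates on the (unmounted) filesystem instead
tunefs -n enable /dev/da0s1e
```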

> > Given that SMTP is a somewhat chatty protocol (unlike HTTP), you
> > will need to run hundreds of deliveries in parallel. Postfix uses
> > select(), and by default is limited to 1024 open files. In order to
> > increase the limit, recompile with a larger size for FD_SETSIZE
> > in util/sys_defs.h at the start of the #ifdef SUNOS5 section.
> > 
>   [Andrew>]  How do I run "hundreds of deliveries in parallel"? (and
> where does postfix multiplex io?)

On the freebsd.org machines, we recompiled postfix with FD_SETSIZE set to
an (overkill) 4096 (FreeBSD's kernel implementation has variable-sized
fd_sets).  We currently run with 250 delivery smtp processes, and have used
higher limits in the past.
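
As for "how do I run hundreds of deliveries in parallel": the number of smtp delivery processes is a Postfix process limit. A sketch, assuming current Postfix parameter names (verify against your version's postconf(5) and master(5) documentation):

```shell
# main.cf: raise the default per-service process limit
postconf -e "default_process_limit = 250"

# master.cf: alternatively, set the maxproc column for the smtp client, e.g.:
# smtp      unix  -       -       n       -       250     smtp
```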

We manage to fit the entire /var/spool/postfix queue in memory on that
machine, and with softupdates a good number of writes are avoided entirely.
(i.e.: an incoming queue entry gets created, processed (fed to majordomo),
then deleted.  Under async writes, this data still gets written back.  Under
softupdates it's entirely cancelled out (I believe :-)), since it was
deleted so quickly and never even partially existed on disk before it was
unlinked.)  Again, those machines are not processing critical or sensitive
mail, so lost mail is unfortunate but tolerable.

I *think* we've peaked sustained delivery at about 60/sec for short periods
in some circumstances - but we have a *LOT* of slow and/or congested remote
sites. I'm pretty sure I've seen transient delivery rate spikes go quite a
bit beyond that, though that's probably more luck and timing than anything.

Again, *from memory*, the mailing list traffic runs about 1/2 million
recipients per day, and there is plenty of spare capacity, as a good deal of
this happens in bursts at peak times when a lot of folks read and post to
the lists at about the same time of day.
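
For scale (my arithmetic, not from the original mail): 1/2 million recipients/day is a modest average rate, which is why the 60/sec peak capacity leaves so much burst headroom:

```shell
# ~500,000 recipients/day as an average per-second delivery rate
echo $(( 500000 / 86400 ))   # 5 msgs/sec average, vs ~60/sec peak capacity
```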

> > > I realise that a distributed solution might be necessary, but I want to
> > know
> > > what we can achieve with our current platform.
> > > 
> > > How can we best tune postfix to maximise delivery rate? Are there issues
> > > with queue directories becoming huge?
> > 
> > You can configure postfix to hash queues. By default, only the
> > directory for deferred logfiles is hashed. See the hash_queue_names
> > parameter.  Examples in conf/*
> > 
>   [Andrew>]  I have set hashing on for incoming, deferred and defer.
> Is that right? What is the penalty for increasing the hashing levels? (2
> levels for 2M files would still give me around 10,000 per directory (ouch)),
> so I'd prefer to go to 3 or even 4 levels if this doesn't involve a big
> performance hit.
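
The fan-out arithmetic behind those hashing levels (my addition): Postfix queue file names are hexadecimal, so each hash_queue_depth level splits a directory 16 ways:

```shell
# Approximate files per directory for 2M queue files at each hash_queue_depth
dirs=1
for depth in 1 2 3 4; do
  dirs=$(( dirs * 16 ))
  echo "depth $depth: $(( 2000000 / dirs )) files/dir"
done
```

So two levels gives roughly 7,800 files per directory, three levels about 490, and four about 30.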

Also, if you can get the machines, I'd suggest a small cluster of outgoing
sender machines.  Set up a round-robin DNS entry, say mail-outgoing.you.com
where mail-outgoing has 5 or 10 IP addresses or MX records.  Then use
mail-outgoing for your outbound host.  Your core box will send envelopes to
the outbound boxes for relaying in a fairly distributed way.  We use this
technique for freebsd.org mail delivery for some geographical mail exploder
relays. For example, mail-relay.de.freebsd.org points to 4 machines, and
*.de is exploded via them with a transports entry.  Two servers are primary
and share the load evenly and the other two are fallbacks on different
networks with different international links.  eg:
[5:06am]~/Mail-251> host mail-relay.de.freebsd.org
mail-relay.de.freebsd.org mail is handled (pri=100) by mail.rz.tu-cla

Fwd: Re: still in the woods (tuning for postfix hub)

2000-11-22 Thread Len Conrad

Sorry, Hackers, but I got no response from the -questions list for 2 days.

Somebody here must know who is Mr. Hub Admin, no?

tia,

Len

==


Can somebody tell me who admins the FreeBSD/postfix mail hubs?

I need some help opening up FreeBSD for postfix and 200K msgs/day.

Thanks,

Len

==

>Delivered-To: [EMAIL PROTECTED]
>Delivered-To: [EMAIL PROTECTED]
>Subject: Re: still in the woods
>To: [EMAIL PROTECTED] (Postfix users)
>Reply-To: [EMAIL PROTECTED] (Postfix users)
>X-Time-Zone:  USA EST, 6 hours behind central European time
>Date: Tue, 21 Nov 2000 16:31:50 -0500 (EST)
>From: [EMAIL PROTECTED] (Wietse Venema)
>Sender: [EMAIL PROTECTED]
>X-RCPT-TO: <[EMAIL PROTECTED]>
>
>The FreeBSD mailing list server runs 500+ Postfix SMTP clients and
>it does not hurt the machine.
>
>Perhaps you need to ask the people who run that machine.
>
> Wietse
>
>Len Conrad:
> > Got a fairly busy postfix relay hub, about 200K msgs / day.
> >
> > FreeBSD IMGate1.xxx.net 4.1.1-RELEASE FreeBSD 4.1.1-RELEASE #0: Tue
> > Sep 26 00:46:59 GMT 2000
> >
> > # postconf mail_version
> > mail_version = Snapshot-20001030
> >
> > last pid: 67847;  load averages:  0.19,  0.37,  0.47   up 7+00:47:48  16:07:03
> > 320 processes: 1 running, 319 sleeping
> > CPU states:  3.5% user,  0.0% nice,  3.8% system,  0.8% interrupt, 91.9% idle
> > Mem: 78M Active, 103M Inact, 29M Wired, 6524K Cache, 35M Buf, 32M Free
> > Swap: 47M Total, 1464K Used, 46M Free, 3% Inuse
> >
> > We had these "too many files" problems at initial install, and the
> > FreeBSD hackers gave me some things to change that seemed to stop them,
> > but they're back even with these settings (and rebooting):
> >
> > /boot/rc.loader with
> >
> > set kern.ipc.maxsockets=4000   (from default of 1000, I think)
> >
> > and
> >
> > /etc/sysctl.conf with
> >
> > kern.maxfiles = 4096
> > kern.maxfilesperproc = 4096
> >
> > A section of "sysctl -a" shows:
> >
> > ITEM         SIZE    LIMIT     USED     FREE   REQUESTS
> >
> > PIPE:         160,       0,       2,     100,     63091
> > unpcb:         64,       0,       6,     122,       607
> > ripcb:         96,    1064,       0,      84,        68
> > tcpcb:        288,    1064,      10,      32,      2428
> > udpcb:         96,    1064,      11,      73,      2637
> > unpcb:         64,       0,       0,       0,         0
> > socket:       160,    1064,      27,      48,      5752
> > AIOLIO:       704,       0,       0,       0,         0
> > AIOL:          64,       0,       0,       0,         0
> > AIOCB:        128,       0,       0,       0,         0
> > AIOP:          32,       0,       0,       0,         0
> > AIO:           96,       0,       0,       0,         0
> > NFSNODE:      288,       0,       0,       0,         0
> > NFSMOUNT:     544,       0,       0,       0,         0
> > VNODE:        192,       0,    3801,     121,      3764
> > NAMEI:       1024,       0,       0,      24,   3391019
> > VMSPACE:      192,       0,      20,      44,     50568
> > PROC:         352,       0,      23,      35,     50571
> > DP fakepg:     64,       0,       0,       0,         0
> > PV ENTRY:      28,  139130,    3868,   12497,   7249523
> > MAP ENTRY:     40,       0,     351,     491,    802213
> > KMAP ENTRY:    40,    3996,      61,     169,     13057
> > MAP:          100,       0,       7,       3,         7
> > VM OBJECT:    124,       0,     532,    2133,    604523
> >
> >
> > But the client is still getting these:
> >
> > Fatal Errors
> > 
> >bounce
> >   1   open file defer 63A8B51811: Too many open files in system
> >   1   open file defer 954FF516E4: Too many open files in system
> >   1   open file defer 21333516EF: Too many open files in system
> >   1   open file defer B7675516C8: Too many open files in system
> >   1   open file defer B6A3E51605: Too many open files in system
> >   1   open file defer 33E0B518AE: Too many open files in system
> >   1   open file defer 9521D51766: Too many open files in system
> >   1   open file defer 6848A5171E: Too many open files in system
> >   1   open file defer B5B285178E: Too many open files in system
> >   1   open file defer 300B45164D: Too many open files in system
> >   1   open file defer 624495172D: Too many open files in system
> >   1   open file defer B18F351765: Too many open files in system
> >   1   open file defer B2688516B4: Too many open files in system
> >   1   open file defer 2717951750: Too many open files in system
> >cleanup
> >  26   accept connection: Too many open files in system
> >qmgr
> >  10   socket: Too many open files in system
> >   1   open active 7123851A0C: Too many open files in system
> >   1   open active 2B05B517B4: Too many open files in system
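
A sketch of how those kernel file-table limits can be inspected and raised on FreeBSD (the value below is illustrative, not from the thread; verify the sysctl names against your release):

```shell
# Compare current usage against the system-wide and per-process caps
sysctl kern.maxfiles kern.openfiles kern.maxfilesperproc

# Raise the system-wide cap well above what ~300 Postfix processes need;
# make it persistent via /etc/sysctl.conf
sysctl kern.maxfiles=16384
```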