On Thu, 20 Jan 2000, Paul Crittenden wrote:
> I have a Gateway 2000, model E-3110. It has a 266MHz processor with 128
> meg of memory and 6 gig of disk space. The disk is split into 1 gig for /
> and /var and 5 gig for /usr. The / is 23% full and /usr is 14% full. We
> are going to use this as a firewall here at Simpson. The machine is
> running OpenLinux 2.3 with the 2.2.14 kernel. In testing, everything
> works fine.
2.2.x (and earlier) Linux kernels suffer performance problems with
multiple interfaces; only you can discern whether it's enough of a
performance issue to warrant swapping the machine out. It'll be fixed in
the 2.4.x kernels.
> It is mainly for students and not for staff and faculty machines. Students
> are given a 10.x.x.x address, from a dhcp server, and the firewall masq's
> that for the Internet. They will be using web services, ftp services, Real
> Audio, email and chat mainly. We don't allow ICQ. There are 300-400
> students that could possibly use it but most likely not all will be hitting
> it at a one time. My boss is concerned whether or not this is enough
> machine for this application or if it will be a bottleneck. Any thoughts?
As a proxy server it'd be a little underpowered, but it might be OK
depending on whether you're allowing direct mail, mail through it, or
mail relay to its own server; other than that, RealAudio/RealVideo might
provide some saturation. As a packet filter, my *supposition* is that
it'll be worse. (I've run proxies for a limited time on such hosts while
doing upgrades on the "real" machines and been fine.)
I'm not sure how you'll stop protocols that don't require inbound
connections with IPMasq; I prefer to limit such connections heavily, but
that's more me than anything. If by "chat" you mean IRC, it's as bad a
protocol as ICQ, IMNSHO.
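The limiting I'm talking about looks roughly like this with 2.2-era
ipchains. The 10.x range is from your setup; the interface name and the
exact port list are my own illustration, so tune them to what you
actually allow:

```shell
# Illustrative ipchains rules for a 2.2.x masquerading firewall.
# The port list here is only an example, not a recommendation.

# Default-deny forwarding, then masquerade only the allowed services.
ipchains -P forward DENY

# Web, FTP control, SMTP, POP3 -- destination port follows "-d addr".
for port in 80 443 21 25 110; do
    ipchains -A forward -p tcp -s 10.0.0.0/8 -d 0/0 $port -j MASQ
done

# DNS lookups need UDP 53 as well.
ipchains -A forward -p udp -s 10.0.0.0/8 -d 0/0 53 -j MASQ
```

With a default-deny forward policy, anything you haven't explicitly
masqueraded simply never leaves, which is the cheap way to "heavily
limit" outbound protocols.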
You'll probably want to build your kernel to optimize as a router so that
packet buffering is a little better.
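For reference, the 2.2-series options involved look something like the
excerpt below; the names are from memory of that kernel series, so check
them against your own make menuconfig:

```
# Networking options for a 2.2.x firewall/router build (option names
# assumed from the 2.2 series -- verify in your own .config):
CONFIG_FIREWALL=y
CONFIG_IP_FIREWALL=y
CONFIG_IP_MASQUERADE=y
CONFIG_IP_ROUTER=y          # "IP: optimize as router not host"
```

Note that forwarding itself is a runtime switch in 2.2, enabled with
echo 1 > /proc/sys/net/ipv4/ip_forward rather than a build option.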
I've got machines that are in the 350MHz range handling more users than
that as packet filters, but I'm not using NAT and the users are all
proxied keeping the rulesets down to less than a half-dozen machines, and
I don't allow any streaming protocols. They're hardly CPU constrained
though.
The last time I had to do this, I chose NetBSD and IPFilter.
My reasoning was:
A. IPFilter gives me different hardware/OS combinations should something
be seriously wrong with a single implementation. Switching OSes wouldn't
involve translating filter rules to a new platform.
B. IPFilter gave me the ability to handle state for stateless protocols
such as DNS. Since I wasn't masquerading, I wouldn't get this with
ipchains.
C. Linux with multiple interfaces isn't optimal at the moment, I hate to
have to upgrade kernels on production machines too soon, and my user
base is too large to take the chance of being "almost" able to handle
the load.
D. Linux's packet filtering code is going to change for the next set of
stable kernels, and I don't like change on my firewalls without
extensive testing. I don't get a lot of time to test, so I'd like to
stick with something that will be relatively stable for the foreseeable
future.
E. I already had a lot of Linux machines, and I don't like to run a
homogeneous environment: the slimmer chance of one bug hitting multiple
environments is worth the inability to "standardize."
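To make point B concrete, here's the sort of IPFilter ruleset fragment I
mean; ep0 is a hypothetical external interface name, and these lines are
a sketch rather than my production rules:

```
# /etc/ipf.conf sketch: state tracking even for stateless protocols.
# Block everything inbound by default...
block in on ep0 all
# ...then let replies back in only for sessions we initiated.
# "keep state" works for UDP DNS, which ipchains cannot track.
pass out quick on ep0 proto udp from any to any port = 53 keep state
pass out quick on ep0 proto tcp from any to any flags S keep state
```

The state table matches the DNS reply to the query that caused it, so
you never need a standing "allow UDP 53 in" hole.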
After some research, NetBSD became my first choice, with OpenBSD second
and FreeBSD third. Solaris was my non-x86 choice. I went with NetBSD on
x86 hardware, and I've been pleased so far.
All of my users *must* use a different gateway to relay SMTP. They're all
stuck behind different proxy servers to handle any other allowed
protocols. I've got filtering routers all over my configuration, and my
infrastructure is on switched media with significant filtering.
That means the interfaces don't have to do a great deal of work
discarding broadcast packets, packets for non-IP protocols, packets
destined for things that aren't generally allowed, and so on. Obviously,
this gives me significant performance advantages.
I'd suggest trying some benchmarking with some WebStone tests, and then
adding in some nmap scans spoofed from at least 75% of your address range
(you'll need to use multiple PCs) and maybe some virtual interface nmap
scans from as many hosts as you can fake. Do it on an isolated network
with a Web server and with and without your host in the middle. See what
the performance hit is, and more importantly see if utilization is
bearable during peak traffic times with and without the host involved.
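Concretely, the spoofed-source and decoy scans I have in mind look like
this (addresses and interface names are made up; only ever do this on
the isolated test network, since replies go to the forged addresses):

```shell
# SYN scan of the test web server through the firewall, with a forged
# source from the student range; -S requires -e, and -P0 skips the ping.
nmap -sS -P0 -e eth0 -S 10.1.7.23 192.168.1.10

# The same scan hiding among decoys to simulate many scanning hosts at
# once ("ME" marks where the real host appears in the decoy list).
nmap -sS -P0 -D 10.2.0.1,10.2.0.2,10.2.0.3,ME 192.168.1.10
```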
You might also want to start with a plan of load-sharing over multiple
boxes with some pre-arranged scale points and change the scale points
based on how things work out.
Also, if your users are already Net-connected, you may want to see how
much traffic is currently being transferred each day and use that as a
baseline, with a bit of growth room.
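One rough way to get that baseline on the Linux box is to sample the
per-interface byte counters in /proc/net/dev over an interval and scale
up to a day. A minimal sketch, parsing a captured sample line so the
field positions are visible (2.2-era format: receive bytes in field 2,
transmit bytes in field 10, after the interface name):

```shell
# Rough traffic baseline from /proc/net/dev byte counters.  For real
# use, grep your interface's line out of /proc/net/dev (skipping the
# two header lines), sleep, sample again, and diff the counters; here
# we parse a captured sample so the field layout is easy to follow.
sample='eth0: 123456789 2000 0 0 0 0 0 0 987654321 1500 0 0 0 0 0 0'

rx=$(echo "$sample" | awk -F'[: ]+' '{print $2}')
tx=$(echo "$sample" | awk -F'[: ]+' '{print $10}')
echo "rx=$rx tx=$tx"
```

Two samples a day apart give you bytes/day per interface, which is the
number to compare against your growth headroom.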
With fewer than 500 users you shouldn't be outside the realm of
possibility, but a lot depends on how much information IPMasq keeps in
its tables and for how long.
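On a 2.2 box you can check both of those directly; these are standard
ipchains masquerading options, though the timeout values below are only
examples:

```shell
# List the live masquerading table -- one entry per active session.
ipchains -M -L -n

# Set masquerading timeouts in seconds: TCP, TCP-after-FIN, UDP.
# Shorter FIN/UDP timeouts keep the table smaller under heavy load.
ipchains -M -S 7200 10 160
```

Watching the table length during a peak period tells you how close you
are to the kernel's limits before you commit to the hardware.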
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
[EMAIL PROTECTED] which may have no basis whatsoever in fact."
PSB#9280
-
[To unsubscribe, send mail to [EMAIL PROTECTED] with
"unsubscribe firewalls" in the body of the message.]