On Fri, 21 Jan 2000, David Lang wrote:
> Paul, i don't know what you are replying to, I was making the point that
> the box was massively overpowered.
"Massively overpowered" is good enough for me to be replying to. The
issue (especially with IPMasq where retrans and timeouts aren't part of
the hosting OS as they are with proxies), and more especially with
streaming media protocols such as RealAudio/RealVideo, the issue isn't CPU
performance, it's latency. Faster CPUs decrease latency up to the point
where you're I/O and memory bound.
Traffic patterns for a bunch of students are significantly different from
those of a bunch of people in a corporation (from what I've seen, though
my experience with students is fairly limited). If my coworker's children
are any measure, streaming-media connections will abound.
Paul didn't say what kind of Internet connection he has or what his
internal network topology looks like. Without that, there's no way to
tell whether packet-buffering issues will determine lag more than the OS,
bus, and memory ones.
I doubt, though, that all 300-400 of his users will be sitting on the same
Ethernet segment (if they are, the speed of the gateway won't much matter;
there will be worse problems). The latency of 300 people all hitting that
gateway at once will be noticeable (and measurable). Enough so that if
the internal topology isn't laid out well, it's worth putting separate
inbound and outbound interfaces on the box just to drive down collisions
on the internal interface. Unfortunately, in current stable Linux
kernels, that means an immediate degradation in performance.
"It works." is significantly different from "It works well." which is
still different from "This is as good as it gets." I'd expect a liberal
arts college to put heavier use on streaming protocols and that's where
latency can bite the most.
The first night everyone's stuck inside because of (pick your favorite
natural weather phenomenon), it'd suck to see a meltdown.
With business users, you can figure on a high of about 15% of your users
having active concurrent sockets; 20% is about the highest I've seen yet.
In a college dorm in Iowa during a snowstorm, I'd expect it to be higher.
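Those percentages reduce to quick back-of-the-envelope arithmetic. Here's a
sketch; the 350-user count is taken from the 300-400 range discussed above,
and the 25% dorm figure is an illustrative guess on my part, not a
measurement:

```python
def concurrent_sessions(users, active_fraction):
    """Estimate concurrently active sockets for a user population."""
    return int(users * active_fraction)

# 350 users is mid-range for the 300-400 figure above; the dorm
# fraction is a guess for illustration only.
estimates = [
    ("business, typical (15%)", 0.15),
    ("business, peak observed (20%)", 0.20),
    ("dorm during a snowstorm (guess, 25%)", 0.25),
]

for label, fraction in estimates:
    print(f"{label}: ~{concurrent_sessions(350, fraction)} sessions")
```

Even the pessimistic dorm guess lands under the ~250-session comfort limit I
mention below, but peak bursts are what kill you, not averages.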
I've fielded about 20 Linux proxy servers over the last few years. I find
they work pretty well for HTTP and FTP where there's a slow connection
like a 56K Frame Relay circuit or a T-1. Throw in a significant amount
of e-mail and a DNS cache, and my base memory requirement tends to go
from 64M to 128M (but then, I find swapping bad). Put more than about 250
concurrent sessions on one and I find the latency too high to be
comfortable in long-term production. Maybe your users aren't as critical,
or maybe your traffic patterns are significantly different, but at this
point there's no doubt that there are better solutions for the same
hardware that induce less latency for the proposed usage.
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
[EMAIL PROTECTED] which may have no basis whatsoever in fact."
PSB#9280