I had to think twice about whether to just let this die down, IMHO unresolved,
or to pursue my point further...

On Tue, 27 Nov 2001 at 13:44, Mike Maravillo wrote:
> > What I wanted to clarify was if for such things as the Linux kernel's
> > handling of router functions (firewalling, NAT, and other stuff that for
> > Linux is in kernelland) we need more throughput, or latency. And the same
> > question for multi-function servers (ie: file/web/mail/proxy server in
> > one) that are becoming more popular in third world countries like ours.
>
> Kernel compile help section of the preempt patch:
>
> +Preemptible Kernel
> +CONFIG_PREEMPT
> +  This option reduces the latency of the kernel when reacting to
> +  real-time or interactive events by allowing a low priority process to
> +  be preempted even if it is in kernel mode executing a system call.
> +  This allows applications to run more reliably even when the system is
> +  under load due to other, lower priority, processes.
> +
> +  Say Y here if you are building a kernel for a desktop system, embedded
> +  system or real-time system.  Say N if you are building a kernel for a
> +  system where throughput is more important than interactive response,
> +  such as a server system.  Say N if you are unsure.


The point I'm seeking help in clarifying is: with a Linux-based server
handling router functions, is throughput or latency more important?

Initially, as I mentioned earlier, my view was simple, thanks to, among
others, the compile help section of the preempt patch that I read before
enabling it, and other documentation I read before patching my kernel.
"Desktop? Preempt. Pure server? Don't preempt."
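
Concretely, that rule of thumb comes down to a single line in the kernel
configuration. This .config fragment is just an illustration of the two
choices, not something from the patch itself:

```
# Desktop / interactive / real-time box: take the preempt patch's advice.
CONFIG_PREEMPT=y

# Pure server, throughput first (the option simply stays unset):
# CONFIG_PREEMPT is not set
```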

However, my email discussion with somebody from the XFS mailing list (not
an SGI developer, though) has opened up a new possibility.

With router functions, you have multiple parallel kernel processes (from a
few to a great many, depending on how much traffic you're dealing with)
handling the traffic management. You increase this with firewall rules.
You increase it when you make the server work as a proxy at the same time,
although one may debate that the proxy is primarily a userland concern.
You increase it when you make the server work as a web server, too, but
again, that is userland. And so on.

This seems similar to the MTU/MRU issue. Do you want large blobs of data
each running for a longer stretch of time? Or do you want to shuffle
between smaller slices, with less time spent per piece? At the end of the
day, with or without kernel preemption, you will get all the jobs done, of
course. I think the question here is responsiveness.
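
To make that concrete, here is a toy simulation of my own (nothing from
the preempt patch; the jobs, sizes, and round-robin scheme are all made
up for illustration): a long job and a short job arrive together, and we
compare when the short job finishes under run-to-completion versus many
small interleaved slices. The total work is identical either way; only
the responsiveness changes.

```python
# Toy model: one long job (100 work units) and one short job (2 units)
# arrive at the same time. With a huge slice, the long job runs to
# completion first; with tiny slices, the two interleave.

def finish_time_of_short(slice_size):
    jobs = {"long": 100, "short": 2}    # remaining work units per job
    t = 0
    while jobs["short"] > 0:
        for name in ("long", "short"):  # simple round-robin order
            if jobs[name] > 0:
                work = min(slice_size, jobs[name])
                jobs[name] -= work
                t += work
                if name == "short" and jobs["short"] == 0:
                    return t
    return t

print(finish_time_of_short(100))  # run-to-completion: short job ends at 102
print(finish_time_of_short(1))    # fine slicing: short job ends at 4
```

Either way, all 102 units of work get done, which matches the point above:
preemption changes responsiveness, not whether the jobs complete (and in a
real kernel the extra switching actually costs a little throughput).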

And so I'm back to my unresolved question ... :)

 --> Jijo

--
Federico Sevilla III  :: [EMAIL PROTECTED]
Network Administrator :: The Leather Collection, Inc.
GnuPG Key: <http://jijo.leathercollection.ph/jijo.gpg>

_
Philippine Linux Users Group. Web site and archives at http://plug.linux.org.ph
To leave: send "unsubscribe" in the body to [EMAIL PROTECTED]

To subscribe to the Linux Newbies' List: send "subscribe" in the body to 
[EMAIL PROTECTED]
