[EMAIL PROTECTED] wrote:

> Don't forget that a microkernel introduces communication overhead, and
> usually some extra scheduling overhead, which in turn eats into
> performance. I seem to remember there was a big squabble over who had
> the fastest webserver, until Linux introduced a kernel-level HTTP
> accelerator that blew everyone else out of the water so badly that they
> first tried accusing the kernel developers of cheating, and when that
> didn't work they just stopped playing that game, took the ball, and
> went home.

That's because it's a stupid idea. Really, it is. We could put all the
code of the entire system into one process, run it in ring-0, and have
the fastest system in the world! Until it crashes.

Which is what the argument really boils down to: when something that is
part of the "kernel" crashes, do you want it to take down the whole
machine, or do you want it to be contained and replaceable? A web server
should run in user space. End of story. Arguably even a file system and
a network stack should run in user space, if you can get sufficient
performance.

For a desktop operating system (which, remember, is what we were talking
about), today's hardware is so much overkill that you could run
different parts of the kernel on different parts of a local network and
still get adequate performance. So why are we still running it all in
ring-0? Because that's the system we have today, and it doesn't make
sense to reinvent the wheel while it's working. But it does make sense
to keep working on it and slowly migrate things out of ring-0 and into
their own process space, so we get rock-solid stability and the
flexibility to implement and debug new things more safely.
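
Just to make the "contained and replaceable" point concrete, here's a
rough, untested sketch (my own toy example, not anything from a real
kernel): a tiny supervisor forks a worker process, and when the worker
crashes, only the worker dies and the supervisor starts a fresh one. A
component sitting in ring-0 gets no second chance like this.

/* Toy sketch: restartable user-space service under a supervisor. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Stand-in for a user-space service, e.g. a web server's accept loop. */
static void run_worker(void)
{
    for (;;) {
        /* ... accept connections, serve requests ... */
        sleep(1);
    }
}

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {             /* child: the contained service */
            run_worker();
            _exit(0);
        }
        int status;                 /* parent: supervise the worker */
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return 1;
        }
        if (WIFSIGNALED(status))
            fprintf(stderr, "worker died on signal %d, restarting\n",
                    WTERMSIG(status));
    }
}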

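And as for the communication overhead quoted above: it's real, but it's
measurable, so measure it. Here's a back-of-the-envelope sketch (again
my own, untested) that times message round trips between two processes
over a pair of pipes. Pipes are just a stand-in here; real microkernel
IPC primitives are faster, but the shape of the cost is the same.

/* Toy sketch: ballpark the cost of a cross-process round trip. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <sys/types.h>
#include <sys/wait.h>

#define ROUNDS 100000

int main(void)
{
    int to_child[2], to_parent[2];
    char byte = 0;

    if (pipe(to_child) < 0 || pipe(to_parent) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: echo server */
        for (int i = 0; i < ROUNDS; i++) {
            read(to_child[0], &byte, 1);
            write(to_parent[1], &byte, 1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {    /* parent: ping-pong */
        write(to_child[1], &byte, 1);
        read(to_parent[0], &byte, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per round trip\n", ns / ROUNDS);

    waitpid(pid, NULL, 0);
    return 0;
}

On today's desktop hardware that round trip is a few microseconds,
which is exactly why I say the performance argument doesn't settle the
question for a desktop OS.
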
Trent
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
