> Tanenbaum also thinks a 30% performance hit is acceptable. Maybe for
> some enterprises *if* you can prove a reliability increase, but for
> the vast majority no. Unless it can be knocked down to 10% or so it
> won't be accepted. Even OSX, which uses Mach, rewrote the message
> passing to use function calls (thus making it a macrokernel). Far
> more likely is you'll see increased separation of modules and
> abstraction inside of a macrokernel, a la VFS and Linux modules.
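To put some flesh on that 30%: in a microkernel, what a monolithic
kernel does with a function call becomes a message round-trip through
the kernel. Here's a toy sketch of the difference (my own
illustration, with pipes standing in for real IPC; this is not Mach's
actual mach_msg path):

    /* Toy benchmark, a sketch only: a "service" called directly as a
       function, then the same service behind a message round-trip
       between two processes.  Pipes stand in for microkernel IPC. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ROUNDS 100000

    static int add_one(int x) { return x + 1; }    /* the "service" */

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        int req[2], rep[2], i, v = 0;
        double t;

        /* In-process: a call is just a call.  No kernel involved. */
        t = now();
        for (i = 0; i < ROUNDS; i++)
            v = add_one(v);
        printf("direct calls:  %.4f sec (v=%d)\n", now() - t, v);

        /* Microkernel-style: every call is a message to a server
           process plus a reply back. */
        pipe(req);
        pipe(rep);
        if (fork() == 0) {                         /* the "server" */
            int x;
            close(req[1]);
            close(rep[0]);
            while (read(req[0], &x, sizeof x) == sizeof x) {
                x = add_one(x);
                write(rep[1], &x, sizeof x);
            }
            _exit(0);
        }
        close(req[0]);
        close(rep[1]);
        t = now();
        for (i = 0, v = 0; i < ROUNDS; i++) {
            write(req[1], &v, sizeof v);
            read(rep[0], &v, sizeof v);
        }
        printf("message trips: %.4f sec (v=%d)\n", now() - t, v);

        close(req[1]);               /* no more writers: server exits */
        wait(NULL);
        return 0;
    }

Each round trip pays for two kernel entries and two data copies,
which is roughly the overhead OSX dodged by collapsing messages into
direct calls.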
Yes, but don't forget that your 30% performance hit just got erased
six months later when you bought a CPU that is twice as fast! Moore's
Law takes care of that 30%: running at 70% speed, you only need about
a 1.4x faster CPU to break even, and you're getting 2x. You could say
the same thing about high-level languages, GUIs, JVMs, etc.

> As for HURD- it's taken wrong design decision after wrong design
> decision. As of a few years ago, it only supports 4GB hard drives
> because it mem-maps the entire drive. Forget the fact no one had
> used a drive that small for a decade- it was more elegant to write
> it that way. If that kind of thinking is endemic to Hurd, it will
> never be released.

That is interesting. I didn't know their design was lame. I'd be
curious to hear about any other bad design decisions they've made, if
you know of any.
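For what it's worth, the 4GB figure falls straight out of the address
space: 2^32 bytes is 4GB, so if the filesystem maps the whole store
into memory it simply can't address a bigger drive on a 32-bit
machine. A minimal sketch of the idea (my own illustration with a
hypothetical /tmp/store.img standing in for a drive; the real Hurd
ext2fs server is more involved, but it hits the same ceiling):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void)
    {
        int fd = open("/tmp/store.img", O_RDONLY); /* stand-in drive */
        struct stat st;
        char *store;

        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        /* Map the ENTIRE store at once.  Elegant: the filesystem
           just follows pointers instead of issuing reads, and the
           paging code does the I/O.  The price: the mapping must fit
           in the 32-bit address space, less what the program itself
           occupies -- the ~4GB limit complained about above. */
        store = mmap(NULL, (size_t)st.st_size, PROT_READ,
                     MAP_PRIVATE, fd, 0);
        if (store == MAP_FAILED)
            return 1;      /* "drive too big" shows up right here */

        printf("first byte of the store: 0x%02x\n",
               (unsigned char)store[0]);
        munmap(store, (size_t)st.st_size);
        close(fd);
        return 0;
    }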
cs
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list