Alan McKinnon writes:
On Wednesday 30 January 2008, Mateusz Mierzwinski wrote:
Alan McKinnon writes:

One could try listing these drivers in /etc/modules.autoload.d/kernel-2.6,
but the easiest is probably to compile them into the kernel.
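For reference, a minimal sketch of what that autoload file looks like on Gentoo (the module names below are only illustrative examples, not from the thread):

```
# /etc/modules.autoload.d/kernel-2.6
# One module name per line; loaded at boot by the "modules" init script.
# Anything after the module name is passed as arguments to modprobe.
e1000
snd-hda-intel model=auto
```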
You're right. The standard Unix kernel was designed to have everything
built in. I don't know why some people still prefer modules to a
monolithic kernel. With modules, you must recompile all of them whenever
a new ABI comes out; with a monolithic kernel it's all already there. I
also don't know whether modules aren't slower than a monolithic kernel
because of the user-space to kernel-space transition. Compiled-in
drivers make the kernel run faster, and on a server even 0.00001 s makes
a difference in function execution and internal core communication. Part
of that problem lies in IPC-style module communication. But where there
are benefits there are also problems, such as buggy error-handling code
or beta drivers getting installed. Also, if you run Molnar's real-time
preemption model on your kernel, you should choose a monolithic kernel.
PS: some drivers don't work as modules the way they should. Look at
Windows - its drivers are like Linux modules - and what does that
produce? Blue screens of death ;), but that's another story... ;)

That's an easy answer :-)

Standard Unix was designed for systems where you know exactly what hardware you have up front and there are no nasty surprises. Take for example an SGI box. How many PCI chipsets could it possibly have? Exactly one. So you know exactly which driver you will need.

Apple are still lucky in this regard, as the only hardware that goes in them is Apple's hardware, and kernel configuration is then quite easy.

But we use the worst possible design that demented designers could ever come up with - PCs. The range of stuff available is staggering. The amount of dodgy hardware that claims to conform to spec but doesn't is even more staggering. So now exactly which modules are you going to compile in monolithically? What about hotpluggable hardware? As a vendor you have no way of knowing which funky hardware the user will ever plug into a notebook, and you simply cannot compile everything in (never mind the driver conflicts you will have).

Gentoo expects their users to compile their own kernels so to a large extent we can make stuff monolithic, apart from the hot-pluggable devices.

Binary distros cannot do this. To conserve memory they must load only the drivers for hardware that is actually present, and to do that one needs modules. I don't know of a binary commercial distro that will gladly still support users who compile their own kernels - they usually stuff it all up gloriously. It's a no-brainer really.

I don't buy the speedup argument either and have never seen a benchmark that proves modules are slower. The Linux kernel module loader is essentially self-modifying code and inserts modules into kernel space as if they had been there monolithically (within reason of course). Some drivers do behave differently between being modular and monolithic, but this is a function of a crappy coder and not a function of modularity :-) If a piece of kernel code causes a BSOD, then it will probably do it either way it is loaded, as the code is probably crappy.

A Real Time kernel is not suitable for a PC either - I can't think of any RT application where a PC would be a *GOOD* choice. For that I would be using the embedded arches with a board where as designer I know exactly what is present and what isn't. Monolithic makes more sense in that case. I can imagine why Ingo made that choice - with RT he has to supply certain guarantees and probably can't do that with modules coming and going all the time.

Basically, modern Linux is essentially unusable without kernel modules.


Talking about a modularized kernel: I think this is a Gentoo mailing list, so every user knows his hardware - and if not, there is always GOOGLE, the Gentoo HowTos and the hardware manual. Most drivers in the kernel are universal across one vendor family, which makes them suitable for different types of chipsets (revisions A, B, etc.). It is also true that kernel modules may be good for people with binary distros, but Gentoo is a source-based distribution - thank god - and every user should compile the kernel for his own hardware; modules are not needed. Cheap module code is also the bad habit of cheap programmers who don't know the system and kernel structures. That, incidentally, is how NDISWRAPPER ends up being needed to run Windows driver binaries.

If we are speaking about the real-time preemption model, I think you are mistaken in saying that a PC with a real-time kernel (software) is not a good choice. My licentiate thesis at the University of Silesia (Katowice, Poland) is about the use of real-time services in LAN/WAN computer networks. I have been digging through material about RTOSes, the real-time preemption model, real-time scheduling algorithms, and programming the critical sections of real-time applications. I don't see what is wrong with a PC plus the real-time preemption model. When you need critical network services, such as multiplexed SDH traffic control and violation prevention, you must have a powerful computer with an RTOS that can monitor at least 166 MB/s of full-duplex traffic. Modern computers have enough power to stand alongside RISC (Reduced Instruction Set Computing) machines - that's why Sun Solaris has arrived on PCs. Another big step is RTLinux with a dual-core design - a real-time core and the Linux kernel working together.

Mateusz M.
--
gentoo-user@lists.gentoo.org mailing list
