On Tue, 15 May 2007 12:33:22 +1200 Mark Kirkwood <[EMAIL PROTECTED]> wrote:
> Grant wrote:
> > I've been puzzling a bit lately over the best way to manage my
> > kernel. I've always tried to keep it as minimal as possible, and I
> > only enable things as I need them. I also don't build modules from
> > the kernel at all.
> >
> > Is there a better way to go? I'm starting to think it might be
> > better to build every single module and let the system load them as
> > it needs.
>
> A friend of mine does this for his production servers:
>
> 1/ builds the known needed things into the kernel
> 2/ disables loadable modules completely
>
> This is probably not suitable for some use cases... (new raid card
> ...ooops... redo kernel), but if you are deploying to known hardware
> it is ok.
>
> Cheers
>
> Mark

But why? What's the benefit? If the code isn't being used, it isn't
going to slow down the kernel, is it? And the size of the kernel is
irrelevant in my opinion -- the kernel is far from the predominant
memory consumer on even a slow system. I think it's more likely that
you'll have a problem with your kernel configuration than with your
kernel performance, and modules are the only way to add kernel support
without rebooting. Furthermore, kernel modules have benefits of their
own -- increased run-time configurability, for example, as opposed to
a boot parameter. No, I agree with Volker:

> everything needed for booting: in kernel
> everything needed all the time: in kernel
> everything that needs a good kicking once in a while (usb, sound): modules
> everything that needs parameters: modules
> everything that is not needed all the time: module

That way, you can also build modules on the fly to suit your needs and
then compile them into the kernel, if desired, the next time you
rebuild it.
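
To make the run-time configuration point concrete, here is a minimal
sketch of an out-of-tree module with a load-time parameter. The module
name (hello_param) and its parameter (verbose) are made up for
illustration; it's just the standard module_param mechanism, not
anything from this thread.

/* hello_param.c - hypothetical module with a load-time parameter */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int verbose;
/* 0644 also exposes it under /sys/module/hello_param/parameters/verbose */
module_param(verbose, int, 0644);
MODULE_PARM_DESC(verbose, "Print extra messages when non-zero");

static int __init hello_param_init(void)
{
        if (verbose)
                printk(KERN_INFO "hello_param: loaded (verbose=%d)\n", verbose);
        return 0;
}

static void __exit hello_param_exit(void)
{
        printk(KERN_INFO "hello_param: unloaded\n");
}

module_init(hello_param_init);
module_exit(hello_param_exit);
MODULE_LICENSE("GPL");

As a module, you can change the behaviour without touching the kernel
image or rebooting:

  # modprobe hello_param verbose=1
  # rmmod hello_param

Built into the kernel, the same knob becomes a boot parameter
(hello_param.verbose=1) and changing it means a reboot.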

