Webmaster wrote:
> I've found that there are so many options... I don't know which ones I
> should choose and which ones I shouldn't...
> What is the difference between "* built-in" and "M module"?
You got some replies which may have helped. I'll give an explanation
of the difference in physical terms, and give you my opinion about
how to proceed.
If a device driver is "built in" then the code for it gets linked
as a physical part of the kernel, and becomes part of the image
on the boot device which gets loaded into memory at initial boot.
If a device driver is built as a "module", then it is not physically
part of the kernel image. Instead, enough information is included in
both the kernel and the separate module file containing the device
driver that the driver can be loaded, and the final link completed,
during the boot process.
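In menuconfig those two choices show up as "*" (built in) versus
"M" (module), and they end up in the kernel's .config file as "=y"
versus "=m". A hypothetical fragment (the option names here are
just examples; yours depend on your hardware):

```
# Built in: linked directly into the kernel image
CONFIG_EXT4_FS=y
# Module: built as a separate .ko file, loaded after boot
CONFIG_SND_HDA_INTEL=m
```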
In order for this to take place, the device which contains the
actual physical image of the module must be mountable during
boot. This means that the device drivers necessary to access
the physical device where the module image is stored must be
present in the kernel at that time. Those may be modules, but
somewhere there must be an end to the regress of dependencies.
This means that at least some of the device drivers must be
"built in".
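For a typical machine that means, at minimum, the drivers for the
disc controller and for the root filesystem must be "*" rather
than "M", or the kernel cannot reach anything else. A sketch,
assuming an AHCI SATA controller and an ext4 root (substitute
whatever your machine actually has):

```
# Must be built in, or the root filesystem cannot be mounted:
CONFIG_SATA_AHCI=y      # disc controller (example hardware)
CONFIG_EXT4_FS=y        # root filesystem type (example)
```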
One way to do this, usually followed by the standard distributions,
is to "build in" the device drivers which enable the kernel to mount
a RAM disc, and to specify the RAM disc as something to be loaded
at initial boot. Then, all the other device drivers necessary for
initial boot are built, but are built as "modules", and placed in
the RAM disc image.
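The kernel-side support for that scheme is itself a pair of
"built in" options:

```
CONFIG_BLK_DEV_INITRD=y   # kernel accepts an initial RAM disc/initramfs
CONFIG_BLK_DEV_RAM=y      # RAM disc block device (for the older initrd style)
```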
During boot, the initial RAM disc is loaded, and mounted by the
kernel. Then the startup code in the RAM disc probes the hardware,
and loads the device drivers (modules) necessary to access the
hardware present on the machine. After that, the physical disc
drives can be accessed, and mounted. Then, the device drivers
not necessary for boot get loaded as modules. These would
include such things as sound device drivers, display drivers,
etc. All the stuff not necessary to access the boot devices
on the system.
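On most systems that last step is just a list of module names in a
configuration file which the boot scripts feed to modprobe. The
file name varies by distribution (Debian uses /etc/modules, for
instance), and the module names below are only examples:

```
# Modules not needed to reach the boot device, loaded late in boot:
snd-hda-intel    # sound (example)
radeon           # display (example)
```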
In order to do that, one must understand how the RAM disc
gets built, and what all must be present in it. This is
an unnecessary complication, when one is not building a
general purpose kernel which must boot with a wide variety
of physical configurations of hardware. You are not building
a kernel which can boot anywhere, you are building a kernel
for your machine.
So, it makes sense, with LFS, to build all device drivers
into the kernel, and make a custom kernel for the hardware
on which it is to be installed. It's not the only way, and
certainly if one is trying to build for a FLASH drive which
can be carried to a variety of machines it is necessary to
include many different drivers, but for a first build, it's
generally best to build only the drivers necessary for the
hardware present on the target machine, and to make them
all "built in".
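Taken to its conclusion, an all-built-in kernel does not even need
loadable module support, so the configuration for a single machine
can look something like this sketch (every driver "=y", module
support off; again, the driver names are placeholders for whatever
your hardware needs):

```
# CONFIG_MODULES is not set     (no loadable module support at all)
CONFIG_SATA_AHCI=y              # disc controller (example)
CONFIG_EXT4_FS=y                # root filesystem (example)
CONFIG_E1000=y                  # network card (example)
CONFIG_SND_HDA_INTEL=y          # sound, built in rather than =m (example)
```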
Mac
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I speak only for myself, and I am unanimous in that!
--
http://linuxfromscratch.org/mailman/listinfo/lfs-support
FAQ: http://www.linuxfromscratch.org/lfs/faq.html
Unsubscribe: See the above information page