On 23.02.2018 14:55, Norman Feske wrote:
Hello Valery,
I have the impression that your undertaking goes very much against the grain of our regular workflow. Frankly speaking, I fail to see the point of bypassing our tooling and manually working with boot modules instead. We deliberately moved away from relying on the boot loader to load individual ROM modules, for a number of reasons:
1. Not all kernels provide a way for the roottask (Genode's core) to access individual boot modules. In particular, seL4 and OKL4 do not.
2. On ARM, there is no such concept. Boot loaders on ARM load only a kernel image.
3. For the reasons above, Genode has to provide a way to include the initial ROM modules in the boot image. Using this mechanism across all base platforms reduces Genode's complexity and ensures that the solution is well tested. In contrast, the previously used kernel-specific code was more fragile.
4. We hit limitations of several multi-boot loaders. E.g., I am thinking of GRUB's maximum number of boot modules. With iPXE, we hit other surprises, such as a slightly different naming convention for boot modules. By using one unified mechanism, we rule out these sources of trouble.
5. Shuffling boot modules manually is bug prone. Letting a run script generate one image that contains all needed ingredients in their current version eliminates the chance of inconsistencies between the modules.
All this is understandable, and I'm aware of most of this background. But on x86 with the usual foc/nova kernels, the limitations of ARM boot loaders and of some x86 kernels do not apply. Yes, GRUB has a limit of 99 modules at most. In my own GRUB-based loader, I increased it to about 200, which was sufficient for my use cases (I wrote a multiboot kernel that allows booting OS/2 with GRUB-like loaders; it required about 70 modules, and that was enough).
In short, we went in at the deep end, experienced the limits of the multi-boot approach, and decided on a more robust one. Our tooling reflects that. It admittedly makes it difficult to edit init's configuration on the fly - as you noted - and requires you to execute the run script after each change. You present this as a limitation. But I regard the approach of mutating the boot image manually as misguided. It is not only bug prone but also evades the versioning of the individual modifications. In contrast, if you embrace working with run scripts, you can always reproduce your scenario and naturally track modifications using Git.
Yes, this is all understandable too. But I cannot run the "run" scripts after changing the configuration: if the configuration files are contained inside the core image, the image has to be rebuilt, and I cannot rebuild it without access to my development machine. I don't change binaries, I mostly change the configuration files. I update the binaries after changing and recompiling them, so all binaries should be at the latest version. (This is regarding the versioning.)
So, the reason I want to bypass the build system is that, at the moment, I have no spare machine with Intel AMT technology, so I cannot deploy the image generated by the run script to my test machine automatically. (I think most people outside the core Genode team have no ThinkPads with Intel AMT support. I have one ThinkPad, but it runs my development Linux system, and rebooting it each time is not desirable. My third test machine is an Asus Core2Duo machine with an Intel chipset and an Nvidia video card, so it has no Intel AMT support either.) So I just copy the image manually to my bootable flash stick and try to boot it on my test machine by hand. To avoid regenerating the "core" image, I'd like to split it back into separate files. Regenerating the image after each "config" change is simply not feasible here.
It would require copying the whole big "core" image to the flash stick, which takes too long (and requires access to my development machine). That's why I want "core" to be disassembled into separate files. For my specific case, it is simply more convenient to edit only the "config" files, without having to regenerate and copy the image over. So, I'd like a way to bypass the usual approach.
But if that is not possible via standard options in etc/build.conf, I'd like at least a way to create the "core" image without any modules included. I modified the "run" tool a bit, but it does not like an empty modules list. So, my last question is how I would create the image with an empty modules list. I see the "run" tool generating an assembler file with the module list. Is it possible to make it empty somehow?
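For reference, the generated file I see contains, per module, a header entry plus an .incbin of the module's data. So I would expect an empty variant to look roughly like this (a sketch in GAS syntax; the _boot_modules_* symbol names are my assumption about what core expects to link against, not necessarily what the run tool actually emits):

    .section .data

    .global _boot_modules_headers_begin
    _boot_modules_headers_begin:
    /* normally one header (name pointer, start address, size) per module */
    .global _boot_modules_headers_end
    _boot_modules_headers_end:

    .global _boot_modules_binaries_begin
    _boot_modules_binaries_begin:
    /* normally one .incbin per module */
    .global _boot_modules_binaries_end
    _boot_modules_binaries_end:

If core just iterates from _boot_modules_headers_begin to _boot_modules_headers_end, an empty range like this should simply yield zero embedded boot modules.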
PS: A feature request for the Genode build system: maybe it would be good to add options to etc/build.conf to disable removing the "genode" subdirectory containing all the binaries, and to optionally generate "core" alongside "image.elf", i.e., a core image without embedded modules. That would be more convenient if a developer wants to copy modules to some GRUB installation manually. One could then run the run script to generate both the "image.elf" and the separate binaries, which would allow one both to 1) deploy the scenario on the test machine automatically and 2) copy the scenario manually into a GRUB config file. For manual copying, separate binaries are more convenient: they are not duplicated in each scenario's "image.elf" (which takes much disk space), and the same binaries can be reused in many scenarios. Also, with all modules built into "image.elf", it is not comfortable to edit the "config" files without regenerating "image.elf".
The latter point was indeed a concern we had when unifying the
boot-module handling. However, on closer inspection, we found three
possible scenarios where the mechanism is used:
1. The majority of run scripts are test cases. They are small and are
executed ad-hoc (or automatically) but are never permanently
installed. So sharing binaries across scenarios would not give
any benefit.
But still, all the binaries except the test itself can be shared. The tests use the same "general" components (which are under test) as other scenarios.
2. Run scripts that describe self-sustaining systems, like the
Turmvilla scenario. Here we have a large base system consisting of
many boot modules, maybe even including virtual disk images.
This situation likely corresponds to yours. It would be nice to
reuse selected boot modules (like a virtual disk image) between
scenarios. The single-image approach is clearly limiting.
3. Run scripts that create the boot image of a multi-stage scenario,
like the Sculpt scenario. Here, the boot image contains merely the
components needed to bootstrap a second stage from within Genode.
The initial boot image features a block-device driver, file system,
and fs-rom server. The interesting part happens at a second stage
where the Genode system can access information from the disk
directly.
Of these three cases, only the second one would really benefit from loading individual ROM modules as multi-boot modules. Based on our experience with Turmvilla, we figured that scenarios of this type - where a complex system is bootstrapped by the boot loader only - do not scale well. Since this direction is inherently limiting, we should stop pursuing it and instead embrace systems of the third type.
With Sculpt as the most prominent example of the third type, one can already see the benefits. Thanks to fetching all ingredients from the depot, executing the run script is fast. The resulting boot image is quite small (less than 20 MiB) and will stay small even when the second-stage system grows. The potential benefit of sharing parts of it between multiple scenarios is negligible.
I hope that this background sheds some light on the line of thought that went into the boot-module handling of Genode. Please understand that we would not like to go back to supporting earlier approaches that haven't worked out for us.
Most scenarios are still of type 2; dynamic scenarios of type 3 are not common yet. It seems only Sculpt belongs to this class. BTW, so far I cannot run the Sculpt, VirtualBox, or Seoul scenarios. NOVA seems to hang on my three available machines for some reason, so I tried the Fiasco.OC kernel instead (but VirtualBox and Seoul work on NOVA only). My attempts to run Sculpt have not been successful either: it looks like acpi_drv does not like something in the ACPI tables of my test machine.
Cheers
Norman