Hi Bruce,
Thanks for your extensive answer and also the great work you put into
meta-virtualization <3
I watched some of your talks on YouTube. Very good structure; it helped
me a lot.
I fully understand the technical and architectural reasons, and I am
trying to move my customer slowly towards retiring the containers in
favor of a clean Yocto integration. But there are social aspects
(developers are used to containers, and since they kinda behave like
Ubuntu, they know their way around). So we need to make the transition
slowly to keep everybody on board.
Thank you for your input and ideas, I think I can craft something on
this basis.
Regards,
Matthias
On 10/25/24 19:19, Bruce Ashfield wrote:
On Fri, Oct 25, 2024 at 8:49 AM Matthias Schoepfer via
lists.yoctoproject.org <[email protected]> wrote:
Hi!
I have been searching the web but have only found tons of stuff on how
to build a Yocto image in a Docker container, and some on how to build
a container from a Yocto recipe. I need to embed a couple of (Docker)
containers into a read-only root file system. It is meant as an
intermediate step before simply integrating all the services directly
into the image. What can I say.
My idea is, for example, to podman pull the image and redirect the
storage location into the ${D} space. On the machine, it should then be
possible to configure podman to include this directory.
There have been several posts on the list, and even a session at one of
the hands-on labs (I think it was within the last year), about doing
just that.
For both technical and architectural reasons, those approaches aren't
something that will merge into OE-core or meta-virtualization. The
reasons have been discussed in those other threads, so I won't repeat
them here as I'm a bit short on time, but I didn't want to leave this
unanswered (there are licensing, compatibility, traceability,
reproducibility, etc., issues).
For meta-virtualization, I call it "cross installation of containers",
and while it could support (and I'm leaving that door open) 3rd-party
built containers, it is targeted at containers that have been built with
the Yocto Project itself. External containers will be left as an
exercise for people in their own layers.
While I can pull images on my host, when I try to write a recipe
(extending podman to be -native), I get all kinds of funny errors...
I also tried buildah.
That all being said, there is work underway in this area, and
I'm starting to push some of the support bits to master-next right now.
That includes a native set of tools, since in order for it to get into
meta-virtualization, it has to work with all the runtimes. I already
have podman, containerd, etc. working; docker is still the outlier that
I'm trying to get working without special permissions.
While waiting for that, you can easily work around the need for
-native versions of the yocto tools. I've done plenty of prototyping
with build machine container tools and haven't had any issues
(and in fact those talks and hands on classes I mentioned are
doing that as well).
The problem with docker is that the container file system storage
isn't the OCI standard, and it really isn't documented in a way that
I've been able to understand yet. So you can run docker on the
build host, pull a container to make it available, and then copy it
into the target rootfs while it is being built... but having it show up
to the target docker and be "runnable" is the challenge.
I'm playing around with running the target in qemu during the
build and letting docker install that way. But I'm not happy with
it yet.
Alternatively, if you are running a fully read-only rootfs, your
container state is going to be lost when the containers exit (and after
a reboot), so you could just mount a tmpfs over the container
store, have the containers available in the read-only rootfs and
have docker import them on first boot.
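A minimal first-boot sketch of that idea, assuming (hypothetically) the
images are shipped as docker-save tarballs under
/usr/share/container-images and the init system is systemd; all paths
and names are illustrative, not from the thread:

```
#!/bin/sh
# Hypothetical first-boot hook: the rootfs is read-only, so put
# docker's mutable state on a tmpfs, then load the archives that
# were baked into the image.
set -e

mount -t tmpfs tmpfs /var/lib/docker

# The daemon must be running before images can be loaded
# (init-system specific; systemd shown here).
systemctl start docker

# Each *.tar is assumed to be a "docker save" archive shipped in
# the read-only rootfs.
for tarball in /usr/share/container-images/*.tar; do
    docker load -i "$tarball"
done
```

Since the store lives on a tmpfs, the load cost is paid on every boot,
which is the trade-off of this approach.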
podman has some extra flexibility, as it is using a container store
that can be manipulated by OCI tools, so you can more easily run podman
on the build host, install the image into the target rootfs, and have
it show up as runnable.
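A rough sketch of that host-side flow, assuming podman is available on
the build host (the image name and install path are illustrative, and
this is prototyping along the lines discussed above, not a supported
meta-virtualization mechanism):

```
# Pull on the build host, redirecting podman's storage into the
# rootfs area being assembled (e.g. a recipe's ${D} install area).
# The vfs storage driver avoids host-specific overlay state in the
# resulting store.
podman --root ${D}/usr/share/containers/storage \
       --storage-driver vfs \
       pull docker.io/library/alpine:latest
```

The target's podman then needs to be pointed at that directory, which
is what the additionalimagestores option below is for.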
Which takes me back to my first statement that for this to land
in meta-virt, I need all the container runtime options working, which
is what is taking the extra time.
Bruce
Sorry, I am a noob when it comes to containers and their formats. I
always tried to avoid them, as they seem to me like an option to "just
statically link everything" so as not to have to bother with dependency
issues. Any hints and links are welcome.
```
[storage.options]
# Storage options to be passed to underlying storage drivers
# AdditionalImageStores is used to pass paths to additional Read/Only
# image stores
# Must be a comma-separated list.
additionalimagestores = [
]
```
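Filled in, that would look something like the following (the path is
illustrative and would have to match wherever the image store was
installed in the read-only rootfs):

```
[storage.options]
# Point podman at a read-only image store shipped in the rootfs.
additionalimagestores = [
  "/usr/share/containers/storage",
]
```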
Thanks and Regards,
Matthias
--
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II
View/Reply Online (#8939):
https://lists.yoctoproject.org/g/meta-virtualization/message/8939