Hi Bruce,

Thanks for your quick reply.
Yesterday, I pulled the meta-virtualization master-next branch to my local server and tried to build k3s into a qemux86-64 image. In master-next, I found you had created a DISTRO_FEATURES check for k3s and some package groups for it, e.g. packagegroup-k3s-node and packagegroup-k3s-host. I configured my local.conf to install k3s as follows:

DISTRO_FEATURES_append = " k3s virtualization"
IMAGE_INSTALL_append = " packagegroup-k3s-node packagegroup-k3s-host cgroup-lite kernel-modules"

However, when I ran "bitbake core-image-minimal" to build a minimal image to test k3s, I encountered the following error:

gcc: error: unrecognized command line option ‘-fmacro-prefix-map=/path/to/build/tmp/work/core2-64-poky-linux/libvirt/6.3.0-r0=/usr/src/debug/libvirt/6.3.0-r0’
| error: command '/home/lance/repos/build/tmp/hosttools/gcc' failed with exit code 1
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of '/path/to/build/tmp/work/core2-64-poky-linux/libvirt/6.3.0-r0/temp/run.do_compile.162048' failed with exit code 1:

The error seems to be related to libvirt. I have a few questions here; if you need more log details, please let me know.

1. Did I misconfigure anything to cause the error above? Could you please share the configuration you use when you build the image with k3s?
2. Do we need libvirt when we install k3s? In my experience, libvirt is for virtualization; when I run k3s on bare metal, I don't install any libvirt component, and I have previously run k3s successfully without libvirt on aarch64 Yocto Linux.

Once I have successfully built the qemux86-64 image, I am willing to spend some effort reproducing the bugs you mentioned in your previous email. Would you mind sharing your config with me?
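For what it's worth, the unrecognized -fmacro-prefix-map option was only introduced in GCC 8, and the failing compiler is tmp/hosttools/gcc, i.e. the build host's own toolchain, so my current guess is that my host gcc is simply too old for libvirt's default flags. If libvirt turns out not to be needed for k3s at all, I may also try something along these lines (untested sketch only; the names are just taken from my configuration above, and I have not verified that dropping "virtualization" and packagegroup-k3s-host actually removes libvirt from the dependency chain):

```conf
# Hypothetical local.conf experiment -- untested. Keep the k3s pieces but
# drop the "virtualization" DISTRO_FEATURE and packagegroup-k3s-host, in
# case one of those is what pulls libvirt into the build:
DISTRO_FEATURES_append = " k3s"
IMAGE_INSTALL_append = " packagegroup-k3s-node cgroup-lite kernel-modules"
```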
Best Regards, Lance > -----Original Message----- > From: Bruce Ashfield <[email protected]> > Sent: Friday, December 11, 2020 9:44 PM > To: Lance Yang <[email protected]> > Cc: Joakim Roubert <[email protected]>; meta-virtualization <meta- > [email protected]>; nd <[email protected]>; Michael Zhao > <[email protected]> > Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3 > > On Fri, Dec 11, 2020 at 1:31 AM Lance Yang <[email protected]> wrote: > > > > Hi Bruce and Joakim, > > > > > > > > I recently saw a new branch master-next has been created in meta- > virtualization. It mainly related to k3s and I saw k3s had been bumped to > v1.19. I would like to know if the k3s patch will be merged into master branch > soon. I remembered you mentioned k3s v1.18 has some agent registration > issue and some had crash loop issues. > > > > > > > > What is the current status of the k3s patch? Is there any further issues? > > It is just in master-next, to show that some cleanups have been done, and > the supporting component version bumps that I've done, but no, it still > doesn't work under qemux86-64, so it can't actually merge to master. > > The node becomes ready, but all the core containers go into a crashbackoff > loop, kubectl logs doesn't work, and we still can't register a client with it > (likely because of symptom #1). > > I'm also chasing down an iptables issue (that I had previously fixed), but > should have that sorted shortly. 
> > Bruce > > > > > > > > > Best Regards, > > > > Lance > > > > From: [email protected] > > <[email protected]> On Behalf Of Bruce > > Ashfield via lists.yoctoproject.org > > Sent: Thursday, November 19, 2020 4:39 AM > > To: Joakim Roubert <[email protected]> > > Cc: meta-virtualization <[email protected]> > > Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3 > > > > > > > > > > > > > > > > On Wed, Nov 18, 2020 at 1:27 PM Joakim Roubert > <[email protected]> wrote: > > > > On 2020-11-17 20:39, Bruce Ashfield wrote: > > > > > > root@qemux86-64:~# kubectl get pods -n kube-system NAME READY > STATUS > > > RESTARTS AGE > > > local-path-provisioner-7ff9579c6-bd947 0/1 CrashLoopBackOff 822m > > > coredns-66c464876b-76bdl 0/1 CrashLoopBackOff 822m > > > helm-install-traefik-mvt5k 0/1 CrashLoopBackOff 822m > > > metrics-server-7b4f8b595-ck5kz 0/1 CrashLoopBackOff 822m > > > root@qemux86-64:~# > > > > From my own experience with CrashLoopBackOff situations is that there > > may be serveral different things causing that. Can you trace down what > > is causing the CrashLoopBackOff here? > > > > > > > > Not really. The build seems to have some sort of misconfiguration and no > logs are created (and hence can't be accessed via kubectl). > > > > > > > > Running the server on the command line just gets us reams of messages > and a lot of false leads (too many nameservers, iptables --random-fully not > supported, etc ). > > > > > > > > I've tried a few variants today, but can't get any really simple logs out. > > I'll do > more tomorrow. > > > > > > > > I've been able to confirm that my packagegroup/image type runs docker, > containerd, and runc launched containers just fine, so this is just some > quirk/idiocy of k*s that is causing the problem. 
> > > > > > > > Bruce > > > > > > > > > > BR, > > > > /Joakim > > > > > > > > > > > > -- > > > > - Thou shalt not follow the NULL pointer, for chaos and madness await > > thee at its end > > - "Use the force Harry" - Gandalf, Star Trek II > > > > IMPORTANT NOTICE: The contents of this email and any attachments are > confidential and may also be privileged. If you are not the intended > recipient, > please notify the sender immediately and do not disclose the contents to any > other person, use it for any purpose, or store or copy the information in any > medium. Thank you. > > > > -- > - Thou shalt not follow the NULL pointer, for chaos and madness await thee > at its end > - "Use the force Harry" - Gandalf, Star Trek II
