Hi Bruce,
On my side, flannel did not crash.
I started the k3s server without the agent and confirmed the node was ready.
I then copied the node_token and ran the k3s agent with that token and the
server URL; the node then registered successfully.
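For completeness, the sequence above can be sketched as a shell script. The
--disable-agent flag, the token path, and the <server-ip> placeholder are
assumptions from the k3s documentation of this era, not verbatim from my run,
and the script is guarded so it is a no-op on a machine without k3s:

```shell
#!/bin/sh
# Sketch of the server-then-agent join sequence (guarded: no-op without k3s).
if command -v k3s >/dev/null 2>&1; then
    # 1. Start the server without its built-in agent.
    k3s server --disable-agent >/tmp/k3s-server.log 2>&1 &

    # 2. Wait for the join token the server writes out, then read it.
    TOKEN_FILE=/var/lib/rancher/k3s/server/node-token
    while [ ! -s "$TOKEN_FILE" ]; do sleep 1; done
    TOKEN=$(cat "$TOKEN_FILE")

    # 3. On the agent node, register against the server URL with that token.
    #    <server-ip> is a placeholder for the server's address.
    k3s agent --server "https://<server-ip>:6443" --token "$TOKEN" &
    RESULT=started
else
    RESULT=skipped
fi
echo "join sequence: $RESULT"
```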
The traefik and coredns pods went not-ready for a short while after I
started the k3s agent:
kube-system   traefik-758cd5fc85-8tqhf   0/1   Running   0   24m
kube-system   coredns-7944c66d8d-584jt   0/1   Running   0   21m
but they soon came back to the ready state:
kube-system   coredns-7944c66d8d-584jt   1/1   Running   0   21m
kube-system   traefik-758cd5fc85-8tqhf   1/1   Running   0   24m
I did not see any errors related to a flannel crash. Hope this helps a bit.
Best Regards,
Lance
> -----Original Message-----
> From: Bruce Ashfield <[email protected]>
> Sent: Wednesday, November 11, 2020 9:40 PM
> To: Lance Yang <[email protected]>
> Cc: Joakim Roubert <[email protected]>;
> [email protected]; Michael Zhao <[email protected]>;
> Kaly Xin <[email protected]>
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Wed, Nov 11, 2020 at 5:07 AM Lance Yang <[email protected]> wrote:
> >
> > Hi Bruce,
> >
> > Thanks for your reply. It took me a long time to address the $PATH issue.
> >
> > I found the cause after I cleaned my environment: I had set my own $PATH
> > before k3s startup, and it did not include "/bin". Some components in k3s
> > use os.Getenv("PATH") to search for the related tools.
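That lookup behaviour can be reproduced without k3s at all. A hypothetical
sketch using a fake tool in a temporary directory (all names here are made
up): the lookup fails when the tool's directory is missing from the
process's PATH, even though an interactive shell with a fuller PATH finds it.

```shell
#!/bin/sh
# Create a fake "mount" tool in a temp directory.
dir=$(mktemp -d)
printf '#!/bin/sh\n' > "$dir/mount"
chmod +x "$dir/mount"

# Lookup with the tool's directory absent from PATH (as k3s saw it):
(PATH=/nonexistent command -v mount) || echo "mount: not found in PATH"

# Lookup once the directory is on PATH again:
found=$(PATH="/nonexistent:$dir" command -v mount)
echo "found: $found"
rm -rf "$dir"
```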
> >
> > Since some OSes may not have systemd, and the k3s-clean script seems not
> > to delete all interfaces created by CNI or unmount all mount points
> > related to k3s, I wrote a script for this situation. I will send it in a
> > separate email, based on your previous scripts.
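A sketch of what such a cleanup script might look like; the mount-point
patterns and interface names are assumptions, not the actual script from
this thread, and it only prints what it would do unless DRY_RUN=0:

```shell
#!/bin/sh
# Dry-run cleanup sketch for a k3s node without systemd.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# Unmount k3s-related mount points, deepest paths first.
if [ -r /proc/mounts ]; then
    awk '$2 ~ "^/run/k3s|^/var/lib/rancher/k3s|^/var/lib/kubelet" {print $2}' /proc/mounts \
        | sort -r | while IFS= read -r mnt; do
            run umount "$mnt"
        done
fi

# Delete interfaces that CNI/flannel typically create.
for iface in cni0 flannel.1; do
    if command -v ip >/dev/null 2>&1 && ip link show "$iface" >/dev/null 2>&1; then
        run ip link delete "$iface"
    fi
done

done_cleanup=1
echo "cleanup pass complete"
```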
> >
>
> Sounds good.
>
> You'll notice that I pushed an update to the k3s-wip branch; things are a
> bit closer to working there now.
>
> I am interested to hear if flannel crashes if you (or anyone else) install
> (and use) the k3s-agent
> on the same node as the k3s-server.
>
> As soon as you run any k3s-agent command (with the proper token and
> localhost as the server), the node moves into NotReady with what looks like
> a CNI error.
>
> I fetched and ran the binaries directly from rancher, and they showed the
> same behaviour as the meta-virt ones, so it isn't something fundamental to
> the integration.
>
> Bruce
>
> > Best Regards,
> > Lance
> > > -----Original Message-----
> > > From: [email protected]
> > > <[email protected]>
> > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > Sent: Tuesday, November 10, 2020 9:35 PM
> > > To: Bruce Ashfield <[email protected]>
> > > Cc: Lance Yang <[email protected]>; Joakim Roubert
> > > <[email protected]>; [email protected];
> > > Michael Zhao <[email protected]>; Kaly Xin <[email protected]>
> > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > >
> > > On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via
> > > lists.yoctoproject.org <[email protected]>
> > > wrote:
> > > >
> > > > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> > > > lists.yoctoproject.org
> > > > <[email protected]> wrote:
> > > > >
> > > > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <[email protected]> wrote:
> > > > > >
> > > > > > Hi Bruce and Joakim,
> > > > > >
> > > > > > Thanks for sharing this branch: k3s-wip. I have tested against my
> > > > > > yocto build.
> > > > >
> > > > > The branch will be more functional shortly; I have quite a few
> > > > > changes that factor things for k8s and make it generally more
> > > > > usable :D
> > > > >
> > > > > modified: classes/cni_networking.bbclass
> > > > > modified: conf/layer.conf
> > > > > modified: recipes-containers/containerd/containerd-docker_git.bb
> > > > > modified: recipes-containers/containerd/containerd-opencontainers_git.bb
> > > > > modified: recipes-containers/k3s/README.md
> > > > > modified: recipes-containers/k3s/k3s_git.bb
> > > > > modified: recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > > > > modified: recipes-networking/cni/cni_git.bb
> > > > > container-deploy.txt
> > > > > recipes-core/packagegroups/
> > > > >
> > > > > >
> > > > > > My Image: Linux qemuarm64 by yocto.
> > > > > >
> > > > > > The master node becomes ready after I start the k3s server.
> > > > > > However, the pods in kube-system (which are essential components
> > > > > > for k3s) cannot reach the ready state on qemuarm64.
> > > > > >
> > > > >
> > > > > That's interesting, since in my configuration, the master never comes
> > > > > ready:
> > > > >
> > > > > root@qemux86-64:~# kubectl get nodes
> > > > > NAME STATUS ROLES AGE VERSION
> > > > > qemux86-64 NotReady master 15h v1.18.9-k3s1
> > > > >
> > > >
> > > > Hah.
> > > >
> > > > I finally got the node to show up as ready:
> > > >
> > > > root@qemux86-64:~# kubectl get nodes
> > > > NAME STATUS ROLES AGE VERSION
> > > > qemux86-64 Ready master 112s v1.18.9-k3s1
> > > >
> > >
> > > Lance,
> > >
> > > What image type were you building? I'm pulling in dependencies to
> > > packagegroups and the recipes themselves.
> > >
> > > I'm not seeing the mount issue on my master/server node:
> > >
> > > root@qemux86-64:~# kubectl get pods -n kube-system
> > > NAME                                     READY   STATUS      RESTARTS   AGE
> > > local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
> > > metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
> > > helm-install-traefik-229v7               0/1     Completed   0          3m32s
> > > coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
> > > svclb-traefik-pb5j4                      2/2     Running     0          2m29s
> > > traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s
> > >
> > > I'm going back to all-in-one node debugging, but can look into the mount
> > > issue more later.
> > >
> > > Bruce
> > >
> > > > I'm attempting to build an all-in-one node, and that is likely
> > > > causing me some issues.
> > > >
> > > > I'm revisiting those potential conflicts now.
> > > >
> > > > But if anyone else does have an all in one working and has some
> > > > tips, feel free to share :D
> > > >
> > > > Bruce
> > > >
> > > > > I've sorted out more of the dependencies, and have packagegroups
> > > > > to make them easier now.
> > > > >
> > > > > Hopefully, I can figure out what is now missing and keeping my
> > > > > master from moving into ready today.
> > > > >
> > > > > Bruce
> > > > >
> > > > > > After the master node itself turned to the ready state, I checked
> > > > > > the pods with kubectl:
> > > > > >
> > > > > > kubectl get nodes
> > > > > > NAME STATUS ROLES AGE VERSION
> > > > > > qemuarm64 Ready master 11m v1.18.9-k3s1
> > > > > > root@qemuarm64:~# ls
> > > > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > > > >
> > > > > > Then I described the pods, and the events showed:
> > > > > >
> > > > > > Events:
> > > > > >   Type     Reason       Age                  From               Message
> > > > > >   ----     ------       ----                 ----               -------
> > > > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
> > > > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
> > > > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> > > > > >
> > > > > > The events say the "mount" binary is not found in $PATH. However,
> > > > > > I confirmed $PATH and the mount binary on my qemuarm64 image:
> > > > > >
> > > > > > root@qemuarm64:~# echo $PATH
> > > > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > root@qemuarm64:~# which mount
> > > > > > /bin/mount
> > > > > >
> > > > > > When I typed the mount command, it worked fine:
> > > > > >
> > > > > > /dev/root on / type ext4 (rw,relatime)
> > > > > > devtmpfs on /dev type devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > > > proc on /proc type proc (rw,relatime)
> > > > > > sysfs on /sys type sysfs (rw,relatime)
> > > > > > debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> > > > > > tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > > > > > ... (skipped the verbose output)
> > > > > >
> > > > > > I would like to know whether you have ever run into this "mount" issue.
> > > > > >
> > > > > > Best Regards,
> > > > > > Lance
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: [email protected]
> > > > > > > <[email protected]>
> > > > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > > > To: Joakim Roubert <[email protected]>
> > > > > > > Cc: [email protected]
> > > > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s
> > > > > > > recipe
> > > > > > >
> > > > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert
> > > > > > > <[email protected]> wrote:
> > > > > > > >
> > > > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > > > Ha!!!!
> > > > > > > > >
> > > > > > > > > This applies.
> > > > > > > >
> > > > > > > > Wonderful, thank you! I guess this is what is called "five
> > > > > > > > times lucky"...
> > > > > > > >
> > > > > > > > > I'm now testing and completing some of my networking
> > > > > > > > > factoring, as well as importing / forking some recipes
> > > > > > > > > to avoid extra layer depends.
> > > > > > > >
> > > > > > > > Excellent!
> > > > > > >
> > > > > > > I've pushed some of my WIP to:
> > > > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > > > > > >
> > > > > > > That includes the split of the networking, the import of
> > > > > > > some of the dependencies and some small tweaks I'm working on.
> > > > > > >
> > > > > > > I did have a couple of questions on the k3s packaging itself;
> > > > > > > I was getting the following error:
> > > > > > >
> > > > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > > > Files/directories were installed but not shipped in any package:
> > > > > > > /usr/local/bin/k3s-clean
> > > > > > > /usr/local/bin/crictl
> > > > > > > /usr/local/bin/kubectl
> > > > > > > /usr/local/bin/k3s
> > > > > > >
> > > > > > > So I added them to the FILES of the k3s package itself (so both
> > > > > > > k3s-server and k3s-agent will get them). Is that the split you
> > > > > > > were looking for?
> > > > > > >
> > > > > > > Bruce
> > > > > > >
> > > > > > > >
> > > > > > > > BR,
> > > > > > > >
> > > > > > > > /Joakim
> > > > > > > > --
> > > > > > > > Joakim Roubert
> > > > > > > > Senior Engineer
> > > > > > > >
> > > > > > > > Axis Communications AB
> > > > > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > > > >
> > > > > > > --
> > > > > > > - Thou shalt not follow the NULL pointer, for chaos and
> > > > > > > madness await thee at its end
> > > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > > IMPORTANT NOTICE: The contents of this email and any attachments
> > > > > > are confidential and may also be privileged. If you are not the
> > > > > > intended recipient, please notify the sender immediately and do
> > > > > > not disclose the contents to any other person, use it for any
> > > > > > purpose, or store or copy the information in any medium. Thank you.
> > > > >
> > > >
> > >
>
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#6062):
https://lists.yoctoproject.org/g/meta-virtualization/message/6062
-=-=-=-=-=-=-=-=-=-=-=-