Here's problem #1: using qemux86-64 and the latest on my k3s-wip branch,
the master shows itself as Ready, but the pods are crashing. I need to
sort that out before I can get into the single-node config.

root@qemux86-64:~# uname -a

Linux qemux86-64 5.8.13-yocto-standard #1 SMP PREEMPT Tue Oct 6 12:23:29
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

root@qemux86-64:~# k3s --version

k3s version v1.19.3+gitAUTOINC+970fbc66d3 (970fbc66)

root@qemux86-64:~# kubectl get nodes

NAME         STATUS   ROLES    AGE   VERSION

qemux86-64   Ready    master   17m   v1.19.3-k3s1

And flannel is up:

5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
state UNKNOWN group default

    link/ether be:5e:55:3e:52:e6 brd ff:ff:ff:ff:ff:ff

    inet 10.42.0.0/32 scope global flannel.1

       valid_lft forever preferred_lft forever

    inet6 fe80::bc5e:55ff:fe3e:52e6/64 scope link

       valid_lft forever preferred_lft forever

6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
group default qlen 1000

    link/ether e2:38:35:11:31:6e brd ff:ff:ff:ff:ff:ff

    inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0

       valid_lft forever preferred_lft forever

    inet6 fe80::e038:35ff:fe11:316e/64 scope link

       valid_lft forever preferred_lft forever

1081: veth89c8ca0d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether b6:76:4f:77:e7:a6 brd ff:ff:ff:ff:ff:ff link-netns
cni-255cf837-ca3c-f08d-730f-bbb359cb6ae1

    inet6 fe80::8cf:bff:febb:85d7/64 scope link

       valid_lft forever preferred_lft forever

1082: veth001513a2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether 52:5c:64:41:c6:e1 brd ff:ff:ff:ff:ff:ff link-netns
cni-71490108-5f0a-a266-de1f-4bed565dcf88

    inet6 fe80::605b:23ff:fe50:c22f/64 scope link

       valid_lft forever preferred_lft forever

1083: veth4ae47008@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether be:0e:af:76:4d:66 brd ff:ff:ff:ff:ff:ff link-netns
cni-bb1a1e05-6fa1-e9dd-3de8-b3a4be47197a

    inet6 fe80::8467:31ff:fec3:6b20/64 scope link

       valid_lft forever preferred_lft forever

1084: veth07cd05c1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether 5e:75:7b:bc:ee:1c brd ff:ff:ff:ff:ff:ff link-netns
cni-07e997d6-7918-0f38-b1e5-1b08e9ac0581

    inet6 fe80::28e0:ceff:fe42:364d/64 scope link tentative

       valid_lft forever preferred_lft forever

root@qemux86-64:~# kubectl get pods -n kube-system

NAME                                     READY   STATUS             RESTARTS   AGE
local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m

root@qemux86-64:~#
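For reference, the usual next step with a CrashLoopBackOff is to pull the
events and the previous container's output. Assuming kubectl and the
crictl bundled with k3s are present on the image, something along these
lines (pod names taken from the listing above):

```shell
# Scheduler/kubelet events for one of the crashing pods
kubectl -n kube-system describe pod local-path-provisioner-7ff9579c6-bd947

# Output of the last failed attempt, if the runtime kept it around
kubectl -n kube-system logs --previous local-path-provisioner-7ff9579c6-bd947

# Ask the container runtime directly for exited containers
crictl ps -a --state Exited
```

If `logs --previous` comes back empty too, that usually points at the
container dying before its entrypoint ever ran, i.e. a runtime or host
image problem rather than an application one.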

Those failures are a bit different from what I used to see before moving to 1.19.

For whatever reason, my logs are empty, so I'm trying to see if my k3s host
image is missing something.
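One quick sanity check for that is k3s's built-in config checker, which
flags missing kernel options and cgroup controllers; a minimal sketch
(the journalctl path assumes a systemd-based image, adjust for
sysvinit/busybox syslog):

```shell
# Verify the kernel and cgroup prerequisites k3s expects
k3s check-config

# Scrape the k3s service output itself, since the pod logs are empty
journalctl -u k3s --no-pager | tail -50
```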

Bruce

On Tue, Nov 17, 2020 at 9:41 AM Bruce Ashfield via lists.yoctoproject.org
<[email protected]> wrote:

>
>
> On Tue, Nov 17, 2020 at 9:27 AM Joakim Roubert <[email protected]>
> wrote:
>
>> On 2020-11-17 15:19, Bruce Ashfield wrote:
>> >
>> >         But if you have an agent, that would be 2 nodes, right? One
>> >         master and
>> >         one worker node?
>> >
>> >     In kubernetes terms, perhaps. But physical boxes .. no. This needs
>> >     to be testable under a single instance (qemu in my case).
>>
>> I see. Well, I have never tried that, but would in that case use one
>> qemu instance for the master and a separate qemu instance for each
>> worker node. (I should try to run both master and a worker on one
>> machine and see what happens on my setup.)
>>
>
> I'm experimenting with two qemu sessions as well (I had to do this with
> meta-openstack), but then host networking becomes an issue and if that's
> also the box you are doing builds on (like I am), it can cause issues.
>
> Either way, I'm fairly close now and hopefully will have this sorted out
> soon.
>
> My build is still running, but I'll post specific logs later today.
>
> Bruce
>
>
>
>>
>> BR,
>>
>> /Joakim
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee
> at its end
> - "Use the force Harry" - Gandalf, Star Trek II

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#6101):
https://lists.yoctoproject.org/g/meta-virtualization/message/6101
-=-=-=-=-=-=-=-=-=-=-=-