Thank you, Bruce. I will do more research in that direction (kernel
configuration).
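
As a first check, the running kernel's configuration can be grepped for the options docker needs. Below is a minimal sketch; the option list is partial and the function name is my own (the moby project ships a far more complete check-config.sh):

```shell
# check_docker_kconfig: report container-related kernel options missing
# from a plain-text kernel config file. Partial list only; moby's
# check-config.sh covers the full set.
check_docker_kconfig() {
    config="$1"
    missing=0
    for opt in CONFIG_BRIDGE CONFIG_BRIDGE_NETFILTER CONFIG_NF_CONNTRACK \
               CONFIG_NF_CONNTRACK_NETLINK CONFIG_VETH CONFIG_OVERLAY_FS \
               CONFIG_CGROUPS CONFIG_NAMESPACES; do
        # =y (built in) or =m (module) both count as present
        if grep -Eq "^${opt}=[ym]" "$config"; then
            echo "ok      $opt"
        else
            echo "MISSING $opt"
            missing=$((missing + 1))
        fi
    done
    echo "$missing option(s) missing"
}
```

On a target with CONFIG_IKCONFIG_PROC enabled this can be pointed at the running kernel via `zcat /proc/config.gz > /tmp/config && check_docker_kconfig /tmp/config`; otherwise use the .config in the kernel build directory.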

Regards
Simon
> On Feb 25, 2023, at 5:20 PM, Bruce Ashfield <[email protected]> wrote:
> 
> On Sat, Feb 25, 2023 at 5:35 PM SIMON BABY <[email protected]> wrote:
>> 
>> Hi Bruce,
>> I also observed that the docker daemon does not start by default, and when I 
>> launch it manually, it takes a long time to start. Am I missing any kernel 
>> modules?
>> 
>> Here is the output from "systemctl status docker.service":
>> 
>> root@imx8mpevk:~# systemctl status docker.service
>> * docker.service - Docker Application Container Engine
>>     Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor 
>> preset: disabled)
>>     Active: active (running) since Sat 2023-02-25 22:19:54 UTC; 4min 10s ago
>> TriggeredBy: * docker.socket
>>       Docs: https://docs.docker.com
>>   Main PID: 423 (dockerd)
>>      Tasks: 11 (limit: 5578)
>>     Memory: 115.0M
>>     CGroup: /system.slice/docker.service
>>             `-423 /usr/bin/dockerd -H fd://
>> 
>> Feb 25 22:19:53 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:53.837738928Z" level=warning msg="Running modprobe 
>> bridge br_netfilter failed with message: modprobe: WARNING: Module 
>> br_netfilter not found in director...ror: exit status 1"
> 
> The above error could be a missing br_netfilter module, or a missing
> iptables module.
> 
> 
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.071250923Z" level=warning msg="Could not load 
>> necessary modules for IPSEC rules: protocol not supported"
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.078250217Z" level=warning msg="Could not load 
>> necessary modules for Conntrack: Running modprobe nf_conntrack_netlink 
>> failed with message: `modprobe: WARNING: Module nf_...
> 
> As does the above one.
> 
> So you definitely have missing kernel configuration.
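> 
> As a sketch, a kernel configuration fragment covering the modules named in
> those warnings could look like the following. CONFIG_BRIDGE_NETFILTER and
> CONFIG_NF_CONNTRACK_NETLINK come straight from the log; the XFRM options are
> an assumption based on dockerd's "IPSEC rules" warning. Verify all of them
> against your 5.15 kernel source:
> 
> ```
> # docker.cfg -- hypothetical fragment name
> CONFIG_BRIDGE=m
> CONFIG_BRIDGE_NETFILTER=m
> CONFIG_NF_CONNTRACK=m
> CONFIG_NF_CONNTRACK_NETLINK=m
> CONFIG_XFRM_USER=m
> CONFIG_XFRM_ALGO=m
> ```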
> 
> Bruce
> 
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.081471487Z" level=info msg="Default bridge 
>> (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip 
>> can be used to set a preferred IP address"
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.199132980Z" level=info msg="Loading containers: 
>> done."
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.296845346Z" level=info msg="Docker daemon" 
>> commit=906f57ff5b-unsupported graphdriver(s)=overlay2 version=20.10.12-ce
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.297236599Z" level=info msg="Daemon has completed 
>> initialization"
>> Feb 25 22:19:54 imx8mpevk systemd[1]: Started Docker Application Container 
>> Engine.
>> Feb 25 22:19:54 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:19:54.372354197Z" level=info msg="API listen on 
>> /run/docker.sock"
>> Feb 25 22:23:14 imx8mpevk dockerd[423]: 
>> time="2023-02-25T22:23:14.188738979Z" level=info msg="ignoring event" 
>> container=a973c205bf7c0e57450de3241767f39e4983b6b174e231e014159ed8ae220791 
>> module=libcontainerd namespace...*events.TaskDelete"
>> Hint: Some lines were ellipsized, use -l to show in full.
>> root@imx8mpevk:~#
>> 
>> 
>> Regards
>> Simon
>> 
>>> On Fri, Feb 24, 2023 at 6:47 PM SIMON BABY via lists.yoctoproject.org 
>>> <[email protected]> wrote:
>>> 
>>> Hello Bruce,
>>> 
>>> Thank you for the inputs.
>>> 
>>> 
>>> Yes, I use linux-yocto. The target linux version is below.
>>> 
>>> 
>>> 
>>> Linux imx8mpevk 5.15.32-rt39-lts-next+g2a8a193a07b4 #1 SMP PREEMPT_RT Tue 
>>> Jun 7 02:34:46 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
>>> 
>>> 
>>> 
>>> The layers used are in the link below.
>>> 
>>> https://source.codeaurora.org/external/imx/imx-manifest/tree/imx-5.15.32-2.0.0.xml?h=imx-linux-kirkstone
>>> 
>>> 
>>> 
>>> I tried adding IMAGE_INSTALL:append = " kernel-modules" to local.conf, but 
>>> it did not make any difference.
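>>> 
>>> ("kernel-modules" only packages modules the kernel build actually produced;
>>> if the options were never enabled, a config fragment has to be added to the
>>> kernel recipe first. A minimal sketch of the wiring, assuming the recipe is
>>> linux-imx and that it supports kernel-yocto style .cfg fragments -- both
>>> assumptions to verify for this BSP:)
>>> 
>>> ```
>>> # meta-mylayer/recipes-kernel/linux/linux-imx_%.bbappend  (hypothetical layer)
>>> FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
>>> # docker.cfg carries the CONFIG_* options the dockerd warnings point at
>>> SRC_URI += "file://docker.cfg"
>>> ```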
>>> 
>>> 
>>> 
>>> The docker version I am running on the target is 20.10.12-ce.
>>> 
>>> 
>>> 
>>> Below is the error I am getting on the target.
>>> 
>>> 
>>> 
>>> root@imx8mpevk:~# docker run hello-world
>>> 
>>> [ 1359.005452] docker0: port 1(veth4dc9000) entered blocking state
>>> 
>>> [ 1359.005512] docker0: port 1(veth4dc9000) entered disabled state
>>> 
>>> [ 1359.005921] device veth4dc9000 entered promiscuous mode
>>> 
>>> [ 1359.005994] audit: type=1700 audit(1677283528.914:37): dev=veth4dc9000 
>>> prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
>>> 
>>> [ 1359.013139] audit: type=1300 audit(1677283528.914:37): arch=c00000b7 
>>> syscall=206 success=yes exit=40 a0=e a1=4000ec0d50 a2=28 a3=0 items=0 
>>> ppid=1 pid=446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
>>> sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" 
>>> exe="/usr/bin/dockerd" key=(null)
>>> 
>>> [ 1359.013228] audit: type=1327 audit(1677283528.914:37): 
>>> proctitle=2F7573722F62696E2F646F636B657264002D480066643A2F2F
>>> 
>>> [ 1359.263483] docker0: port 1(veth4dc9000) entered disabled state
>>> 
>>> [ 1359.298263] device veth4dc9000 left promiscuous mode
>>> 
>>> [ 1359.298305] docker0: port 1(veth4dc9000) entered disabled state
>>> 
>>> [ 1359.298646] audit: type=1700 audit(1677283529.164:38): dev=veth4dc9000 
>>> prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
>>> 
>>> docker: Error response from daemon: failed to create shim task: OCI runtime 
>>> create failed: runc create failed: unable to start container process: can't 
>>> get final child's PID from pipe: EOF: unknown.
>>> 
>>> ERRO[0000] error waiting for container: context canceled
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> I am also sending my local.conf and bblayers.conf files.
>>> 
>>> 
>>> 
>>> local.conf:
>>> 
>>> 
>>> 
>>> MACHINE ??= 'imx8mpevk'
>>> 
>>> DISTRO ?= 'fsl-imx-wayland'
>>> 
>>> PACKAGE_CLASSES ?= 'package_rpm'
>>> 
>>> EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
>>> 
>>> USER_CLASSES ?= "buildstats"
>>> 
>>> PATCHRESOLVE = "noop"
>>> 
>>> BB_DISKMON_DIRS ??= "\
>>> 
>>>    STOPTASKS,${TMPDIR},1G,100K \
>>> 
>>>    STOPTASKS,${DL_DIR},1G,100K \
>>> 
>>>    STOPTASKS,${SSTATE_DIR},1G,100K \
>>> 
>>>   STOPTASKS,/tmp,100M,100K \
>>> 
>>>    HALT,${TMPDIR},100M,1K \
>>> 
>>>    HALT,${DL_DIR},100M,1K \
>>> 
>>>    HALT,${SSTATE_DIR},100M,1K \
>>> 
>>>    HALT,/tmp,10M,1K"
>>> 
>>> PACKAGECONFIG:append:pn-qemu-system-native = " sdl"
>>> 
>>> CONF_VERSION = "2"
>>> 
>>> 
>>> 
>>> DL_DIR ?= "${BSPDIR}/downloads/"
>>> 
>>> ACCEPT_FSL_EULA = "1"
>>> 
>>> 
>>> 
>>> # Switch to Debian packaging and include package-management in the image
>>> 
>>> PACKAGE_CLASSES = "package_deb"
>>> 
>>> EXTRA_IMAGE_FEATURES += "package-management"
>>> 
>>> DISTRO_FEATURES:append = " virtualization"
>>> 
>>> IMAGE_INSTALL:append = " docker-ce"
>>> 
>>> IMAGE_INSTALL:append = " kernel-modules"
>>> 
>>> 
>>> 
>>> EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile"
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> bblayers.conf
>>> 
>>> 
>>> 
>>> LCONF_VERSION = "7"
>>> 
>>> 
>>> 
>>> BBPATH = "${TOPDIR}"
>>> 
>>> BSPDIR := ${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + 
>>> '/../..')}
>>> 
>>> 
>>> 
>>> BBFILES ?= ""
>>> 
>>> BBLAYERS = " \
>>> 
>>>  ${BSPDIR}/sources/poky/meta \
>>> 
>>>  ${BSPDIR}/sources/poky/meta-poky \
>>> 
>>>  \
>>> 
>>>  ${BSPDIR}/sources/meta-openembedded/meta-oe \
>>> 
>>>  ${BSPDIR}/sources/meta-openembedded/meta-multimedia \
>>> 
>>>  ${BSPDIR}/sources/meta-openembedded/meta-python \
>>> 
>>>  \
>>> 
>>>  ${BSPDIR}/sources/meta-freescale \
>>> 
>>>  ${BSPDIR}/sources/meta-freescale-3rdparty \
>>> 
>>>  ${BSPDIR}/sources/meta-freescale-distro \
>>> 
>>> "
>>> 
>>> 
>>> 
>>> # i.MX Yocto Project Release layers
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-imx/meta-bsp"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-imx/meta-sdk"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-imx/meta-ml"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-imx/meta-v2x"
>>> 
>>> #BBLAYERS += "${BSPDIR}/sources/meta-nxp-demo-experience"
>>> 
>>> 
>>> 
>>> #BBLAYERS += "${BSPDIR}/sources/meta-browser/meta-chromium"
>>> 
>>> #BBLAYERS += "${BSPDIR}/sources/meta-clang"
>>> 
>>> #BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-gnome"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-networking"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-filesystems"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-virtualization"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-teledyne-wapng"
>>> 
>>> BBLAYERS += "${BSPDIR}/sources/meta-aws"
>>> 
>>> 
>>> 
>>> Regards
>>> 
>>> Simon
>>> 
>>> 
>>> On Thu, Feb 23, 2023 at 12:03 PM Bruce Ashfield <[email protected]> 
>>> wrote:
>>>> 
>>>> On Wed, Feb 22, 2023 at 9:47 PM SIMON BABY <[email protected]> wrote:
>>>>> 
>>>>> Hello Team,
>>>>> 
>>>>> Could you tell me what changes are required in Yocto to run docker and its 
>>>>> dependencies on my target embedded system? I have added the changes below. 
>>>>> Do I need more plugins or packages?
>>>>> 
>>>>> bblayers.conf:
>>>>> 
>>>>> 
>>>>> 
>>>>> BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-networking"
>>>>> 
>>>>> BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-filesystems"
>>>>> 
>>>>> BBLAYERS += "${BSPDIR}/sources/meta-virtualization"
>>>>> 
>>>>> 
>>>>> 
>>>>> local.conf:
>>>>> 
>>>>> 
>>>>> 
>>>>> DISTRO_FEATURES:append = " virtualization"
>>>>> 
>>>>> IMAGE_INSTALL:append = " docker-ce"
>>>>> 
>>>> 
>>>> You are likely missing kernel configuration values required to run
>>>> containers.
>>>> 
>>>> What kernel are you using (linux-yocto?), and are you on the master
>>>> branch of the layers?
>>>> 
>>>> As you can see, it is working in my latest tests:
>>>> 
>>>> root@qemux86-64:~# docker --version
>>>> Docker version 23.0.1, build a5ee5b1dfc
>>>> root@qemux86-64:~# docker pull alpine
>>>> Using default tag: latest
>>>> latest: Pulling from library/alpine
>>>> 63b65145d645: Pull complete
>>>> Digest: 
>>>> sha256:69665d02cb32192e52e07644d76bc6f25abeb5410edc1c7a81a10ba3f0efb90a
>>>> Status: Downloaded newer image for alpine:latest
>>>> docker.io/library/alpine:latest
>>>> root@qemux86-64:~# docker run -it alpine /bin/sh
>>>> / #
>>>> 
>>>> Try adding "kernel-modules" to your IMAGE_INSTALL, and see if that
>>>> makes a difference.
>>>> 
>>>> Bruce
>>>> 
>>>> 
>>>>> 
>>>>> 
>>>>> With the above changes built and tested on the target, I am getting the 
>>>>> below error when I try to run "docker run hello-world":
>>>>> 
>>>>> 
>>>>> root@imx8mpevk:~# docker run hello-world
>>>>> DEBU[2023-02-23T00:53:57.064704083Z] Calling HEAD /_ping
>>>>> DEBU[2023-02-23T00:53:57.068355788Z] Calling POST /v1.41/containers/create
>>>>> DEBU[2023-02-23T00:53:57.069098805Z] form data: 
>>>>> {“AttachStderr”:true,“AttachStdin”:false,“AttachStdout”:true,“Cmd”:null,“Domainname”:“”,“Entrypoint”:null,“Env”:null,“HostConfig”:{“AutoRemove”:false,“Binds”:null,“BlkioDeviceReadBps”:null,“BlkioDeviceReadIOps”:null,“BlkioDeviceWriteBps”:null,“BlkioDeviceWriteIOps”:null,“BlkioWeight”:0,“BlkioWeightDevice”:,“CapAdd”:null,“CapDrop”:null,“Cgroup”:“”,“CgroupParent”:“”,“CgroupnsMode”:“”,“ConsoleSize”:[0,0],“ContainerIDFile”:“”,“CpuCount”:0,“CpuPercent”:0,“CpuPeriod”:0,“CpuQuota”:0,“CpuRealtimePeriod”:0,“CpuRealtimeRuntime”:0,“CpuShares”:0,“CpusetCpus”:“”,“CpusetMems”:“”,“DeviceCgroupRules”:null,“DeviceRequests”:null,“Devices”:,“Dns”:,“DnsOptions”:,“DnsSearch”:,“ExtraHosts”:null,“GroupAdd”:null,“IOMaximumBandwidth”:0,“IOMaximumIOps”:0,“IpcMode”:“”,“Isolation”:“”,“KernelMemory”:0,“KernelMemoryTCP”:0,“Links”:null,“LogConfig”:{“Config”:{},“Type”:“”},“MaskedPaths”:null,“Memory”:0,“MemoryReservation”:0,“MemorySwap”:0,“MemorySwappiness”:-1,“NanoCpus”:0,“NetworkMode”:“default”,“OomKillDisable”:false,“OomScoreAdj”:0,“PidMode”:“”,“PidsLimit”:0,“PortBindings”:{},“Privileged”:false,“PublishAllPorts”:false,“ReadonlyPaths”:null,“ReadonlyRootfs”:false,“RestartPolicy”:{“MaximumRetryCount”:0,“Name”:“no”},“SecurityOpt”:null,“ShmSize”:0,“UTSMode”:“”,“Ulimits”:null,“UsernsMode”:“”,“VolumeDriver”:“”,“VolumesFrom”:null},“Hostname”:“”,“Image”:“hello-world”,“Labels”:{},“NetworkingConfig”:{“EndpointsConfig”:{}},“OnBuild”:null,“OpenStdin”:false,“Platform”:null,“StdinOnce”:false,“Tty”:false,“User”:“”,“Volumes”:{},“WorkingDir”:“”}
>>>>> DEBU[25846.680992] docker0: port 1(veth659d267) entered blocking state
>>>>> [25846.681041] docker0: port 1(veth659d267) entered disabled state
>>>>> [2023-02-23T00:53:57.121358454Z] [25846.681312] device veth659d267 
>>>>> entered promiscuous mode
>>>>> container mounted via layerStore:[25846.681392] audit: type=1700 
>>>>> audit(1677113637.219:205): dev=veth659d267 prom=256 old_prom=0 
>>>>> auid=4294967295 uid=0 gid=0 ses=4294967295
>>>>> &{/var/lib/docker/overlay2/d664e[25846.683022] audit: type=1300 
>>>>> audit(1677113637.219:205): arch=c00000b7 syscall=206 success=yes exit=40 
>>>>> a0=d a1=4000c507b0 a2=28 a3=0 items=0 ppid=409 pid=1551 auid=4294967295 
>>>>> uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=ttymxc1 
>>>>> ses=4294967295 comm=“dockerd” exe=“/usr/bin/dockerd” key=(null)
>>>>> 7963d79b51cb1322f9995853ff56f54a3[25846.683091] audit: type=1327 
>>>>> audit(1677113637.219:205): 
>>>>> proctitle=2F7573722F62696E2F646F636B657264002D44
>>>>> aa2994ae5b99b3bcb65c33ec2f/merged 0xaaaabdb0b060 0xaaaabdb0b060} 
>>>>> container=4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190
>>>>> DEBU[2023-02-23T00:53:57.184741848Z] Calling POST 
>>>>> /v1.41/containers/4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190/attach?stderr=1&stdout=1&stream=1
>>>>> DEBU[2023-02-23T00:53:57.185112606Z] attach: stderr: begin
>>>>> DEBU[2023-02-23T00:53:57.185130357Z] attach: stdout: begin
>>>>> DEBU[2023-02-23T00:53:57.186340258Z] Calling POST 
>>>>> /v1.41/containers/4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190/wait?condition=next-exit
>>>>> DEBU[2023-02-23T00:53:57.188347802Z] Calling POST 
>>>>> /v1.41/containers/4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190/start
>>>>> DEBU[2023-02-23T00:53:57.190864983Z] container mounted via layerStore: 
>>>>> &{/var/lib/docker/overlay2/d664e7963d79b51cb1322f9995853ff56f54a3aa2994ae5b99b3bcb65c33ec2f/merged
>>>>>  0xaaaabdb0b060 0xaaaabdb0b060} 
>>>>> container=4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190
>>>>> DEBU[2023-02-23T00:53:57.191993758Z] Assigning addresses for endpoint 
>>>>> crazy_bell’s interface on network bridge
>>>>> DEBU[2023-02-23T00:53:57.192083760Z] 
>>>>> RequestAddress(LocalDefault/172.17.0.0/16, , map)
>>>>> DEBU[2023-02-23T00:53:57.192149761Z] Request address PoolID:172.17.0.0/16 
>>>>> App: ipam/default/data, ID: LocalDefault/172.17.0.0/16, DBIndex: 0x0, 
>>>>> Bits: 65536, Unselected: 65533, Sequence: (0xc0000000, 1)->(0x0, 
>>>>> 2046)->(0x1, 1)->end Curr:3 Serial:false PrefAddress:
>>>>> ERRO[2023-02-23T00:53:57.192262764Z] failed to set to initial namespace, 
>>>>> readlink /proc/1551/task/1555/ns/net: no such file or directory, initns 
>>>>> fd -1: bad file descriptor
>>>>> DEBU[2023-02-23T00:53:57.252893597Z] Assigning addresses for endpoint 
>>>>> crazy_bell’s interface on network bridge
>>>>> ERRO[2023-02-23T00:53:57.274329693Z] failed to set to initial namespace, 
>>>>> readlink /proc/1551/task/1555/ns/net: no such file or directory, initns 
>>>>> fd -1: bad file descriptor
>>>>> DEBU[2023-02-23T00:53:57.294111754Z] Programming external connectivity on 
>>>>> endpoint crazy_bell 
>>>>> (1a86f3778b61204dcc7106bed28728a001028ba51f5c5fe731042007ec0ebd3c)
>>>>> ERRO[2023-02-23T00:53:57.299150489Z] failed [25846.962844] docker0: port 
>>>>> 1(veth659d267) entered disabled state
>>>>> to set to initial namespace, readlink /proc/1551/task/1555/ns/net: no 
>>>>> such file or directory, initns fd -1: bad file descriptor
>>>>> DEBU[2023-02-23T00:53:57.304933242Z] EnableService 
>>>>> 4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190 START
>>>>> DEBU[2023-02-23T00:53:57.305002118Z] Enabl[25846.996647] device 
>>>>> veth659d267 left promiscuous mode
>>>>> eService 4f926f032e0566c4dbdfbb02[25846.996686] docker0: port 
>>>>> 1(veth659d267) entered disabled state
>>>>> [25846.996703] audit: type=1700 audit(1677113637.488:206): 
>>>>> dev=veth659d267 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 
>>>>> ses=4294967295
>>>>> 7787b42e6e19ef6e633864f09a4c9edbdb62d190 DONE
>>>>> DEBU[2023-02-23T00:53:57.313909564Z] bundle dir created 
>>>>> bundle=/var/run/docker/containerd/4f926f032e0566c4dbdfbb027787b42e6e19ef[25847.040986]
>>>>>  audit: type=1300 audit(1677113637.488:206): arch=c00000b7 syscall=206 
>>>>> success=yes exit=32 a0=d a1=4000ccd240 a2=20 a3=0 items=0 ppid=409 
>>>>> pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
>>>>> fsgid=0 tty=ttymxc1 ses=4294967295 comm=“dockerd” exe=“/usr/bin/dockerd” 
>>>>> key=(null)
>>>>> [25847.041004] audit: type=1327 audit(1677113637.488:206): 
>>>>> proctitle=2F7573722F62696E2F646F636B657264002D44
>>>>> 6e633864f09a4c9edbdb62d190 module=libcontainerd namespace=moby 
>>>>> root=/var/lib/docker/overlay2/d664e7963d79b51cb1322f9995853ff56f54a3aa2994ae5b99b3bcb65c33ec2f/merged
>>>>> ERRO[2023-02-23T00:53:57.445101824Z] stream copy error: reading from a 
>>>>> closed fifo
>>>>> ERRO[2023-02-23T00:53:57.445126200Z] stream copy error: reading from a 
>>>>> closed fifo
>>>>> DEBU[2023-02-23T00:53:57.445172451Z] attach: stderr: end
>>>>> DEBU[2023-02-23T00:53:57.445174576Z] attach: stdout: end
>>>>> DEBU[2023-02-23T00:53:57.445349705Z] attach done
>>>>> DEBU[2023-02-23T00:53:57.469084602Z] Revoking external connectivity on 
>>>>> endpoint crazy_bell 
>>>>> (1a86f3778b61204dcc7106bed28728a001028ba51f5c5fe731042007ec0ebd3c)
>>>>> ERRO[2023-02-23T00:53:57.469206980Z] failed to set to initial namespace, 
>>>>> readlink /proc/1551/task/1558/ns/net: no such file or directory, initns 
>>>>> fd -1: bad file descriptor
>>>>> ERRO[2023-02-23T00:53:57.475388115Z] failed to set to initial namespace, 
>>>>> readlink /proc/1551/task/1558/ns/net: no such file or directory, initns 
>>>>> fd -1: bad file descriptor
>>>>> ERRO[2023-02-23T00:53:57.489002290Z] failed to set to initial namespace, 
>>>>> readlink /proc/1551/task/1558/ns/net: no such file or directory, initns 
>>>>> fd -1: bad file descriptor
>>>>> DEBU[2023-02-23T00:53:57.587904715Z] Releasing addresses for endpoint 
>>>>> crazy_bell’s interface on network bridge
>>>>> DEBU[2023-02-23T00:53:57.610361084Z] 
>>>>> ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.2)
>>>>> DEBU[2023-02-23T00:53:57.619890544Z] Released address 
>>>>> PoolID:LocalDefault/172.17.0.0/16, Address:172.17.0.2 Sequence:App: 
>>>>> ipam/default/data, ID: LocalDefault/172.17.0.0/16, DBIndex: 0x0, Bits: 
>>>>> 65536, Unselected: 65532, Sequence: (0xe0000000, 1)->(0x0, 2046)->(0x1, 
>>>>> 1)->end Curr:3
>>>>> ERRO[2023-02-23T00:53:57.659608292Z] 
>>>>> 4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190 cleanup: 
>>>>> failed to delete container from containerd: no such container
>>>>> ERRO[2023-02-23T00:53:57.659718420Z] Handler for POST 
>>>>> /v1.41/containers/4f926f032e0566c4dbdfbb027787b42e6e19ef6e633864f09a4c9edbdb62d190/start
>>>>>  returned error: failed to create shim task: OCI runtime create failed: 
>>>>> runc create failed: unable to start container process: can’t get final 
>>>>> child’s PID from pipe: EOF: unknown
>>>>> docker: Error response from daemon: failed to create shim task: OCI 
>>>>> runtime create failed: runc create failed: unable to start container 
>>>>> process: can’t get final child’s PID from pipe: EOF: unknown.
>>>>> ERRO[0000] error waiting for container: context canceled
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> Regards
>>>>> 
>>>>> Simon
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> --
>>>> - Thou shalt not follow the NULL pointer, for chaos and madness await
>>>> thee at its end
>>>> - "Use the force Harry" - Gandalf, Star Trek II
>>> 
>>> 
>>> 
>>> 
> 
> 
> -- 
> - Thou shalt not follow the NULL pointer, for chaos and madness await
> thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#7902): 
https://lists.yoctoproject.org/g/meta-virtualization/message/7902
Mute This Topic: https://lists.yoctoproject.org/mt/97175886/21656
Group Owner: [email protected]
Unsubscribe: https://lists.yoctoproject.org/g/meta-virtualization/unsub 
[[email protected]]
-=-=-=-=-=-=-=-=-=-=-=-
