Thanks for the pointer, Dave,

In my case, though, the problem turned out to be the old Docker version (18.x) provided by the OS repos. Installing the latest docker-ce from docker.com resolved the problem. It would be nice, though, if the host were checked for compatibility before starting an upgrade.
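For what it's worth, the kind of pre-flight check I have in mind could be as small as this (a hypothetical sketch only; the `docker_ok` helper and the 19.x cut-off are my assumptions based on this thread, not anything cephadm actually ships):

```shell
# Hypothetical pre-flight check: refuse to start the upgrade if the host's
# Docker is older than a known-good major release. The 19.x threshold is an
# assumption drawn from this thread (Debian 10's packaged 18.x failed,
# current docker-ce worked).
docker_ok() {
    major=${1%%.*}          # take the major version from "major.minor.patch"
    [ "$major" -ge 19 ]     # succeed only for 19.x and newer
}

version=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)
if docker_ok "$version"; then
    echo "docker $version: ok to upgrade"
else
    echo "docker $version: too old, install docker-ce from docker.com first"
fi
```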



On 14.04.2021 at 13:15, Dave Hall wrote:
Radoslav,

I ran into the same thing.  For Debian 10 with recent updates, you have to add 'cgroup_enable=memory swapaccount=1' to the kernel command line (in /etc/default/grub).  The reference I found said that Debian decided to disable this by default and make us turn it on if we want to run containers.
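In case it helps the archive, the change above amounts to something like the following (a sketch only; check your existing GRUB_CMDLINE_LINUX_DEFAULT value before editing, and the example "quiet" flag is just a placeholder for whatever is already there):

```shell
# /etc/default/grub -- append the cgroup flags to the existing default line,
# e.g. if it currently reads GRUB_CMDLINE_LINUX_DEFAULT="quiet":
GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory swapaccount=1"

# afterwards, regenerate /boot/grub/grub.cfg and reboot so the new
# kernel command line takes effect:
#   update-grub && reboot
```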

-Dave

--
Dave Hall
Binghamton University
[email protected]

On Wed, Apr 14, 2021 at 12:51 PM Radoslav Milanov <[email protected]> wrote:

    Hello,

    The cluster is 3 nodes running Debian 10. I started a cephadm upgrade
    on a healthy 15.2.10 cluster. The managers were upgraded fine, then the
    first monitor went down for its upgrade and never came back. Looking at
    the unit files, the container fails to run because of an error:

    root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1# cat unit.run

    set -e
    /usr/bin/install -d -m0770 -o 167 -g 167 /var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6
    # mon.host1
    ! /usr/bin/docker rm -f ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 2> /dev/null
    /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/ceph-mon --privileged --group-add=disk --init --name ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v /var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -n mon.host1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug ' --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true

    root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1# /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/ceph-mon --privileged --group-add=disk --init --name ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v /var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z -v /var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a -n mon.host1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug ' --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true


    /usr/bin/docker: Error response from daemon: OCI runtime create failed:
    container_linux.go:344: starting container process caused "exec:
    \"/dev/init\": stat /dev/init: no such file or directory": unknown.

    Any suggestions on how to resolve this?

    Thank you.
    _______________________________________________
    ceph-users mailing list -- [email protected]
    To unsubscribe send an email to [email protected]
