On Thu, Dec 21, 2017 at 3:04 PM, Serguei Bezverkhi (sbezverk)
<sbezv...@cisco.com> wrote:
> Hi Ilya,
>
> Here you go, no k8s services running this time:
>
> sbezverk@kube-4:~$ sudo rbd map raw-volume --pool kubernetes --id admin -m 192.168.80.233 --key=AQCeHO1ZILPPDRAA7zw3d76bplkvTwzoosybvA==
> /dev/rbd0
> sbezverk@kube-4:~$ sudo rbd status raw-volume --pool kubernetes --id admin -m 192.168.80.233 --key=AQCeHO1ZILPPDRAA7zw3d76bplkvTwzoosybvA==
> Watchers:
>         watcher=192.168.80.235:0/3465920438 client.65327 cookie=1
> sbezverk@kube-4:~$ sudo rbd info raw-volume --pool kubernetes --id admin -m 192.168.80.233 --key=AQCeHO1ZILPPDRAA7zw3d76bplkvTwzoosybvA==
> rbd image 'raw-volume':
>         size 10240 MB in 2560 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rb.0.fafa.625558ec
>         format: 1
> sbezverk@kube-4:~$ sudo reboot
>
> sbezverk@kube-4:~$ sudo rbd status raw-volume --pool kubernetes --id admin -m 192.168.80.233 --key=AQCeHO1ZILPPDRAA7zw3d76bplkvTwzoosybvA==
> Watchers: none
>
> It seems that when the image is mapped manually, this issue is not reproducible.
>
> K8s does not just map the image; it also creates a loopback device that is
> linked to /dev/rbd0. Maybe this somehow prompts the rbd client to re-establish
> a watcher on reboot. I will try to manually mimic the exact steps k8s follows
> to see what exactly keeps a watcher active after reboot; a rough sketch of
> those steps is below.
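>
> Something like this (a sketch only; I am assuming k8s backs the loop device
> with losetup, and /dev/loop0 is illustrative):
>
> sbezverk@kube-4:~$ sudo rbd map raw-volume --pool kubernetes --id admin -m 192.168.80.233 --key=AQCeHO1ZILPPDRAA7zw3d76bplkvTwzoosybvA==
> /dev/rbd0
> sbezverk@kube-4:~$ # attach a loop device on top of the mapped rbd device, as k8s presumably does
> sbezverk@kube-4:~$ sudo losetup -f --show /dev/rbd0
> /dev/loop0
> sbezverk@kube-4:~$ sudo reboot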

To confirm, I'd also make sure that nothing runs "rbd unmap" on all
images (or some subset of images) during shutdown in the manual case.
Either do a hard reboot or rename /usr/bin/rbd to something else before
running reboot.
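
For example (a sketch; the second variant assumes the magic SysRq key is
enabled via the kernel.sysrq sysctl):

    # hide the rbd CLI so no shutdown script can run "rbd unmap"
    $ sudo mv /usr/bin/rbd /usr/bin/rbd.disabled
    $ sudo reboot

    # or bypass the clean shutdown path entirely with a hard reboot
    $ echo b | sudo tee /proc/sysrq-trigger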

Thanks,

                Ilya