2018-07-10 14:37 GMT+02:00 Jason Dillaman <[email protected]>:

> On Tue, Jul 10, 2018 at 2:37 AM Kevin Olbrich <[email protected]> wrote:
>
>> 2018-07-10 0:35 GMT+02:00 Jason Dillaman <[email protected]>:
>>
>>> Is the link-local address of "fe80::219:99ff:fe9e:3a86%eth0" at least
>>> present on the client computer you used? I would have expected the OSD to
>>> determine the client address, so it's odd that it was able to get a
>>> link-local address.
>>>
>>
>> Yes, it is. eth0 is part of bond0, which is a VLAN trunk. bond0.X is
>> attached to brX, which has a ULA prefix for the Ceph cluster.
>> eth0 has no address itself, so the address must have been carried down
>> to the hardware interface.
>>
>> I am wondering why it uses the link-local address when a ULA prefix is
>> available.
>>
>> The address is available on brX on this client node.
>>
>
> I'll open a tracker ticket to get that issue fixed, but in the meantime,
> you can run "rados -p <IMAGE POOL> rmxattr rbd_header.<IMAGE ID>
> lock.rbd_lock" to remove the lock.
>

Worked perfectly, thank you very much!
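For anyone hitting the same stale-lock error later, the workaround above can be sketched roughly as follows. The pool and image names are the ones from this thread; reading the internal image ID out of the `block_name_prefix` line of `rbd info` is an assumption about where that ID comes from, so verify it against your own output before running the `rmxattr`:

```shell
#!/bin/sh
# Pool/image names taken from this thread -- substitute your own.
POOL=rbd_vms_hdd
IMAGE=fpi_server02

# Assumption: the internal image ID is the suffix of "block_name_prefix:
# rbd_data.<ID>" in the "rbd info" output.
IMAGE_ID=$(rbd info "${POOL}/${IMAGE}" \
    | awk -F'rbd_data.' '/block_name_prefix/ {print $2}')

# Remove the stale lock xattr directly from the image header object.
rados -p "${POOL}" rmxattr "rbd_header.${IMAGE_ID}" lock.rbd_lock
```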


>
>> - Kevin
>>
>>
>>> On Mon, Jul 9, 2018 at 3:43 PM Kevin Olbrich <[email protected]> wrote:
>>>
>>>> 2018-07-09 21:25 GMT+02:00 Jason Dillaman <[email protected]>:
>>>>
>>>>> BTW -- are you running Ceph on a one-node computer? I thought IPv6
>>>>> addresses starting w/ fe80 were link-local addresses, which would probably
>>>>> explain why an interface scope id was appended. The current IPv6 address
>>>>> parser stops reading once it encounters a character that is neither a hex
>>>>> digit nor a colon [1].
>>>>>
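As a rough illustration of the parsing behaviour described above, here is a minimal Python sketch (not Ceph's actual C++ parser): it reads the leading run of hex digits and colons and stops at the first other character, so the "%eth0" scope id suffix truncates the parse:

```python
import string

# Characters the sketch accepts: hex digits and ':'.
_ALLOWED = set(string.hexdigits) | {":"}

def parse_ipv6_prefix(s: str) -> str:
    """Return the leading run of hex digits and colons, stopping at the
    first character that is neither -- a sketch of the behaviour above."""
    out = []
    for ch in s:
        if ch not in _ALLOWED:
            break
        out.append(ch)
    return "".join(out)

# The "%eth0" scope id halts the parse partway through the address:
print(parse_ipv6_prefix("fe80::219:99ff:fe9e:3a86%eth0"))
# -> fe80::219:99ff:fe9e:3a86
```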
>>>>
>>>> No, this is a compute machine attached to the storage VLAN; it
>>>> previously also had local disks.
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Mon, Jul 9, 2018 at 3:14 PM Jason Dillaman <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hmm ... it looks like there is a bug w/ RBD locks and IPv6 addresses
>>>>>> since it is failing to parse the address as valid. Perhaps it's barfing 
>>>>>> on
>>>>>> the "%eth0" scope id suffix within the address.
>>>>>>
>>>>>> On Mon, Jul 9, 2018 at 2:47 PM Kevin Olbrich <[email protected]> wrote:
>>>>>>
>>>>>>> Hi!
>>>>>>>
>>>>>>> I tried to convert a qcow2 file to RBD and set the wrong pool. I
>>>>>>> immediately stopped the transfer, but the image is stuck locked:
>>>>>>>
>>>>>>> Previously, when that happened, I was able to remove the image after
>>>>>>> 30 seconds.
>>>>>>>
>>>>>>> [root@vm2003 images1]# rbd -p rbd_vms_hdd lock list fpi_server02
>>>>>>> There is 1 exclusive lock on this image.
>>>>>>> Locker         ID                  Address
>>>>>>>
>>>>>>> client.1195723 auto 93921602220416 [fe80::219:99ff:fe9e:3a86%eth0]:0/1200385089
>>>>>>>
>>>>>>> [root@vm2003 images1]# rbd -p rbd_vms_hdd lock rm fpi_server02 "auto 93921602220416" client.1195723
>>>>>>> rbd: releasing lock failed: (22) Invalid argument
>>>>>>> 2018-07-09 20:45:19.080543 7f6c2c267d40 -1 librados: unable to parse
>>>>>>> address [fe80::219:99ff:fe9e:3a86%eth0]:0/1200385089
>>>>>>> 2018-07-09 20:45:19.080555 7f6c2c267d40 -1 librbd: unable to
>>>>>>> blacklist client: (22) Invalid argument
>>>>>>>
>>>>>>> The image is not in use anywhere!
>>>>>>>
>>>>>>> How can I force removal of all locks for this image?
>>>>>>>
>>>>>>> Kind regards,
>>>>>>> Kevin
>>>>>>> _______________________________________________
>>>>>>> ceph-users mailing list
>>>>>>> [email protected]
>>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Jason
>>>>>>
>>>>>
>>>>> [1] https://github.com/ceph/ceph/blob/master/src/msg/msg_types.cc#L108
>>>>>
>>>>> --
>>>>> Jason
>>>>>
>>>>
>>>>
>>>
>>> --
>>> Jason
>>>
>>
>>
>
> --
> Jason
>
