Hi Alex,

I saw your thread, but I think my case is a little different: so far I have
seen this message only once, and I would like to understand the issue better.
In particular, I would like to find out whether there are any tunable
parameters that could be adjusted to influence this behavior.
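For example, this is roughly what I have in mind (just a sketch, assuming the
ceph CLI is available on the node and that there is a local daemon admin
socket to query; "osd.0" below is only a placeholder):

    #!/usr/bin/env python
    # Sketch: list the messenger ("ms_") tunables of a running Ceph daemon
    # through its admin socket. "osd.0" is a placeholder daemon name.
    import json
    import subprocess

    # "ceph daemon <name> config show" prints the daemon's current
    # configuration as JSON.
    out = subprocess.check_output(["ceph", "daemon", "osd.0", "config", "show"])
    cfg = json.loads(out)

    # Messenger options all start with "ms_"; these govern connection
    # behavior, which is presumably where lossy connections are handled.
    for key in sorted(cfg):
        if key.startswith("ms_"):
            print("%s = %s" % (key, cfg[key]))

If some of these options influence the lossy connection handling, that is
exactly what I am hoping to learn about.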

Kind regards,
Laszlo

On 12.04.2017 22:19, Alex Gorbachev wrote:
Hi Laszlo,

On Wed, Apr 12, 2017 at 6:26 AM Laszlo Budai <[email protected]> wrote:

    Hello,

    Yesterday, one of our compute nodes recorded the following message for
    one of its Ceph connections:

    submit_message osd_op(client.28817736.0:690186
    rbd_data.15c046b11ab57b7.00000000000000c4 [read 2097152~380928] 3.6f81364a
    ack+read+known_if_redirected e3617) v5 remote, 10.12.68.71:6818/6623,
    failed lossy con, dropping message

    Can someone "decode" the above message, or direct me to some document
    where I could read more about it?

    We are running Ceph 0.94.10 (Hammer).


I am researching the same issue, but on 11 OSD nodes; you should be able to
find my thread on this list. It looks like this could be a bug in kernels
4.4 and later, or a network issue.
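
One quick way to separate the two would be to watch the TCP retransmission
counters on the affected nodes; rising retransmits would point at the
network. A minimal sketch (Linux only, reading the stock /proc/net/snmp
counters; nothing Ceph-specific is assumed):

    #!/usr/bin/env python
    # Sketch: read Linux TCP counters from /proc/net/snmp. A high or rising
    # RetransSegs/OutSegs ratio on the affected nodes would support the
    # network-issue hypothesis rather than a kernel bug.
    def tcp_counters():
        with open("/proc/net/snmp") as f:
            rows = [line.split() for line in f if line.startswith("Tcp:")]
        # The first "Tcp:" row holds the field names, the second the values.
        return dict(zip(rows[0][1:], [int(v) for v in rows[1][1:]]))

    c = tcp_counters()
    ratio = 100.0 * c["RetransSegs"] / max(c["OutSegs"], 1)
    print("OutSegs=%d RetransSegs=%d (%.3f%% retransmitted)"
          % (c["OutSegs"], c["RetransSegs"], ratio))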

Regards,
Alex

    Thank you,
    Laszlo

--
Alex Gorbachev
Storcium
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
