Hi Jared,
did you find a solution to your problem? It appears that I have the
same OSD problem, and tcpdump captures don't point to any cause.
All OSD nodes produce log entries like:
2017-12-14 11:25:11.756552 7f0cc5905700 -1 osd.49 29546 heartbeat_check:
no reply from 172.16.5.155:6817
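In case it's useful, here is a rough sketch of how I'm pulling the set of unresponsive peers out of the OSD logs so each one can be probed separately. It assumes the log lines end with the peer address, as in the excerpt above (the sample line fed to the pipeline here is just the one from my own log):

```shell
# Extract the unique peer addresses that never answered heartbeats.
# Assumes each heartbeat_check line ends with "no reply from <addr:port>",
# matching the format of the log excerpt above.
grep 'heartbeat_check: no reply from' /var/log/ceph/ceph-osd.*.log \
  | awk '{print $NF}' \
  | sort -u
```

Each address that comes out can then be checked for reachability from the reporting node (e.g. with `nc -zv <addr> <port>`) to separate network problems from OSD-side ones.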
On Fri, Jul 28, 2017 at 6:06 AM, Jared Watts wrote:
> I’ve got a cluster where a bunch of OSDs are down/out (only 6/21 are up/in).
> ceph status and ceph osd tree output can be found at:
>
> https://gist.github.com/jbw976/24895f5c35ef0557421124f4b26f6a12
>
>
>
> In osd.4 log, I see many of these:
>
> 2017-07-27 19:38:53.468852 7f3855c1c700 -1 osd.4 120