Correct, except it doesn't have to be a specific host or a specific
OSD. What matters here is whether the client is idle. As soon as the
client is woken up and sends a request to _any_ OSD, it receives a new
osdmap and applies it, possibly emitting those dmesg entries.
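A quick way to watch this happen is to compare the kernel client's osdmap epoch with the cluster's. This is a sketch, not from the thread: it assumes debugfs is mounted on the client and that the kernel client's debugfs directory (named `<fsid>.client<id>`) is present.

```shell
# On the client host: the kernel client exposes its view of the osdmap
# via debugfs; the first line includes the epoch it currently holds.
head -1 /sys/kernel/debug/ceph/*/osdmap

# On a cluster node: the current osdmap epoch according to the monitors.
ceph osd stat

# An idle client's epoch lags behind the cluster's and only catches up
# (possibly emitting those dmesg lines) once it sends a request to an OSD.
```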
Thanks for the explanation!
On Thu, Aug 30, 2018 at 1:04 PM Eugen Block wrote:
>
> Hi again,
>
> we still haven't figured out the reason for the flapping, but I wanted
> to get back to the dmesg entries.
> They just reflect what happened in the past; they're no indicator for
> predicting anything.
The kernel client is just that, a client.
Hi again,
we still haven't figured out the reason for the flapping, but I wanted
to get back to the dmesg entries.
They just reflect what happened in the past; they're no indicator for
predicting anything.
For example, when I changed the primary-affinity of OSD.24 last week,
one of the clients logged those dmesg entries shortly afterwards.
Update:
I changed the primary affinity of one OSD back to 1.0 to test if those
metrics change, and indeed they do:
OSD.24 immediately shows values greater than 0.
I guess the metrics are completely unrelated to the flapping.
So the search goes on...
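For anyone following along, the affinity change described above maps to the standard Ceph CLI. A minimal sketch (osd id taken from the thread, the grep is my own addition):

```shell
# Set the primary affinity of osd.24 back to 1.0 (the default).
ceph osd primary-affinity osd.24 1.0

# Verify: the PRI-AFF column for osd.24 should now read 1.00000.
ceph osd tree | grep -w osd.24
```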
Quoting Eugen Block:
An hour ago host5 started to report the OSDs on host4 as down (still
no clue why), resulting in slow requests. This time no flapping
occurred, and the cluster recovered a couple of minutes later. No other
OSDs reported that, only those two on host5. There's nothing in the
logs of the reporting OSDs.
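To see which OSDs filed those failure reports, the cluster log on a mon host is usually the quickest source. A sketch assuming the default log locations (paths and osd id are illustrative):

```shell
# Peer failure reports end up in the cluster log on the mon hosts.
grep "reported failed" /var/log/ceph/ceph.log | tail

# The reporting OSD also logs heartbeat trouble from its own side.
grep -i heartbeat /var/log/ceph/ceph-osd.21.log | tail
```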
Greg, thanks for your reply.
> So, this is actually just noisy logging from the client processing an
> OSDMap. That should probably be turned down, as it's not really an
> indicator of...anything...as far as I can tell.
I usually stick with the defaults:
host4:~ # ceph daemon osd.21 config show |
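The settings most relevant to flapping can be pulled out of the running daemons like this. The grep patterns and the mon name are my choices, not from the thread:

```shell
# Heartbeat and down-reporting defaults on a running OSD.
ceph daemon osd.21 config show | grep -E 'osd_heartbeat|mon_osd_min_down'

# The mon-side view of the same options (mon name assumed to match the
# short hostname, as is common).
ceph daemon mon.$(hostname -s) config show | grep -E 'mon_osd_min_down|mon_osd_reporter'
```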
On Wed, Aug 22, 2018 at 6:46 AM Eugen Block wrote:
> Hello *,
>
> we have an issue with a Luminous cluster (all 12.2.5, except one on
> 12.2.7) for RBD (OpenStack) and CephFS. This is the osd tree:
>
> host1:~ # ceph osd tree
> ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
> -1
Hello *,
we have an issue with a Luminous cluster (all 12.2.5, except one on
12.2.7) for RBD (OpenStack) and CephFS. This is the osd tree:
host1:~ # ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 22.57602 root default
-4  1.81998     host host5