These unfound progress events are from the cephadm module. More details are
in https://tracker.ceph.com/issues/65799
On Fri, Sep 29, 2023 at 7:42 AM Zakhar Kirpichenko wrote:
Many thanks for the clarification!
/Z
On Fri, 29 Sept 2023 at 16:43, Tyler Stachecki wrote:
On Fri, Sep 29, 2023, 9:40 AM Zakhar Kirpichenko wrote:
> Thanks for the suggestion, Tyler! Do you think switching the progress
> module off will have no material impact on the operation of the cluster?
>
It does not. It literally just tracks the completion rate of certain
actions so that it [...]
On Fri, 29 Sept 2023 at 14:13, Tyler Stachecki wrote:
On Fri, Sep 29, 2023, 5:55 AM Zakhar Kirpichenko wrote:
> Thank you, Eugen.
>
> Indeed it looks like the progress module had some stale events from the
> time when we added new OSDs and set a specific number of PGs for pools,
> while the autoscaler tried to scale them down. Somehow the scale-down
> events got stuck in the progress log, although these tasks have [...]
Hi,
this is from the mgr progress module [1]. I haven't played too much
with it yet; you can check out the output of 'ceph progress json',
maybe there are old events from a (failed) upgrade etc. You can reset
it with 'ceph progress clear', or turn it off entirely with 'ceph
progress off'.
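For reference, the commands mentioned above can be run against a live
cluster roughly as follows. This is only a sketch: it assumes a working
'ceph' client with admin credentials, and the 'jq' filter line is an
optional convenience, not something required by Ceph itself.

```shell
# Dump the progress module's event state as JSON
# (stale/stuck events show up here with their messages).
ceph progress json

# Optional: pull out just the event messages for a quick look
# (assumes jq is installed; the 'events' field name is from the
# progress module's JSON output).
ceph progress json | jq '.events[].message'

# Clear all progress events, including stuck ones.
ceph progress clear

# Or switch progress reporting off entirely; it can be
# re-enabled later with 'ceph progress on'.
ceph progress off
```

Note that these commands require a reachable cluster and an active mgr;
they are not something you can try against an offline node.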