My guess, without logs, is that the osd was purging PGs that had been
removed previously but not fully deleted from disk. Bugs like that have
been fixed recently, and PG removal can be intense unless you run the
latest releases.

Next time you have an unexplained busy osd, inject debug_osd=10 to see what
it's doing.
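A minimal sketch of that injection, assuming a recent release that supports `ceph tell ... config set`; osd.12 is a placeholder id, not one from this thread:

```shell
# Raise the osd's debug level at runtime, no restart needed
# (osd.12 is a placeholder -- use the busy osd's actual id):
ceph tell osd.12 config set debug_osd 10

# Watch what the osd is doing while the load persists:
tail -f /var/log/ceph/ceph-osd.12.log

# Restore the default level afterwards, since debug 10 is chatty:
ceph tell osd.12 config set debug_osd 1/5
```

On older releases, `ceph tell osd.12 injectargs --debug-osd 10` achieves the same thing.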

.. Dan



On Thu, 2 Sep 2021, 22:49 Marc, <[email protected]> wrote:

>
> I was told there was a power loss at the datacenter. Anyway, all ceph nodes
> lost power, and just turning them on was enough to get everything back online,
> no problems at all. However, I had one disk/osd under high load for a day.
>
> I guess this must have been some check by ceph? How can I see this,
> because I do not see anything in the logs when I grep case-insensitively
> for error or warn. Should there not be some warning or error logged when
> an osd is fully utilized like this? I do not think it was a normal
> scrub/deep-scrub. The number of 'rocksdb', 'bdev' and 'bluefs' lines in
> this osd log and in others is roughly similar.
>
>
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
