On my side, I saw the OSD container trying to map and start on another "device
mapper" device that was in "READ-ONLY" mode. You could check as follows:
Step 1: check the folder that stores the OSD information.
The path is /var/lib/ceph/{fsid}/osd.{id}/block; when we run `ls -lah {block}`,
we will get a symlink like this:
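For illustration, a sketch of what that check might look like on a cephadm
host; the fsid, OSD id, and LV names below are placeholders, not taken from
this thread:

    # hypothetical fsid and OSD id; adjust to your cluster
    $ ls -lah /var/lib/ceph/<fsid>/osd.3/block
    lrwxrwxrwx 1 ceph ceph 24 Jan 25 23:00 block -> /dev/mapper/ceph--<vg>-osd--block--<lv>

    # RO=1 means the device-mapper device was set up read-only
    $ lsblk -o NAME,RO /dev/mapper/ceph--<vg>-osd--block--<lv>

    # or query the read-only flag directly (prints 1 if read-only)
    $ blockdev --getro /dev/mapper/ceph--<vg>-osd--block--<lv>

If the mapped device reports RO=1, that matches the read-only symptom
described above.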
Hi Konstantin,
Many thanks for your response! That is the funny part: the logs on both
hosts do not indicate that anything happened to any devices at all,
whether related to the OSDs that failed to start or otherwise. The only
useful message was from the OSD debug logs:
"debug -3>
On 1/26/23 02:33, Zakhar Kirpichenko wrote:
Hi,
I attempted to upgrade from 16.2.10 to 16.2.11, and 2 OSDs out of many
started crashing in a loop on the very first host:
I just upgraded a test cluster to 16.2.11 and did not observe this
behavior. It all went smoothly (thx devs!).
Just to add an upgrade
Hi Zakhar,
> On 26 Jan 2023, at 08:33, Zakhar Kirpichenko wrote:
>
> Jan 25 23:07:53 ceph01 bash[2553123]:
>