[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-07-18 Thread letonphat1988
On my side, I saw the OSD container trying to map and start on another "device mapper" device that was in READ-ONLY mode. You can check as follows. Step 1: check the directory that stores the OSD information; the path is /var/lib/ceph/{fsid}/osd.{id}/block. When we run `ls -lah` on that block path, we will see a symlink like this
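The check described above can be sketched as a few shell commands (the fsid and OSD id are placeholders for your cluster's values; `blockdev --getro` prints 1 when the resolved device is read-only, which would match the symptom described):

```shell
#!/bin/sh
# Sketch of step 1 above. FSID and OSD_ID are placeholders.
FSID="00000000-0000-0000-0000-000000000000"
OSD_ID=1
block="/var/lib/ceph/${FSID}/osd.${OSD_ID}/block"

ls -lah "$block"                 # shows the symlink, e.g. block -> /dev/mapper/ceph--...
target=$(readlink -f "$block")   # resolve the symlink to the real device node
echo "resolved target: $target"
blockdev --getro "$target"       # prints 1 if the device-mapper device is read-only
```

If the last command prints 1, the OSD is indeed sitting on a read-only device mapper target, which would explain the crash loop.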

[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-26 Thread Zakhar Kirpichenko
Hi Konstantin, Many thanks for your response! That is the funny part: the logs on both hosts do not indicate that anything happened to any devices at all, whether related to the OSDs that failed to start or otherwise. The only useful message was from the OSD debug logs: "debug -3>
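For reference, one way to pull an OSD's recent debug output on a cephadm-managed host is via journalctl, since cephadm runs each daemon as a systemd unit named after the cluster fsid (the fsid and OSD id below are placeholders, not values from this thread):

```shell
#!/bin/sh
# Placeholders: substitute your cluster fsid and the crashing OSD's id.
FSID="00000000-0000-0000-0000-000000000000"
OSD_ID=1
# cephadm wraps each daemon in a systemd unit named ceph-<fsid>@<daemon>.
journalctl -u "ceph-${FSID}@osd.${OSD_ID}" -n 100 --no-pager
```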

[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-26 Thread Stefan Kooman
On 1/26/23 02:33, Zakhar Kirpichenko wrote: Hi, Attempted to upgrade 16.2.10 to 16.2.11; 2 OSDs out of many started crashing in a loop on the very 1st host. I just upgraded a test cluster to 16.2.11 and did not observe this behavior. It all went smoothly (thx devs!). Just to add an upgrade

[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-25 Thread Konstantin Shalygin
Hi Zakhar, > On 26 Jan 2023, at 08:33, Zakhar Kirpichenko wrote: > > Jan 25 23:07:53 ceph01 bash[2553123]: >