[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-28 Thread Venky Shankar
Hey Yuri,

On Tue, Mar 29, 2022 at 3:18 AM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
>

[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-28 Thread Neha Ojha
On Mon, Mar 28, 2022 at 2:48 PM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pull/45604
>

[ceph-users] Re: PG down, due to 3 OSD failing

2022-03-28 Thread Dan van der Ster
Hi Fulvio,

You can check (offline) which PGs are on an OSD with the list-pgs op, e.g.

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/cephpa1-158/ --op list-pgs

The EC PGs have a naming convention like 85.25s1 etc. for the various k/m EC shards.

-- dan

On Mon, Mar 28, 2022 at 2:29 PM
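For context, a minimal sketch of what that offline check can look like, assuming the OSD daemon is stopped first and reusing the data path quoted above (the systemd unit name and the pool number in the grep are only placeholders):

  # stop the OSD so its object store can be opened offline
  systemctl stop ceph-osd@158

  # list every PG shard held by this OSD directly from its store
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/cephpa1-158/ --op list-pgs

  # EC shards appear as <pool>.<pg>s<shard>; e.g. keep only pool 85
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/cephpa1-158/ --op list-pgs | grep '^85\.'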

[ceph-users] PG down, due to 3 OSD failing

2022-03-28 Thread Fulvio Galeazzi
Hello, all of a sudden, 3 of my OSDs failed, showing similar messages in the log:

.
   -5> 2022-03-28 14:19:02.451 7fc20fe99700  5 osd.145 pg_epoch: 616454 pg[70.2c6s1( empty local-lis/les=612106/612107 n=0 ec=148456/148456 lis/c 612106/612106 les/c/f 612107/612107/0
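(As a rough illustration of how the affected PGs can be inspected from the cluster side while those OSDs are down; the PG id is taken from the log line above:)

  # which PGs are down/inactive and why
  ceph health detail

  # full peering state of the PG named in the log
  ceph pg 70.2c6 query

  # list all PGs currently in the 'down' state
  ceph pg ls down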

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-28 Thread Marc
>
> My use case would be a HA cluster where a VM is mapping an rbd image,
> and then it encounters some network issue. Another node of the HA
> cluster could start the VM and map again the image, but if the
> networking is fixed on the first VM that would keep using the already
> mapped
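A rough sketch of the usual fencing steps in that scenario, before the second node maps the image (the pool/image name, client address and lock details are placeholders; on older releases the command is spelled "ceph osd blacklist add"):

  # who holds the exclusive lock and who is still watching the image
  rbd lock ls rbd/vm-disk-1
  rbd status rbd/vm-disk-1

  # fence the unreachable client so it cannot write after the failover
  ceph osd blocklist add 192.168.1.10:0/123456789

  # drop the stale lock (lock id and locker come from 'rbd lock ls')
  rbd lock rm rbd/vm-disk-1 <lock-id> client.4567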

[ceph-users] Re: ceph mon failing to start

2022-03-28 Thread Dan van der Ster
Are the two running mons also running 14.2.9?

--- dan

On Mon, Mar 28, 2022 at 8:27 AM Tomáš Hodek wrote:
>
> Hi, I have 3 node ceph cluster (managed via proxmox). Got single node
> fatal failure and replaced it. OS boots correctly, however monitor on
> failed node did not start successfully;
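(A quick way to check that, sketched with placeholder mon IDs:)

  # versions of all running daemons, grouped by type
  ceph versions

  # or ask each monitor directly
  ceph tell mon.a version
  ceph tell mon.b version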

[ceph-users] Re: ceph mon failing to start

2022-03-28 Thread Eugen Block
Hi,

does the failed MON's keyring file contain the correct auth caps? Then I would also remove the local (failed) MON's store.db before rejoining.

Quoting Tomáš Hodek:

Hi, I have 3 node ceph cluster (managed via proxmox). Got single node fatal failure and replaced it. OS boots
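A minimal sketch of those two steps, assuming the default cluster name "ceph" and a mon ID of "node3" (both placeholders; paths may differ on proxmox-managed setups):

  # caps stored in the cluster vs. the keyring on the failed node
  ceph auth get mon.
  cat /var/lib/ceph/mon/ceph-node3/keyring

  # then drop the failed mon's local store before letting it rejoin
  systemctl stop ceph-mon@node3
  rm -rf /var/lib/ceph/mon/ceph-node3/store.db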

[ceph-users] Re: Changing PG size of cache pool

2022-03-28 Thread Eugen Block
Hi, understandable, but we played with the PGs of our rbd cache pool a couple of times, the last time was a year ago (although it only has roughly 200 GB of data in it). We haven't noticed any issues. But to be fair, we only use this one cache tier. And there's probably a reason why it's not
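For reference, a sketch of how such a change is applied (the pool name and target count are placeholders; on releases without the pg autoscaler, pgp_num has to be raised as well):

  # raise the PG count of the cache pool
  ceph osd pool set rbd-cache pg_num 128
  ceph osd pool set rbd-cache pgp_num 128

  # watch the resulting data movement settle
  ceph -s
  ceph osd pool get rbd-cache pg_num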