Dear All,
I've recently set up a 3-node Ceph Quincy (17.2) cluster to serve a pair of
CephFS mounts for a Slurm cluster. Each Ceph node has 6 x SSD and 6 x HDD, and
I've set up the pools and CRUSH rules to create separate CephFS filesystems
using the different disk classes. I used the default era
I'll report a documentation bug in the hope that they clarify things (I know
of at least one other admin who hit the same issue I'm seeing, so I'm not the
only one...).
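For reference, the per-class setup looks roughly like the following (the rule
names, pool names and PG counts below are placeholders rather than my exact
values):

  ceph osd crush rule create-replicated rule-ssd default host ssd
  ceph osd crush rule create-replicated rule-hdd default host hdd
  ceph osd pool create cephfs_ssd_meta 64 64 replicated rule-ssd
  ceph osd pool create cephfs_ssd_data 128 128 replicated rule-ssd
  ceph osd pool create cephfs_hdd_meta 64 64 replicated rule-hdd
  ceph osd pool create cephfs_hdd_data 128 128 replicated rule-hdd
  ceph fs new cephfs_ssd cephfs_ssd_meta cephfs_ssd_data
  ceph fs new cephfs_hdd cephfs_hdd_meta cephfs_hdd_data

(Depending on the release, you may also need 'ceph fs flag set enable_multiple
true' before creating the second filesystem.)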
Cheers,
Mark
From: Danny Webb
Sent: 25 July 2022 14:32
To: Mark S. Holliman; ceph-users@ceph.io
Subject: Re: Defa
Hi all,
I have a large distributed Ceph cluster that recently broke, with all PGs
housed at a single site getting marked as 'unknown' after a run of the Ceph
Ansible playbook (which was being used to expand the cluster at a third site).
Is there a way to recover the location of PGs in this state?
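For reference, this is roughly how I've been inspecting them (the pgid 2.1f
below is just an example, not one of my real PGs):

  ceph health detail                      # lists the PGs reported as 'unknown'
  ceph pg dump pgs_brief | grep unknown   # same information in bulk
  ceph pg map 2.1f                        # where CRUSH currently maps that PG
  ceph pg 2.1f query                      # may not return while the PG is 'unknown'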
So I've managed to use ceph-objectstore-tool to locate the PGs in 'unknown'
state on the OSDs, but how do I tell the rest of the system where to find them?
Is there a command for setting the OSDs associated with a PG? Or, less ideally,
is there a table somewhere I can hack to do this by hand?
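In case it helps, this is roughly how I listed the PGs held on one of the
OSDs (the OSD id, systemd unit name and data path are examples from a
package-based install, not my exact layout; the OSD has to be stopped first):

  systemctl stop ceph-osd@12
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs
  systemctl start ceph-osd@12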