I found something that I think could be interesting (please remember I'm
new to Ceph :)
There are 3 pools in the cluster:
[xxx@ceph02 ~]$ sudo ceph --cluster xxx osd pool ls
xxx-pool
foo_data
foo_metadata
xxx-pool is empty (it contains no data), yet it holds the bulk of the PGs:
[xxx@ceph02 ~]$ sudo ceph
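(The per-pool PG counts should be visible with something like the following,
which lists pg_num for each pool:)

[xxx@ceph02 ~]$ sudo ceph --cluster xxx osd pool ls detail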
Hello Mehmet,
On Sat, Sep 3, 2022 at 1:50 PM wrote:
> Is ceph still backfilling? What is the actual output of ceph -s?
>
Yes:
[trui...@ceph02.eun ~]$ sudo ceph --cluster xxx -s
cluster:
id: 91ba1ea6-bfec-4ddb-a8b5-9faf842f22c3
health: HEALTH_WARN
1 backfillfull
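(To see which OSD is backfillfull, and how utilization and PG counts compare
across OSDs, something like the following should work; ceph health detail
should name the affected OSD:)

[trui...@ceph02.eun ~]$ sudo ceph --cluster xxx health detail
[trui...@ceph02.eun ~]$ sudo ceph --cluster xxx osd df tree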
Hi,
Is ceph still backfilling? What is the actual output of ceph -s?
If it is not backfilling, it is strange that you only have 84 PGs on osd.11
but 93.59 percent in use...
Are you able to find a PG on osd.11 which is too big?
Perhaps a pg query will help to find it. Otherwise you should lower the weight
of osd.11.
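For example, something like this (a sketch: <pgid> is a placeholder for a PG
on osd.11, and 0.90 is just an illustrative weight):

sudo ceph --cluster xxx pg <pgid> query
sudo ceph --cluster xxx osd reweight 11 0.90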
Hello Stefan,
Thank you for your answer.
On Fri, Sep 2, 2022 at 5:27 PM Stefan Kooman wrote:
> On 9/2/22 15:55, Oebele Drijfhout wrote:
> > Hello,
> >
> > I'm new to Ceph and I recently inherited a 4-node cluster with 32 OSDs
> > and about 116TB raw space, which shows low available space,