On Fri, Oct 4, 2019 at 6:09 PM Marc Roos wrote:
>
> >
> >Try something like the following on each OSD that holds a copy of
> >rbd_data.1f114174b0dc51.0974 and see what output you get.
> >Note that you can drop the bluestore flag if they are not bluestore
> >osds and you will need
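Something like this, I assume, with the OSD taken down first (osd.12 is just a
placeholder id here, and --type bluestore is the flag that can be dropped for
non-bluestore OSDs):

# systemctl stop ceph-osd@12
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --type bluestore --op list rbd_data.1f114174b0dc51.0974
# systemctl start ceph-osd@12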
Hello.
I've created an RGW installation and uploaded about 60M files into a
single bucket. Removal looked like it would be a long adventure, so I ran
"ceph osd pool rm" on both default.rgw.data and default.rgw.index.
Now I have this:
# rados lspools
.rgw.root
default.rgw.control
default.rgw.meta
Hi,
I get the following weird negative object count on a cache tier. Why is this
happening, and how do I get back to normal?
Best regards,
[root@management-a ~]# ceph df detail
GLOBAL:
    SIZE    AVAIL    RAW USED    %RAW USED    OBJECTS
    446T    184T     261T        58.62        22092k
POOLS:
Hi,
the default for this warning changed recently (see other similar
threads on the mailing list); it was 2 million before 14.2.3.
I don't think the new default of 200k is a good choice, so increasing
it is a reasonable workaround.
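Assuming the warning in question is the "large omap objects" one (the thread
doesn't name the option explicitly), the setting whose default dropped from
2 million to 200k in 14.2.3 should be
osd_deep_scrub_large_omap_object_key_threshold, so something like

# ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

would restore the old behaviour.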
Paul
--
Paul Emmerich
Hi,
there is also /var/log/ceph/ceph.log on the MONs; it has the stats
you're asking for. Does this answer your question?
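For example, on one of the MON hosts something like this should show the
pgmap summaries (assuming the default log path):

# grep pgmap /var/log/ceph/ceph.log | tail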
Regards,
Eugen
Quoting nokia ceph:
Hi Team,
With default log settings, the Ceph stats will be logged like
cluster [INF] pgmap v30410386: 8192 pgs: 8192
Hi Team,
With default log settings, the Ceph stats will be logged like
cluster [INF] pgmap v30410386: 8192 pgs: 8192 active+clean; 445 TB data,
1339 TB used, 852 TB / 2191 TB avail; 188 kB/s rd, 217 MB/s wr, 1618 op/s
Jewel: in the mon logs
Nautilus: in the mgr logs
Luminous: not able to view
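For reference, by "default log settings" I mean we haven't touched the cluster
log options; if I'm not mistaken, the relevant ones are roughly these in
ceph.conf (exact names may need double-checking):

[mon]
mon cluster log to file = true
mon cluster log file level = info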
Hi,
unfortunately it's a single mon, because we had a major outage on this cluster
and it's just being used to copy data off now. We weren't able to add more mons,
because once a second mon was added it crashed the first one (there's a bug
tracker ticket).
I still have the old RocksDB files from before I ran