Hi,
I recently took our test cluster up to a new version and am no longer able to
start radosgw. The cluster itself (mon, osd, mgr) appears fine.
Without being much of an expert at reading this, from the errors that were
being thrown it seems like the object expirer is choking on handling
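In case it helps with debugging, I have been running the gateway in the foreground with RGW logging turned up to see where it dies. This assumes a fairly standard deployment; the client name below is just a placeholder for your actual rgw instance name:

```shell
# Run radosgw in the foreground with verbose RGW and messenger logging.
# "client.rgw.gateway1" is a placeholder -- substitute your rgw instance name.
radosgw -f --debug-rgw=20 --debug-ms=1 -n client.rgw.gateway1
```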
Hi,
Our cluster has large omap objects in the .log pool. Recent changes to the
default warning thresholds brought this to our attention. Automatic resharding of rgw
buckets seems to have helped with all of our other large omap warnings
elsewhere.
I guess my first question is what sort of things
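For anyone looking into the same thing, these are the commands I have been using to track down which objects are over the threshold. The object name in the second command is just an example; substitute one reported in the warning:

```shell
# Surface which pools/PGs triggered the large omap warning.
ceph health detail | grep -i 'large omap'

# Count omap keys on a suspect object in the .log pool.
# "obj_name" is a placeholder -- use an object named in the warning.
rados -p .log listomapkeys obj_name | wc -l
```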
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/037909.html
>
> I saw the same after an update from Luminous to Nautilus 14.2.6
>
> Cheers, Massimo
>
> On Tue, Jan 14
e such big objects around
>
> I am also wondering what a pg repair would do in such a case
>
> On Wed, Jan 15, 2020 at 16:18 Liam Monahan <l...@umiacs.umd.edu> wrote:
> Thanks for that link.
>
> Do you have a default osd max object size of 128M? I’m think
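For anyone wanting to check that limit on their own cluster, it can be queried from a running OSD's admin socket (osd.0 below is just an example; the default is 128M on recent releases as far as I know):

```shell
# Check the configured maximum object size on one OSD.
# "osd.0" is a placeholder -- any OSD running on the local host works.
ceph daemon osd.0 config get osd_max_object_size
```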
Hi,
I am getting one inconsistent object on our cluster with an inconsistency error
that I haven’t seen before. This started happening during a rolling upgrade of
the cluster from 14.2.3 -> 14.2.6, but I am not sure that’s related.
I was hoping to know what the error means before trying a
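For reference, this is how I have been inspecting it so far. The PG id below is only an example; substitute the one reported by `ceph health detail`:

```shell
# Find which PG is flagged inconsistent.
ceph health detail | grep -i inconsistent

# Dump the scrub errors recorded for that PG.
# "7.2f" is a placeholder PG id -- use the one from health detail.
rados list-inconsistent-obj 7.2f --format=json-pretty
```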