Casey,
  These clusters were set up with the intention of one day doing multisite
replication, but that has never happened. The cluster has a single realm,
which contains a single zonegroup, and that zonegroup contains a single zone.
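
For what it's worth, this is roughly how I confirmed that layout (output
trimmed; just the list commands, nothing fancy):

$ radosgw-admin realm list
$ radosgw-admin zonegroup list
$ radosgw-admin zone list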

-Brett

On Thu, Jul 25, 2019 at 2:16 PM Casey Bodley <cbod...@redhat.com> wrote:

> Hi Brett,
>
> These meta.log objects store the replication logs for metadata sync in
> multisite. Log entries are trimmed automatically once all other zones
> have processed them. Can you verify that all zones in the multisite
> configuration are reachable and syncing? Does 'radosgw-admin sync
> status' on any zone show that it's stuck behind on metadata sync? That
> would prevent these logs from being trimmed and result in these large
> omap warnings.
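>
> As a rough first check (just a sketch, nothing exhaustive), I'd run
> something like this on each zone; shard 19 is the one from your warning:
>
> $ radosgw-admin sync status
> $ radosgw-admin mdlog status
> $ radosgw-admin mdlog list --shard-id=19 | head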
>
> On 7/25/19 1:59 PM, Brett Chancellor wrote:
> > I'm having an issue similar to
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html .
> > I don't see where any solution was proposed.
> >
> > $ ceph health detail
> > HEALTH_WARN 1 large omap objects
> > LARGE_OMAP_OBJECTS 1 large omap objects
> >     1 large objects found in pool 'us-prd-1.rgw.log'
> >     Search the cluster log for 'Large omap object found' for more details.
> >
> > $ grep "Large omap object" /var/log/ceph/ceph.log
> > 2019-07-25 14:58:21.758321 osd.3 (osd.3) 15 : cluster [WRN] Large omap
> > object found. Object:
> > 51:61eb35fe:::meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19:head
> > Key count: 3382154 Size (bytes): 611384043
> >
> > $ rados -p us-prd-1.rgw.log listomapkeys meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19 | wc -l
> > 3382154
> >
> > $ rados -p us-prd-1.rgw.log listomapvals meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19
> > This returns entries from almost every bucket, across multiple
> > tenants. Several of the entries are from buckets that no longer exist
> > on the system.
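> >
> > In case it helps, a quick (admittedly hacky) way to see how the keys are
> > spread across all of the meta.log shards in that pool:
> >
> > $ for o in $(rados -p us-prd-1.rgw.log ls | grep '^meta\.log\.'); do echo "$o $(rados -p us-prd-1.rgw.log listomapkeys "$o" | wc -l)"; done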
> >
> > $ ceph df |egrep 'OBJECTS|.rgw.log'
> >     POOL                  ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
> >     us-prd-1.rgw.log      51     758 MiB     228         758 MiB     0         102 TiB
> >
> > Thanks,
> >
> > -Brett
> >
>