Hi all, it's exactly the same story today: the same 8 OSDs and a lot of
garbage collection objects to process.

Below is the number of "cls_rgw.cc:3284: gc_iterate_entries end_key="
entries per OSD log file
hostA:
  /var/log/ceph/ceph-osd.58.log
  1826467
hostB:
  /var/log/ceph/ceph-osd.88.log
  2924241
hostC:
  /var/log/ceph/ceph-osd.153.log
  581002
  /var/log/ceph/ceph-osd.164.log
  3278606
hostD:
  /var/log/ceph/ceph-osd.95.log
  1426963
hostE:
  /var/log/ceph/ceph-osd.4.log
  2716914
  /var/log/ceph/ceph-osd.53.log
  943749
hostF:
  /var/log/ceph/ceph-osd.172.log
  4085334
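
For reference, counts like the ones above can be gathered with a simple
`grep -c` per log file (a hypothetical reproduction; the sample log created
here just stands in for a real OSD log):

```shell
# Create a small stand-in log file with two matching gc_iterate_entries lines.
printf '%s\n' \
  'cls_rgw.cc:3284: gc_iterate_entries end_key=1_01533616446.000580407' \
  'some unrelated log line' \
  'cls_rgw.cc:3284: gc_iterate_entries end_key=1_01533616446.001886318' \
  > /tmp/sample-osd.log

# grep -c prints the number of lines matching the pattern.
grep -c 'gc_iterate_entries end_key=' /tmp/sample-osd.log
```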


# radosgw-admin gc list --include-all|grep oid |wc -l
302357
#

Can anyone please explain what is going on?

Thanks!
Jakub

On Tue, Aug 7, 2018 at 3:03 PM Jakub Jaszewski <[email protected]>
wrote:

> Hi,
>
> 8 out of 192 OSDs in our cluster (version 12.2.5) write plenty of records
> like "cls_rgw.cc:3284: gc_iterate_entries end_key=" to the corresponding
> log files, e.g.
>
> 2018-08-07 04:34:06.000585 7fdd8f012700  0 <cls>
> /build/ceph-12.2.5/src/cls/rgw/cls_rgw.cc:3284: gc_iterate_entries
> end_key=1_01533616446.000580407
> 2018-08-07 04:34:06.001888 7fdd8f012700  0 <cls>
> /build/ceph-12.2.5/src/cls/rgw/cls_rgw.cc:3284: gc_iterate_entries
> end_key=1_01533616446.001886318
> 2018-08-07 04:34:06.003395 7fdd8f012700  0 <cls>
> /build/ceph-12.2.5/src/cls/rgw/cls_rgw.cc:3284: gc_iterate_entries
> end_key=1_01533616446.003390299
> 2018-08-07 04:34:06.005205 7fdd8f012700  0 <cls>
> /build/ceph-12.2.5/src/cls/rgw/cls_rgw.cc:3284: gc_iterate_entries
> end_key=1_01533616446.005200341
>
> # grep '2018-08-07 04:34:06' /var/log/ceph/ceph-osd.4.log |wc -l
> 712
> #
>
> At the same time there were around 500,000 expired garbage collection
> objects.
>
> The log level of the OSD subsystem is set to the default of 1/5 across all
> OSDs.
>
> I wonder why only a few OSDs record this information, and whether it is
> something that should be logged at log level 1, or maybe higher?
> https://github.com/ceph/ceph/blob/v12.2.5/src/cls/rgw/cls_rgw.cc#L3284
>
> Thanks
> Jakub
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
