Hello, we are running a Ceph cluster plus RGW on Luminous 12.2.12 that serves 
as S3-compatible storage. We have noticed some buckets where the `rgw.none` 
section in the output of `radosgw-admin bucket stats` shows an extremely large 
value for `num_objects`, which is not plausible. It looks like an underflow: a 
positive number was subtracted from 0 and the result is interpreted and 
displayed as a uint64. For example,

```
# radosgw-admin bucket stats --bucket redacted
{
    "bucket": "redacted",
     ...........
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 18446744073709551607
        },
        "rgw.main": {
            "size": 1687971465874,
            "size_actual": 1696692400128,
            "size_utilized": 1687971465874,
            "size_kb": 1648409635,
            "size_kb_actual": 1656926172,
            "size_kb_utilized": 1648409635,
            "num_objects": 4290147
        },
        "rgw.multimeta": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 75
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
```
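For what it's worth, the reported number is exactly what a small negative count looks like when reinterpreted as a 64-bit unsigned integer; a quick check in Python:

```python
# The reported value is consistent with uint64 wraparound: a positive count
# subtracted from 0, stored in an unsigned 64-bit field.
reported = 18446744073709551607

# How a 0 - 9 underflow appears when wrapped modulo 2**64:
wrapped = (0 - 9) % 2**64
print(wrapped)           # 18446744073709551607

# Distance below zero, i.e. how far the counter underflowed:
print(2**64 - reported)  # 9
```

So in this bucket the `rgw.none` counter appears to have gone 9 below zero.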

We did find a few reports of this issue, e.g. 
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-November/037531.html. 

Are there any known usage patterns that can lead the object count to become 
that large? Also, is there a way to accurately collect the object count for 
each bucket in the cluster? We would like to use it for management purposes.
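For context, this is roughly how we are collecting the count today; a sketch that simply skips the `rgw.none` category, assuming the JSON shape shown above (the embedded `stats_json` string stands in for `radosgw-admin bucket stats --bucket <name>` output):

```python
import json

# Hypothetical sketch: sum num_objects over all usage categories except
# rgw.none, whose counter appears to have underflowed. The string below
# stands in for actual `radosgw-admin bucket stats` output.
stats_json = """
{
    "bucket": "redacted",
    "usage": {
        "rgw.none": {"num_objects": 18446744073709551607},
        "rgw.main": {"num_objects": 4290147},
        "rgw.multimeta": {"num_objects": 75}
    }
}
"""

stats = json.loads(stats_json)
total = sum(category["num_objects"]
            for name, category in stats["usage"].items()
            if name != "rgw.none")
print(total)  # 4290222
```

This obviously only papers over the problem; we would still like to know whether the underlying counter can be trusted or resynced.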
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]