Hi,

I've been benchmarking my Luminous test cluster. The S3 user has deleted
all of its objects and buckets, yet the RGW data pool still holds 7 TiB of data:
default.rgw.buckets.data 11 7.16TiB      3.27        212TiB      1975644

There are no buckets left (radosgw-admin bucket list returns []), and
the only user's stats show no usage.

radosgw-admin gc list shows nothing pending GC, but if I add
--include-all there are quite a few entries:

[
    {
        "tag": "01a3b9f4-d6e8-4ac6-a44f-3ebb53dcee1b.3099907.1022170\u0000",
        "time": "2018-12-06 13:36:59.0.88218s",
        "objs": [
            {
                "pool": "default.rgw.buckets.data",
                "oid":
"01a3b9f4-d6e8-4ac6-a44f-3ebb53dcee1b.3665142.15__multipart_b713be7d5b86b2fa51830f7c13092223.2~7_mvHIZc-L8mOFy51hkGnZbn4ihgOXR.1",
                "key": "",
                "instance": ""
            },
[continues for 16k lines]
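For reference, these are roughly the commands I've been running (a sketch; I'm not certain how gc process treats not-yet-due entries on Luminous):

```shell
# GC entries that are already due for processing -- empty on my cluster
radosgw-admin gc list

# Also include entries whose expiration hasn't been reached yet;
# this is where the ~16k entries above come from
radosgw-admin gc list --include-all

# Manually kick the garbage collector; presumably this drains the due
# entries, but I don't know whether it touches the not-yet-due ones
radosgw-admin gc process
```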

What am I meant to do about this? Granted, it's a test system, so I could
blow the pool away and start again, but I'd like to understand the
underlying issue and how I'd handle this on a production cluster.

We've previously had data-loss issues when using radosgw-admin orphans
find (i.e. it reported objects that were not in fact orphans)... :(

Regards,

Matthew


-- 
 The Wellcome Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com