Hi Cephs

Several nodes of our Ceph 14.2.5 cluster are fully dedicated to hosting cold
storage / backup data.

Today, while checking data usage with a customer, we found that radosgw-admin
bucket stats is reporting:

{
    "bucket": "XXXXXX",
    "tenant": "",
    "zonegroup": "4d8c7c5f-ca40-4ee3-b5bb-b2cad90bd007",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "default.rgw.buckets.data",
        "data_extra_pool": "default.rgw.buckets.non-ec",
        "index_pool": "default.rgw.buckets.index"
    },
    "id": "48efb8c3-693c-4fe0-bbe4-fdc16f590a82.15946848.1",
    "marker": "48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18",
    "index_type": "Normal",
    "owner": "XXXXXXXX",
    "ver": "0#410482,1#441516,2#401062,3#371595",
    "master_ver": "0#0,1#0,2#0,3#0",
    "mtime": "2019-06-08 00:26:06.266567Z",
    "max_marker": "0#,1#,2#,3#",
    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 0
        },
        "rgw.main": {
            "size": 5118399148914,
            "size_actual": 5118401548288,
            "size_utilized": 5118399148914,
            "size_kb": 4998436669,
            "size_kb_actual": 4998439012,
            "size_kb_utilized": 4998436669,
            "num_objects": 293083
        },
        "rgw.multimeta": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 378,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 1,
            "num_objects": 1688
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1024,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
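One detail that stands out to me: rgw.multimeta reports 1688 objects, which as
far as I understand tracks the meta objects of multipart uploads that were
started but never completed or aborted. The parts of those uploads consume real
space yet are invisible to a normal object listing. A quick way to list them
from the S3 side (the endpoint URL is a placeholder for our real one):

    aws s3api list-multipart-uploads \
        --bucket XXXXXX \
        --endpoint-url http://rgw.example.com:7480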

That's nearly 5 TB of used space according to Ceph (5,118,399,148,914 bytes,
about 5.12 TB / 4.65 TiB), while the external tools report just 1.42 TB.
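I assume those external tools compute roughly the equivalent of summing the
objects visible over S3, e.g. (endpoint is again a placeholder):

    # Sum of all objects visible through S3 -- prints Total Objects / Total Size
    aws s3 ls s3://XXXXXX --recursive --summarize \
        --endpoint-url http://rgw.example.com:7480 | tail -n 2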

In this case alone the difference is more than 300%. As the platform is billed
by usage, this causes real problems with our customers.

Our setup doesn't use EC; all pools are replicated. All nodes run 14.2.5, and
6 SSDs are fully dedicated to the RGW index pool.
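For anyone who wants to cross-check this, the replication setting and
pool-level usage can be verified with:

    # Confirm the data pool is replicated, not EC
    ceph osd pool ls detail | grep default.rgw.buckets.data
    # Compare stored vs. raw usage per pool
    ceph df detail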

There are no errors in the RGW logs, or anything else that would explain this
huge difference.
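As a sanity check, I know a garbage-collection backlog can also inflate usage,
since deleted or overwritten objects keep consuming space until GC processes
them; that can be inspected with:

    # Objects pending garbage collection (a long list would explain extra usage)
    radosgw-admin gc list --include-all | head -n 50
    # Kick off a GC pass manually
    radosgw-admin gc process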

To give a sense of the overall magnitude: the customer reports using roughly
70-80 TB across multiple buckets, but our Ceph cluster reports 163 TB.

I'm planning to move all of the customer's data out to a NAS, clean up this
bucket/space, and re-upload it, but that process is not very transparent or
smooth for the customer.
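Before doing that, is it worth recalculating the bucket stats in place? If I
read the man page correctly, this rescans the actual objects and fixes the
index accounting (per bucket, so it may take a while on 293k objects):

    radosgw-admin bucket check --bucket=XXXXXX --check-objects --fix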

Suggestions welcome.

Regards
Manuel

