Hello,

I am running a Ceph 13.2.0 cluster exclusively for radosgw / S3.

I only have one big bucket, and the cluster is currently in a warning state:

  cluster:
    id:     d605c463-9f1c-4d91-a390-a28eedb21650
    health: HEALTH_WARN
            13 large omap objects

I tried to google it, but I was not able to find out what to do about the
"large omap objects".

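If I understand it correctly, the offending objects are reported during deep scrub, so I was going to look them up roughly like this (the log path is just how it looks on my monitors):

   ceph health detail
   grep "Large omap object found" /var/log/ceph/ceph.log
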
As far as I understand, Ceph should automatically reshard an S3 bucket when
an omap object gets too big. Or is this something I have to do manually?
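
For what it's worth, this is how I checked whether dynamic resharding is enabled and how full the index shards are ("<rgw-name>" is a placeholder for my rgw instance name):

   ceph daemon /var/run/ceph/ceph-client.rgw.<rgw-name>.asok config get rgw_dynamic_resharding
   ceph daemon /var/run/ceph/ceph-client.rgw.<rgw-name>.asok config get rgw_max_objs_per_shard
   radosgw-admin bucket limit check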

"radosgw-admin reshard list" shows that no resharding is ongoing right now.

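I also checked the per-bucket status with:

   radosgw-admin reshard status --bucket nuxeo_live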

radosgw-admin metadata get bucket.instance:nuxeo_live:6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.4854.4
{
    "key": "bucket.instance:nuxeo_live:6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.4854.4",
    "ver": {
        "tag": "Y2epzPoujRDfxM5CNMZgKPaA",
        "ver": 6
    },
    "mtime": "2018-06-08 14:48:15.515840Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "nuxeo_live",
                "marker": "6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.4848.1",
                "bucket_id": "6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.4854.4",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2018-05-23 13:31:57.664398Z",
            "owner": "nuxeo_live",
            "flags": 0,
            "zonegroup": "506cc27c-fef5-4b89-a9f3-4c928a74b955",
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": false,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 349,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 2,
            "new_bucket_instance_id":
"6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.176143.1"
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val":
"AgKpAAAAAwIhAAAACgAAAG51eGVvX2xpdmUPAAAAbnV4ZW8gbGl2ZSB1c2VyBAN8AAAAAQEAAAAKAAAAbnV4ZW9fbGl2ZQ8AAAABAAAACgAAAG51eGVvX2xpdmUFA0UAAAACAgQAAAAAAAAACgAAAG51eGVvX2xpdmUAAAAAAAAAAAICBAAAAA8AAAAPAAAAbnV4ZW8gbGl2ZSB1c2VyAAAAAAAAAAAAAAAAAAAAAA=="
            },
            {
                "key": "user.rgw.idtag",
                "val": ""
            }
        ]
    }
}

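Since the metadata above shows "reshard_status": 2 and a new_bucket_instance_id, I assume an earlier (dynamic?) reshard already ran or got stuck. To compare, I was going to look at the other instance and the current bucket stats (the instance id is copied from the output above):

   radosgw-admin metadata get bucket.instance:nuxeo_live:6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.176143.1
   radosgw-admin bucket stats --bucket nuxeo_live
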
I also tried to manually trigger a resharding, but it failed with:


   *** NOTICE: operation will not remove old bucket index objects ***
   ***         these will need to be removed manually ***
   tenant:
   bucket name: nuxeo_live
   old bucket instance id: 6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.184670.1
   new bucket instance id: 6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.176197.1
   WARNING: RGWReshard::add failed to drop lock on bucket_name:6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.184670.1 ret=-2
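
Before retrying, I was planning to clear any stale reshard state roughly like this (I am not sure whether "reshard cancel" is the right thing to do on 13.2.0):

   radosgw-admin reshard list
   radosgw-admin reshard cancel --bucket nuxeo_live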