Hi Francois,

I did find a solution.
As always, it's an RGW bug.

https://tracker.ceph.com/issues/69281

https://github.com/ceph/ceph/commit/46a6f8e001c23e54d324e69751bc2fdf751779aa
https://github.com/ceph/ceph/commit/65271d8836a6f11e0d2eebb10926996947485521

If you apply the patches from these commits on top of 19.2.2, you will be able to create buckets in your secondary zonegroup. (I got it working in the lab.)
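
In case it helps, a minimal sketch of one way to apply the two commits on top of a v19.2.2 source tree (this only covers the cherry-pick step; the picks may need minor conflict resolution, and rebuilding/repackaging radosgw depends entirely on your environment):

git clone https://github.com/ceph/ceph.git
cd ceph
git checkout v19.2.2
git cherry-pick 46a6f8e001c23e54d324e69751bc2fdf751779aa 65271d8836a6f11e0d2eebb10926996947485521
# then rebuild and redeploy the radosgw binaries/containers as usual for your setup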

It seems that these fixes will be backported to reef and squid at some point.

Cheers
Adam



On 24.07.2025 at 19:43, Scheurer François wrote:
Hi Adam


May I ask if by chance you found a solution to your issue in the meantime?
With 2 clusters on Squid 19.2.2 in a lab we also see similar issues with
creating buckets in the secondary zonegroup.

Cheers

Francois



On the secondary zonegroup (single zone):

2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.084+0000 72665ca00640 20 req 
2119425827517683058 0.005000044s s3:create_bucket sign_request_v4(): sigv4 
header: Authorization: AWS4-HMAC-SHA256 
Credential=Xc5KJNrFXNtFIfw5ItMd/20250724/ch-zh1/s3/aws4_request,SignedHeaders=date;x-amz-content-sha256;x-amz-date,Signature=c23951fb928737c29956aa3ff27d9897ebbbaa1e9c9b37f64de74673935fccc1
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.084+0000 72665ca00640 20 req 
2119425827517683058 0.005000044s s3:create_bucket sign_request_v4(): sigv4 
header: x-amz-content-sha256: UNSIGNED-PAYLOAD
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.084+0000 72665ca00640 20 req 
2119425827517683058 0.005000044s s3:create_bucket sign_request_v4(): sigv4 
header: x-amz-date: 20250724T172124Z
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch bash[2115035]: 
debug 2025-07-24T17:21:24.084+0000 72665ca00640 20 sending request to 
http://10.33.98.135:7480/pmatest-ge1b?rgwx-uid=40465718fdc745989975a2b9bdaa0e84&rgwx-zonegroup=b0d161ed-ef8c-4551-b5eb-00c02cb989a6
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch bash[2115035]: 
debug 2025-07-24T17:21:24.084+0000 72665ca00640 20 register_request 
mgr=0x6463de83d320 req_data->id=19, curl_handle=0x6463e2757a80
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch bash[2115035]: 
debug 2025-07-24T17:21:24.084+0000 72674f200640 20 link_request 
req_data=0x6463e24d6d20 req_data->id=19, curl_handle=0x6463e2757a80
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.087+0000 72665c000640  2 req 
2119425827517683058 0.008000071s s3:create_bucket completing
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.087+0000 72665c000640 10 req 
2119425827517683058 0.008000071s cache get: 
name=ch-ge1-az1.rgw.log++script.postrequest. : expiry miss
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch bash[2115035]: 
debug 2025-07-24T17:21:24.087+0000 72665c000640 20 req 2119425827517683058 
0.008000071s rados->read ofs=0 len=0
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.088+0000 72665e800640 20 req 
2119425827517683058 0.009000080s rados_obj.operate() r=-2 bl.length=0
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.088+0000 72665e800640 10 req 
2119425827517683058 0.009000080s cache put: 
name=ch-ge1-az1.rgw.log++script.postrequest. info.flags=0x0
2025-07-24T19:21:24.095774+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.088+0000 72665e800640 10 req 
2119425827517683058 0.009000080s adding ch-ge1-az1.rgw.log++script.postrequest. 
to cache LRU end
2025-07-24T19:21:24.097075+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.088+0000 72665e800640  2 req 
2119425827517683058 0.009000080s s3:create_bucket op status=-2202
2025-07-24T19:21:24.097075+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.088+0000 72665e800640  2 req 
2119425827517683058 0.009000080s s3:create_bucket http status=503
2025-07-24T19:21:24.097075+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.088+0000 72665e800640  1 ====== req 
done req=0x726754cfd5d0 op status=-2202 http_status=503 latency=0.009000080s 
======
2025-07-24T19:21:24.097075+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch bash[2115035]: debug 
2025-07-24T17:21:24.088+0000 72665e800640  1 beast: 0x726754cfd5d0: 10.33.145.51 - 
40465718fdc745989975a2b9bdaa0e84 [24/Jul/2025:17:21:24.079 +0000] "PUT /pmatest-ge1b 
HTTP/1.1" 503 256 - "aws-cli/1.41.12 md/Botocore#1.39.12 ua/2.1 os/linux#6.8.0-56-generic 
md/arch#x86_64 lang/python#3.10.12 md/pyimpl#CPython m/D,a,c,N cfg/retry-mode#legacy 
botocore/1.39.12" - latency=0.009000080s
2025-07-24T19:21:24.097075+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.091+0000 726647600640 20 failed to 
read header: end of stream
2025-07-24T19:21:24.376479+02:00 ewge1-ceph1hp1-poc.poc.i.ewcs.ch 
bash[2115035]: debug 2025-07-24T17:21:24.374+0000 7265d8c00640 10 lua 
background: cache get: name=ch-ge1-az1.rgw.log++script.background. : hit 
(negative entry)

The create bucket request gets forwarded to and rejected by the master
zonegroup (also a single zone):
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.086+0000 7af0dc800640 10 req 8249004579294306081 
0.001000010s credential scope = 20250724/ch-zh1/s3/aws4_request
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.086+0000 7af0dc800640 10 req 8249004579294306081 
0.001000010s canonical headers format = date:Thu Jul 24 17:21:24 2025
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
x-amz-content-sha256:0801ca4348996dc35da00925fd7c41248e02551b1303413b0569fbe1e96cc6f3
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
x-amz-date:20250724T172124Z
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.086+0000 7af0dc800640 10 req 8249004579294306081 
0.001000010s payload request hash = 
0801ca4348996dc35da00925fd7c41248e02551b1303413b0569fbe1e96cc6f3
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.086+0000 7af0dc800640 10 req 8249004579294306081 
0.001000010s canonical request = PUT
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: /pmatest-ge1b
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
rgwx-uid=40465718fdc745989975a2b9bdaa0e84&rgwx-zonegroup=b0d161ed-ef8c-4551-b5eb-00c02cb989a6
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: date:Thu Jul 24 
17:21:24 2025
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
x-amz-content-sha256:0801ca4348996dc35da00925fd7c41248e02551b1303413b0569fbe1e96cc6f3
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
x-amz-date:20250724T172124Z
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
date;x-amz-content-sha256;x-amz-date
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
0801ca4348996dc35da00925fd7c41248e02551b1303413b0569fbe1e96cc6f3
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.087+0000 7af0dc800640 10 req 8249004579294306081 
0.002000020s canonical request hash = 
f90e8e7f35e349059d0011ee142c0c05f1a07a1752936faefaec829b28836922
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.087+0000 7af0dc800640 10 req 8249004579294306081 
0.002000020s string to sign = AWS4-HMAC-SHA256
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 20250724T172124Z
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
20250724/ch-zh1/s3/aws4_request
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: 
f90e8e7f35e349059d0011ee142c0c05f1a07a1752936faefaec829b28836922
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.087+0000 7af0dc800640  4 req 8249004579294306081 
0.002000020s ERROR: empty payload checksum mismatch, expected 
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 got 
0801ca4348996dc35da00925fd7c41248e02551b1303413b0569fbe1e96cc6f3
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.087+0000 7af0dc800640 20 req 8249004579294306081 
0.002000020s s3:create_bucket rgw::auth::s3::LocalEngine denied with 
reason=-2040
2025-07-24T19:21:24.095445+0200 ewos2-ctl2-dev bash[1114231]: debug 
2025-07-24T17:21:24.087+0000 7af0dc800640 20 req 8249004579294306081 
0.002000020s s3:create_bucket rgw::auth::s3::AWSAuthStrategy denied with 
reason=-2040




--


EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheu...@everyware.ch
web: http://www.everyware.ch
________________________________
From: Adam Prycki <apry...@man.poznan.pl>
Sent: Thursday, June 5, 2025 1:18:03 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: How to create buckets in secondary zonegroup?

Hello,

I have recreated my multisite lab from scratch but I'm still stuck on the
same bucket issue.
Can someone with more experience tell me whether this is a valid multisite
configuration?


Zonegroup configuration
(all zones are configured with the same system_key pair)
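
A minimal sketch of how such a shared system key can be set on each zone, with placeholder credentials (run against each zone on the cluster that hosts it, then commit the period; not necessarily the exact commands used here):

radosgw-admin zone modify --rgw-zone=zone-poznan-1-pcss-bst --access-key=<SYSTEM_ACCESS_KEY> --secret=<SYSTEM_SECRET_KEY>
radosgw-admin zone modify --rgw-zone=zone-lodz-1-lodman --access-key=<SYSTEM_ACCESS_KEY> --secret=<SYSTEM_SECRET_KEY>
radosgw-admin period update --commit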

radosgw-admin zonegroup get --rgw-zonegroup poznan-1
{
      "id": "73868eb9-fb8e-4ad8-bf31-37a16d2e6d14",
      "name": "poznan-1",
      "api_name": "poznan-1",
      "is_master": true,
      "endpoints": [
          "http://10.0.9.120:8000";
      ],
      "hostnames": [],
      "hostnames_s3website": [],
      "master_zone": "62ab5076-83f2-48ba-88f0-080aad9f271f",
      "zones": [
          {
              "id": "02c9655c-0e61-4421-9df2-dd5b954bbe82",
              "name": "zone-poznan-1-lodman",
              "endpoints": [
                  "http://10.0.9.121:8000";
              ],
              "log_meta": false,
              "log_data": true,
              "bucket_index_max_shards": 11,
              "read_only": false,
              "tier_type": "",
              "sync_from_all": true,
              "sync_from": [],
              "redirect_zone": "",
              "supported_features": [
                  "compress-encrypted",
                  "notification_v2",
                  "resharding"
              ]
          },
          {
              "id": "62ab5076-83f2-48ba-88f0-080aad9f271f",
              "name": "zone-poznan-1-pcss-bst",
              "endpoints": [
                  "http://10.0.9.120:8000";
              ],
              "log_meta": false,
              "log_data": true,
              "bucket_index_max_shards": 11,
              "read_only": false,
              "tier_type": "",
              "sync_from_all": true,
              "sync_from": [],
              "redirect_zone": "",
              "supported_features": [
                  "compress-encrypted",
                  "notification_v2",
                  "resharding"
              ]
          }
      ],
      "placement_targets": [
          {
              "name": "default-placement",
              "tags": [],
              "storage_classes": [
                  "STANDARD"
              ]
          }
      ],
      "default_placement": "default-placement",
      "realm_id": "4fa02715-aa1d-4163-8d7f-d4f52996d372",
      "sync_policy": {
          "groups": []
      },
      "enabled_features": [
          "notification_v2",
          "resharding"
      ]
}
gh-test-ceph-a ~ # radosgw-admin zonegroup get --rgw-zonegroup lodz-1
{
      "id": "259ffb31-643b-4ae3-b248-1327f5ff0887",
      "name": "lodz-1",
      "api_name": "lodz-1",
      "is_master": false,
      "endpoints": [
          "http://10.0.9.121:8001";
      ],
      "hostnames": [],
      "hostnames_s3website": [],
      "master_zone": "3b79f8f4-869f-420f-b31a-b24ad0c72792",
      "zones": [
          {
              "id": "3b79f8f4-869f-420f-b31a-b24ad0c72792",
              "name": "zone-lodz-1-lodman",
              "endpoints": [
                  "http://10.0.9.121:8001";
              ],
              "log_meta": false,
              "log_data": true,
              "bucket_index_max_shards": 11,
              "read_only": false,
              "tier_type": "",
              "sync_from_all": true,
              "sync_from": [],
              "redirect_zone": "",
              "supported_features": [
                  "compress-encrypted",
                  "notification_v2",
                  "resharding"
              ]
          },
          {
              "id": "7e5ca0ec-cb1d-4aaa-83ee-5af2061f5348",
              "name": "zone-lodz-1-pcss-bst",
              "endpoints": [
                  "http://10.0.9.120:8001";
              ],
              "log_meta": false,
              "log_data": true,
              "bucket_index_max_shards": 11,
              "read_only": false,
              "tier_type": "",
              "sync_from_all": true,
              "sync_from": [],
              "redirect_zone": "",
              "supported_features": [
                  "compress-encrypted",
                  "notification_v2",
                  "resharding"
              ]
          }
      ],
      "placement_targets": [
          {
              "name": "default-placement",
              "tags": [],
              "storage_classes": [
                  "STANDARD"
              ]
          }
      ],
      "default_placement": "default-placement",
      "realm_id": "4fa02715-aa1d-4163-8d7f-d4f52996d372",
      "sync_policy": {
          "groups": []
      },
      "enabled_features": [
          "notification_v2",
          "resharding"
      ]
}

Creating a bucket on the zone-lodz-1-lodman zone causes 503 errors:

s3cmd --host 10.0.9.121:8001 mb s3://lodz-1-test --region lodz-1
WARNING: Retrying failed request: / (503 (ServiceUnavailable))
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: / (503 (ServiceUnavailable))
WARNING: Waiting 6 sec...
WARNING: Retrying failed request: / (503 (ServiceUnavailable))

I see that the zone zone-lodz-1-lodman is making some requests to the master
zonegroup but they are failing.

2025-06-05T11:04:55.196+0000 7febf0cac6c0  4 req 793098438972080696
0.000000000s ERROR: empty payload checksum mismatch, expected
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 got
477cc47e2b1886f364c3e7e343ce437e29a1c450895c96e7e4ea13768fd48244
2025-06-05T11:04:55.196+0000 7febf0cac6c0 20 req 793098438972080696
0.000000000s s3:create_bucket rgw::auth::s3::LocalEngine denied with
reason=-2040
2025-06-05T11:04:55.196+0000 7febf0cac6c0 20 req 793098438972080696
0.000000000s s3:create_bucket rgw::auth::s3::AWSAuthStrategy denied with
reason=-2040
2025-06-05T11:04:55.196+0000 7febf0cac6c0  5 req 793098438972080696
0.000000000s s3:create_bucket Failed the auth strategy, reason=-2040
2025-06-05T11:04:55.196+0000 7febf0cac6c0 10 failed to authorize request
2025-06-05T11:04:55.196+0000 7febf0cac6c0 20 req 793098438972080696
0.000000000s op->ERRORHANDLER: err_no=-2040 new_err_no=-2040
2025-06-05T11:04:55.196+0000 7febf0cac6c0 10 req 793098438972080696
0.000000000s cache get:
name=zone-poznan-1-pcss-bst.rgw.log++script.postrequest. : hit (negative
entry)
2025-06-05T11:04:55.196+0000 7febf0cac6c0  2 req 793098438972080696
0.000000000s s3:create_bucket op status=0
2025-06-05T11:04:55.196+0000 7febf0cac6c0  2 req 793098438972080696
0.000000000s s3:create_bucket http status=400
2025-06-05T11:04:55.196+0000 7febf0cac6c0  1 ====== req done
req=0x7feb8be483c0 op status=0 http_status=400 latency=0.000000000s ======
2025-06-05T11:04:55.196+0000 7febf0cac6c0  1 beast: 0x7feb8be483c0:
10.0.9.121 - - [05/Jun/2025:11:04:55.196 +0000] "PUT
/lodz-1-test/?rgwx-uid=test&rgwx-zonegroup=73868eb9-fb8e-4ad8-bf31-37a16d2e6d14
HTTP/1.1" 400 250 - - - latency=0.000000000s
2025-06-05T11:04:55.236+0000 7fecb56356c0 20 HTTP_ACCEPT=*/*
2025-06-05T11:04:55.236+0000 7fecb56356c0 20
HTTP_AUTHORIZATION=AWS4-HMAC-SHA256
Credential=TWN7U7S3WCTGZ9PZSPJS/20250605/poznan-1/s3/aws4_request,SignedHeaders=date;host;x-amz-content-sha256;x-amz-date,Signature=63a881ab9f9b73ec1c06f722e09dee61bb7c5a1ab2223231f57dbad36537810f
2025-06-05T11:04:55.236+0000 7fecb56356c0 20 HTTP_DATE=Thu, 05 Jun 2025
11:04:55 +0000
2025-06-05T11:04:55.236+0000 7fecb56356c0 20 HTTP_HOST=10.0.9.120:8000
2025-06-05T11:04:55.236+0000 7fecb56356c0 20 HTTP_VERSION=1.1


Creating a bucket on the zone-poznan-1-pcss-bst zone with the lodz-1 region
results in a bucket in the poznan-1 region.
gh-test-ceph-a ~ # s3cmd --host 10.0.9.120:8000 mb s3://lodz-1-test
--region lodz-1
Bucket 's3://lodz-1-test/' created
gh-test-ceph-a ~ # s3cmd --host 10.0.9.120:8000 info s3://lodz-1-test
--region lodz-1
s3://lodz-1-test/ (bucket):
     Location:  poznan-1
     Payer:     BucketOwner
     Ownership: none
     Versioning:none
     Expiration rule: none
     Block Public Access: none
     Policy:    none
     CORS:      none
     ACL:       test: FULL_CONTROL

In that second scenario I see that RGW notices the lodz-1 region specified in
LocationConstraint, but it ignores it and creates the bucket in the wrong
zonegroup.

2025-06-05T11:01:53.378+0000 7fec8cde46c0 20 req 3494031126505109504
0.000000000s s3:create_bucket create bucket input
data=<CreateBucketConfiguration><LocationConstraint>lodz-1</LocationConstraint></CreateBucketConfiguration>
2025-06-05T11:01:53.378+0000 7fec8cde46c0 10 req 3494031126505109504
0.000000000s s3:create_bucket create bucket location constraint: lodz-1
2025-06-05T11:01:53.378+0000 7fec8cde46c0 20 req 3494031126505109504
0.000000000s s3:create_bucket using zonegroup-default placement target
default-placement
2025-06-05T11:01:53.378+0000 7fec8cde46c0 10 req 3494031126505109504
0.000000000s s3:create_bucket cache get:
name=zone-poznan-1-pcss-bst.rgw.meta+root+lodz-1-test : hit (negative entry)
2025-06-05T11:01:53.378+0000 7fec8cde46c0 10 req 3494031126505109504
0.000000000s s3:create_bucket user=test bucket=:lodz-1-test[])
2025-06-05T11:01:53.395+0000 7fec8f5e96c0 10 req 3494031126505109504
0.016666736s s3:create_bucket cache put:
name=zone-poznan-1-pcss-bst.rgw.meta+root+.bucket.meta.lodz-1-test:62ab5076-83f2-48ba-88f0-080aad9f271f.16467.3
info.flags=0x17
2025-06-05T11:01:53.395+0000 7fec8f5e96c0 10 req 3494031126505109504
0.016666736s s3:create_bucket adding
zone-poznan-1-pcss-bst.rgw.meta+root+.bucket.meta.lodz-1-test:62ab5076-83f2-48ba-88f0-080aad9f271f.16467.3
to cache LRU end
2025-06-05T11:01:53.395+0000 7fec8f5e96c0 10 req 3494031126505109504
0.016666736s s3:create_bucket updating xattr: name=user.rgw.acl
bl.length()=129

Am I encountering a bug or is my configuration faulty?

Best regards
Adam Prycki


On 3.06.2025 at 18:55, Adam Prycki wrote:
Hello,

I have a question about Ceph multisite theory:
how do you create buckets in a non-master zonegroup?

I'm trying to create a test environment with 2 regions.

I've created a deployment which looks like this (a rough command sketch follows the outline):
REALM
      region-1 (master zonegroup)
          region-1-zone-a (master zone)
          region-1-zone-b
      region-2
          region-2-zone-a (master zone)
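
For reference, a minimal sketch of radosgw-admin commands that would produce a layout like the above, with placeholder endpoints and access keys (not necessarily the exact commands used here; the secondary zones follow the same pattern):

radosgw-admin realm create --rgw-realm=REALM --default
radosgw-admin zonegroup create --rgw-zonegroup=region-1 --endpoints=http://host-1a:8000 --master --default
radosgw-admin zone create --rgw-zonegroup=region-1 --rgw-zone=region-1-zone-a --endpoints=http://host-1a:8000 --master --default --access-key=<SYSTEM_ACCESS_KEY> --secret=<SYSTEM_SECRET_KEY>
radosgw-admin period update --commit
# on the second cluster:
radosgw-admin realm pull --url=http://host-1a:8000 --access-key=<SYSTEM_ACCESS_KEY> --secret=<SYSTEM_SECRET_KEY>
radosgw-admin zonegroup create --rgw-zonegroup=region-2 --rgw-realm=REALM --endpoints=http://host-2a:8000
radosgw-admin zone create --rgw-zonegroup=region-2 --rgw-zone=region-2-zone-a --endpoints=http://host-2a:8000 --master --access-key=<SYSTEM_ACCESS_KEY> --secret=<SYSTEM_SECRET_KEY>
radosgw-admin period update --commit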


The goal would be to have 2 regions, each with separate data and 2
active-active zones. Just after creating region-2-zone-a I decided to create
a bucket there in region-2 and upload something, but I cannot create a bucket
in the new region.

RGW allows you to create buckets only on the master zone.
If I try to use s3cmd to create a bucket on the region-1-zone-b endpoint, I
get error 503 (as expected).

Trying to list a region-1 bucket on the region-2 zone gives me 301 (Moved
Permanently), so region-2 has the metadata and working authentication.

But I cannot figure out how to make a bucket in region-2.
s3cmd -c .s3cfg-region-2 mb s3://bucket gives me error 503, as if I were
trying to create a bucket on a non-master zone.

Creating a bucket on the region-1 endpoint with s3cmd flags specifying
LocationConstraint region-2 creates the bucket in region-1.
I see in the region-1-zone-a logs that RGW sees region-2 in the
LocationConstraint argument but still creates the bucket in region-1.
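
For concreteness, the kind of invocation I mean looks roughly like this (the endpoint and bucket name are placeholders); s3cmd should send the --region value as the LocationConstraint in the CreateBucket request:

s3cmd --host <region-1-zone-a-endpoint> --region region-2 mb s3://region-2-test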

I'm working with Ceph 19.2.2.

Best regards
Adam Prycki

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io




