The East and West clusters have been upgraded to Quincy (17.2.6).

We are still seeing replication failures. Digging into the logs, I found the
following items of interest.

What is the best way to continue troubleshooting this?
What is curl attempting to fetch but failing to obtain?
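
One way to surface the exact URLs the stalled curl calls are requesting is to
re-run the sync with gateway debugging turned up. A sketch (the log wording
varies between releases; /tmp/bucket-sync-debug.log is just a scratch file):

        # Re-run the failing sync with verbose debug output; at --debug-rgw=20
        # the gateway logs the outbound HTTP requests it sends to the peer zone.
        radosgw-admin bucket sync --bucket=ceph-bucket --source-zone=rgw-west run \
            --debug-rgw=20 --debug-ms=1 2>&1 | tee /tmp/bucket-sync-debug.log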

-----
        root@east01:~# radosgw-admin bucket sync --bucket=ceph-bucket --source-zone=rgw-west run
        2023-05-09T15:22:43.582+0000 7f197d7fa700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
        2023-05-09T15:22:43.582+0000 7f1a48dd9e40  0 data sync: ERROR: failed to fetch bucket index status
        2023-05-09T15:22:43.582+0000 7f1a48dd9e40  0 RGW-SYNC:bucket[ceph-bucket:ddd66ab8-0417-dddd-dddd-aaaaaaaa.93706683.1:119<-ceph-bucket:ddd66ab8-0417-dddd-dddd-aaaaaaaa.93706683.93706683.1:119]: ERROR: init sync on bucket failed, retcode=-5
        2023-05-09T15:24:54.652+0000 7f197d7fa700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
        2023-05-09T15:27:05.725+0000 7f197d7fa700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
-----
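
The warning reads as the HTTP requests to the peer zone stalling rather than
being refused outright. A quick way to check raw throughput from an east
gateway to each west endpoint (hostnames and port taken from the zonegroup
config quoted further down; adjust as needed):

        # Measure HTTP status and average download speed per west endpoint.
        for h in west01 west02 west03; do
            curl -s -o /dev/null -m 30 \
                -w "$h  %{http_code}  %{speed_download} B/s\n" \
                "http://$h.example.net:8080/"
        done

If these also crawl, the bottleneck is likely the network path or an
intermediate proxy/load balancer rather than RGW itself.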

        radosgw-admin bucket sync --bucket=ceph-bucket-prd info
                  realm 98e0e391- (rgw-blobs)
              zonegroup 0e0faf4e- (WestEastCeph)
                   zone ddd66ab8- (rgw-east)
                 bucket :ceph-bucket[ddd66ab8-xxxx.93706683.1])

            source zone b2a4a31c-
                 bucket :ceph-bucket[ddd66ab8-.93706683.1])
        root@bctlpmultceph01:~# radosgw-admin bucket sync --bucket=ceph-bucket status
                  realm 98e0e391- (rgw-blobs)
              zonegroup 0e0faf4e- (WestEastCeph)
                   zone ddd66ab8- (rgw-east)
                 bucket :ceph-bucket[ddd66ab8.93706683.1])

            source zone b2a4a31c- (rgw-west)
          source bucket :ceph-bucket[ddd66ab8-.93706683.1])
                        full sync: 0/120 shards
                        incremental sync: 120/120 shards
                        bucket is behind on 112 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,77,78,80,81,82,83,84,85,86,89,90,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119]
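
For the shards stuck behind, the sync error log often records which object
and endpoint failed, and an individual datalog shard can be inspected
directly; a couple of probes (shard 0 is an arbitrary pick from the list
above):

        # List recorded sync errors; entries usually name the failing object.
        radosgw-admin sync error list

        # Check one datalog shard's state against the west source zone.
        radosgw-admin data sync status --source-zone=rgw-west --shard-id=0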


-----


2023-05-09T15:46:21.069+0000 7f1fc7fff700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20857f2700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f2092ffd700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f2080fe9700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20817ea700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f208b7fe700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f20867f4700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f2086ff5700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f20b12b8700  0 WARNING: curl operation timed out, network average transfer speed less than 1024 Bytes per second during 300 seconds.
2023-05-09T15:46:21.069+0000 7f2085ff3700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+0000 7f20827ec700  0 rgw async rados processor: store->fetch_remote_obj() returned r=-5


From: Casey Bodley <cbod...@redhat.com>
Date: Thursday, April 27, 2023 at 12:37 PM
To: Tarrago, Eli (RIS-BCT) <eli.tarr...@lexisnexisrisk.com>
Cc: Ceph Users <ceph-users@ceph.io>
Subject: Re: [ceph-users] Re: Radosgw multisite replication issues

On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT)
<eli.tarr...@lexisnexisrisk.com> wrote:
>
> After working on this issue for a bit, the active plan is to fail over the
> master to the “west” DC: perform a realm pull from the west so that it
> forces the failover to occur, then have the “east” DC pull the realm data
> back. Hopefully this will get both sides back in sync.
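>
> For reference, the realm/period pull described might look like the sketch
> below (the URL and credentials are placeholders for a west endpoint and the
> multisite system user's keys):
>
>         radosgw-admin realm pull --url=http://west01.example.net:8080 \
>             --access-key=<system-access-key> --secret=<system-secret-key>
>         radosgw-admin period pull --url=http://west01.example.net:8080 \
>             --access-key=<system-access-key> --secret=<system-secret-key>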
>
> My concern with this approach is that both sides are “active”, meaning
> clients have been writing data to both endpoints. Will this cause an issue
> where “west” ends up with data that the metadata has no record of, and then
> deletes that data?

No object data would be deleted as a result of metadata failover issues, no.

>
> Thanks
>
> From: Tarrago, Eli (RIS-BCT) <eli.tarr...@lexisnexisrisk.com>
> Date: Thursday, April 20, 2023 at 3:13 PM
> To: Ceph Users <ceph-users@ceph.io>
> Subject: Radosgw multisite replication issues
> Good Afternoon,
>
> I am experiencing an issue where east-1 is no longer able to replicate from
> west-1; however, after a realm pull, west-1 is now able to replicate from
> east-1.
>
> In other words:
> West <- Can Replicate <- East
> West -> Cannot Replicate -> East
>
> After confirming the access and secret keys are identical on both sides, I 
> restarted all radosgw services.
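>
> A quick way to compare the system keys on the two sides (a sketch, assuming
> jq is available):
>
>         # on an east node:
>         radosgw-admin zone get | jq .system_key
>         # on a west node:
>         radosgw-admin zone get | jq .system_key
>
> The two objects should be identical.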
>
> The current status of the clusters is below.
>
> Thank you for your help,
>
> Eli Tarrago
>
>
> root@east01:~# radosgw-admin zone get
> {
>     "id": "ddd66ab8-0417-46ee-a53b-043352a63f93",
>     "name": "rgw-east",
>     "domain_root": "rgw-east.rgw.meta:root",
>     "control_pool": "rgw-east.rgw.control",
>     "gc_pool": "rgw-east.rgw.log:gc",
>     "lc_pool": "rgw-east.rgw.log:lc",
>     "log_pool": "rgw-east.rgw.log",
>     "intent_log_pool": "rgw-east.rgw.log:intent",
>     "usage_log_pool": "rgw-east.rgw.log:usage",
>     "roles_pool": "rgw-east.rgw.meta:roles",
>     "reshard_pool": "rgw-east.rgw.log:reshard",
>     "user_keys_pool": "rgw-east.rgw.meta:users.keys",
>     "user_email_pool": "rgw-east.rgw.meta:users.email",
>     "user_swift_pool": "rgw-east.rgw.meta:users.swift",
>     "user_uid_pool": "rgw-east.rgw.meta:users.uid",
>     "otp_pool": "rgw-east.rgw.otp",
>     "system_key": {
>         "access_key": "PxxxxxxxxxxxxxxxxW",
>         "secret_key": "Hxxxxxxxxxxxxxxxx6"
>     },
>     "placement_pools": [
>         {
>             "key": "default-placement",
>             "val": {
>                 "index_pool": "rgw-east.rgw.buckets.index",
>                 "storage_classes": {
>                     "STANDARD": {
>                         "data_pool": "rgw-east.rgw.buckets.data"
>                     }
>                 },
>                 "data_extra_pool": "rgw-east.rgw.buckets.non-ec",
>                 "index_type": 0
>             }
>         }
>     ],
>     "realm_id": "98e0e391-16fb-48da-80a5-08437fd81789",
>     "notif_pool": "rgw-east.rgw.log:notif"
> }
>
> root@west01:~# radosgw-admin zone get
> {
>    "id": "b2a4a31c-1505-4fdc-b2e0-ea07d9463da1",
>     "name": "rgw-west",
>     "domain_root": "rgw-west.rgw.meta:root",
>     "control_pool": "rgw-west.rgw.control",
>     "gc_pool": "rgw-west.rgw.log:gc",
>     "lc_pool": "rgw-west.rgw.log:lc",
>     "log_pool": "rgw-west.rgw.log",
>     "intent_log_pool": "rgw-west.rgw.log:intent",
>     "usage_log_pool": "rgw-west.rgw.log:usage",
>     "roles_pool": "rgw-west.rgw.meta:roles",
>     "reshard_pool": "rgw-west.rgw.log:reshard",
>     "user_keys_pool": "rgw-west.rgw.meta:users.keys",
>     "user_email_pool": "rgw-west.rgw.meta:users.email",
>     "user_swift_pool": "rgw-west.rgw.meta:users.swift",
>     "user_uid_pool": "rgw-west.rgw.meta:users.uid",
>     "otp_pool": "rgw-west.rgw.otp",
>     "system_key": {
>         "access_key": "PxxxxxxxxxxxxxxW",
>         "secret_key": "Hxxxxxxxxxxxxxx6"
>     },
>     "placement_pools": [
>         {
>             "key": "default-placement",
>             "val": {
>                 "index_pool": "rgw-west.rgw.buckets.index",
>                 "storage_classes": {
>                     "STANDARD": {
>                         "data_pool": "rgw-west.rgw.buckets.data"
>                     }
>                 },
>                 "data_extra_pool": "rgw-west.rgw.buckets.non-ec",
>                 "index_type": 0
>             }
>         }
>     ],
>     "realm_id": "98e0e391-16fb-48da-80a5-08437fd81789",
>     "notif_pool": "rgw-west.rgw.log:notif"
> east01:~# radosgw-admin metadata sync status
> {
>     "sync_status": {
>         "info": {
>             "status": "init",
>             "num_shards": 0,
>             "period": "",
>             "realm_epoch": 0
>         },
>         "markers": []
>     },
>     "full_sync": {
>         "total": 0,
>         "complete": 0
>     }
> }
>
> west01:~#  radosgw-admin metadata sync status
> {
>     "sync_status": {
>         "info": {
>             "status": "sync",
>             "num_shards": 64,
>             "period": "44b6b308-e2d8-4835-8518-c90447e7b55c",
>             "realm_epoch": 3
>         },
>         "markers": [
>             {
>                 "key": 0,
>                 "val": {
>                     "state": 1,
>                     "marker": "",
>                     "next_step_marker": "",
>                     "total_entries": 46,
>                     "pos": 0,
>                     "timestamp": "0.000000",
>                     "realm_epoch": 3
>                 }
>             },
> #### goes on for a long time…
>             {
>                 "key": 63,
>                 "val": {
>                     "state": 1,
>                     "marker": "",
>                     "next_step_marker": "",
>                     "total_entries": 0,
>                     "pos": 0,
>                     "timestamp": "0.000000",
>                     "realm_epoch": 3
>                 }
>             }
>         ]
>     },
>     "full_sync": {
>         "total": 46,
>         "complete": 46
>     }
> }
>
> east01:~#  radosgw-admin sync status
>           realm 98e0e391-16fb-48da-80a5-08437fd81789 (rgw-blobs)
>       zonegroup 0e0faf4e-39f5-402e-9dbb-4a1cdc249ddd (EastWestceph)
>            zone ddd66ab8-0417-46ee-a53b-043352a63f93 (rgw-east)
>   metadata sync no sync (zone is master)
> 2023-04-20T19:03:13.388+0000 7f25fa036c80  0 ERROR: failed to fetch datalog info
>       data sync source: b2a4a31c-1505-4fdc-b2e0-ea07d9463da1 (rgw-west)
>                         failed to retrieve sync info: (13) Permission denied

Does the multisite system user exist on the rgw-west zone? You can check
there with `radosgw-admin user info --access-key PxxxxxxxxxxxxxxW`

The sync status on rgw-west shows that metadata sync is caught up, so I
would expect it to have that user metadata, but maybe not?
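
If the user really is missing on rgw-west, one hedged recovery option is to
re-run metadata sync on that side so it re-fetches user metadata from the
master zone:

        # on a west node: re-initialize and re-run metadata sync
        # (this re-fetches all metadata from the master; it can take a while)
        radosgw-admin metadata sync init
        radosgw-admin metadata sync run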

>
> west01:~# radosgw-admin sync status
>           realm 98e0e391-16fb-48da-80a5-08437fd81789 (rgw-blobs)
>       zonegroup 0e0faf4e-39f5-402e-9dbb-4a1cdc249ddd (EastWestceph)
>            zone b2a4a31c-1505-4fdc-b2e0-ea07d9463da1 (rgw-west)
>   metadata sync syncing
>                 full sync: 0/64 shards
>                 incremental sync: 64/64 shards
>                 metadata is caught up with master
>       data sync source: ddd66ab8-0417-46ee-a53b-043352a63f93 (rgw-east)
>                         syncing
>                         full sync: 0/128 shards
>                         incremental sync: 128/128 shards
>                         data is behind on 16 shards
>                         behind shards: [5,56,62,65,66,70,76,86,87,94,104,107,111,113,120,126]
>                         oldest incremental change not applied: 2023-04-20T19:02:48.783283+0000 [5]
>
> east01:~# radosgw-admin zonegroup get
> {
>     "id": "0e0faf4e-39f5-402e-9dbb-4a1cdc249ddd",
>     "name": "EastWestceph",
>     "api_name": "EastWestceph",
>     "is_master": "true",
>     "endpoints": [
>         "http://east01.example.net:8080/",
>         "http://east02.example.net:8080/",
>         "http://east03.example.net:8080/",
>         "http://west01.example.net:8080/",
>         "http://west02.example.net:8080/",
>         "http://west03.example.net:8080/"
>     ],
>     "hostnames": [
>         "eastvip.example.net",
>         "westvip.example.net"
>     ],
>     "hostnames_s3website": [],
>     "master_zone": "ddd66ab8-0417-46ee-a53b-043352a63f93",
>     "zones": [
>         {
>             "id": "b2a4a31c-1505-4fdc-b2e0-ea07d9463da1",
>             "name": "rgw-west",
>             "endpoints": [
>                 "http://west01.example.net:8080/",
>                 "http://west02.example.net:8080/",
>                 "http://west03.example.net:8080/"
>             ],
>             "log_meta": "false",
>             "log_data": "true",
>             "bucket_index_max_shards": 0,
>             "read_only": "false",
>             "tier_type": "",
>             "sync_from_all": "true",
>             "sync_from": [],
>             "redirect_zone": ""
>         },
>         {
>             "id": "ddd66ab8-0417-46ee-a53b-043352a63f93",
>             "name": "rgw-east",
>             "endpoints": [
>                 "http://east01.example.net:8080/",
>                 "http://east02.example.net:8080/",
>                 "http://east03.example.net:8080/"
>             ],
>             "log_meta": "false",
>             "log_data": "true",
>             "bucket_index_max_shards": 0,
>             "read_only": "false",
>             "tier_type": "",
>             "sync_from_all": "true",
>             "sync_from": [],
>             "redirect_zone": ""
>         }
>     ],
>     "placement_targets": [
>         {
>             "name": "default-placement",
>             "tags": [],
>             "storage_classes": [
>                 "STANDARD"
>             ]
>         }
>     ],
>     "default_placement": "default-placement",
>     "realm_id": "98e0e391-16fb-48da-80a5-08437fd81789",
>     "sync_policy": {
>         "groups": []
>     }
> }
>
>