Ramin,

I think you're still going to experience what Casey described.

If your intent is to completely isolate bucket metadata/data in one
zonegroup from another, then I believe you need multiple independent
realms, each with its own endpoint.

For instance:

Ceph Cluster A
Realm1/zonegroup1/zone1 (endpoint.left)
Realm2/zonegroup2/zone2 (endpoint.right)

Then you don't need to bother with a location constraint. Buckets
created via endpoint.right will land in realm2/zonegroup2, and buckets
created via endpoint.left will show up in zonegroup1/zone1. They will
be completely isolated from one another, yet reside on the same
cluster.
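
For reference, a rough sketch of what that could look like with
radosgw-admin (untested; the realm/zonegroup/zone names and endpoints
are the placeholders from the example above):

```sh
# First independent realm, served at endpoint.left
radosgw-admin realm create --rgw-realm=realm1
radosgw-admin zonegroup create --rgw-zonegroup=zonegroup1 \
    --rgw-realm=realm1 --master --endpoints=http://endpoint.left
radosgw-admin zone create --rgw-zonegroup=zonegroup1 --rgw-zone=zone1 \
    --master --endpoints=http://endpoint.left
radosgw-admin period update --commit --rgw-realm=realm1

# Second, fully independent realm, served at endpoint.right
radosgw-admin realm create --rgw-realm=realm2
radosgw-admin zonegroup create --rgw-zonegroup=zonegroup2 \
    --rgw-realm=realm2 --master --endpoints=http://endpoint.right
radosgw-admin zone create --rgw-zonegroup=zonegroup2 --rgw-zone=zone2 \
    --master --endpoints=http://endpoint.right
radosgw-admin period update --commit --rgw-realm=realm2
```

Each realm then gets its own RGW instance(s), configured with the
matching rgw_realm/rgw_zonegroup/rgw_zone options.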

A location constraint could still be used to select different placement
targets within zonegroup2, for instance
zonegroup2:put-data-on-replicated-storage or
zonegroup2:put-data-on-erasure-storage. Without separate realms, I
believe you will continue to experience what Casey explained.
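
With s3cmd, that would look something like this (a sketch; the
placement target name is hypothetical and would need to exist in
zonegroup2's placement_targets first):

```sh
# LocationConstraint of the form <zonegroup>:<placement-target>
s3cmd --region=zonegroup2:put-data-on-erasure-storage mb s3://my-ec-bucket
```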

If it is your intent to have multiple isolated zonegroups within the
same cluster, then I would read through the docs for creating a
multi-realm deployment, and just stop short of creating an additional
synchronized/replicated realm (step #15):

https://www.ibm.com/docs/en/storage-ceph/5?topic=administration-configuring-multiple-realms-in-same-storage-cluster

You should then have two independent realms residing on the same
cluster. You can further isolate the two zonegroups by deploying new
root pools. By default, I believe the zonegroup/zone settings are
stored in .rgw.root.

Any actions that don't specify an alternative zonegroup/zone will, by
default, fall under the .rgw.root hierarchy.
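
You can see what currently lives there with a plain rados listing:

```sh
# list the realm/period/zonegroup/zone config objects
# in the default root pool
rados -p .rgw.root ls
```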

You could, for example, deploy two new realms, each with its own root
pool.

zonegroup1 --> .zonegroup1.rgw.root
zonegroup2 --> .zonegroup2.rgw.root 

These two zonegroups will be isolated in their own realms, and also
isolated from the default .rgw.root.
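
A sketch of what that might look like in ceph.conf (the client section
names are placeholders for whichever RGW instances serve each realm;
the same options also matter when running radosgw-admin against a
given realm):

```ini
[client.rgw.zonegroup1]
rgw_realm_root_pool = .zonegroup1.rgw.root
rgw_zonegroup_root_pool = .zonegroup1.rgw.root
rgw_zone_root_pool = .zonegroup1.rgw.root
rgw_period_root_pool = .zonegroup1.rgw.root

[client.rgw.zonegroup2]
rgw_realm_root_pool = .zonegroup2.rgw.root
rgw_zonegroup_root_pool = .zonegroup2.rgw.root
rgw_zone_root_pool = .zonegroup2.rgw.root
rgw_period_root_pool = .zonegroup2.rgw.root
```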

Once you have that set up, each realm behaves like its own completely
separate RGW deployment, while residing within a single Ceph cluster.
It takes a bit of forethought to get everything set up correctly. You
will need to create keys for each realm and be mindful of which realm
you are operating on. It may take several attempts to get everything
in place the way you expect it to be.
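
For the keys, something along these lines (a sketch; the uids and
display names are placeholders):

```sh
# one set of credentials per realm, scoped to that realm's zonegroup/zone
radosgw-admin user create --uid=realm1-admin --display-name="Realm1 Admin" \
    --system --rgw-realm=realm1 --rgw-zonegroup=zonegroup1 --rgw-zone=zone1
radosgw-admin user create --uid=realm2-admin --display-name="Realm2 Admin" \
    --system --rgw-realm=realm2 --rgw-zonegroup=zonegroup2 --rgw-zone=zone2
```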


On Sat, 2023-05-20 at 15:42 +0330, Ramin Najjarbashi wrote:
> I had a typo in s3cmd output :D
> 
> I am encountering an issue with the LocationConstraint when using
> s3cmd to create a bucket. In my setup, I have two zone groups within
> the same realm. My intention is to create the bucket in the second
> zone group by correctly setting the bucket location using
> bucket_location:
> <CreateBucketConfiguration><LocationConstraint>abrak</LocationConstraint></CreateBucketConfiguration>.
> However, despite configuring it this way, the bucket is still being
> created in the first zone group.
> 
> https://gist.github.com/RaminNietzsche/1ff266a6158b437319dcd2eb10eeb34e
> 
> ```sh
> s3cmd --region zg2-api-name mb s3://test-zg2
> s3cmd info s3://test-zg2
> s3://test-zg2/ (bucket):
>    Location:  zg1-api-name
>    Payer:     BucketOwner
>    Expiration Rule: none
>    Policy:    none
>    CORS:      none
>    ACL:       development: FULL_CONTROL
> ```
> 
> 
> On Thu, May 18, 2023 at 4:29 PM Ramin Najjarbashi <
> ramin.najarba...@gmail.com> wrote:
> 
> > Thanks, Casey
> > 
> > Currently, when I create a new bucket and specify the bucket
> > location as zone group 2, I expect the request to be handled by the
> > master zone in zone group 1, as that is the expected behavior.
> > However, I noticed that regardless of the specified bucket location,
> > the zone group ID for all buckets created using this method remains
> > the same as zone group 1.
> > 
> > 
> > My expectation was that when I create a bucket in zone group 2, the
> > zone group ID in the bucket metadata would reflect the correct zone
> > group ID.
> > 
> > 
> > 
> > On Thu, May 18, 2023 at 15:54 Casey Bodley <cbod...@redhat.com>
> > wrote:
> > 
> > > On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi
> > > <ramin.najarba...@gmail.com> wrote:
> > > > 
> > > > Hi
> > > > 
> > > > I'm currently using Ceph version 16.2.7 and facing an issue with
> > > > bucket creation in a multi-zone configuration. My setup includes
> > > > two zone groups:
> > > > 
> > > > ZG1 (Master) and ZG2, with one zone in each zone group (zone-1
> > > > in ZG1 and zone-2 in ZG2).
> > > > 
> > > > The objective is to create buckets in a specific zone group
> > > > (ZG2) using the bucket constructor. However, despite setting the
> > > > desired zone group (abrak) in the request, the bucket is still
> > > > being created in the master zone group (ZG1). I have defined the
> > > > following endpoint pattern for each zone group:
> > > > 
> > > > s3.{zg}.mydomain.com
> > > > 
> > > > I am using the s3cmd client to interact with the Ceph cluster. I
> > > > have ensured that I provide the necessary endpoint and region
> > > > information while executing the bucket creation command. Despite
> > > > my efforts, the bucket consistently gets created in ZG1 instead
> > > > of ZG2.
> > > 
> > > this is expected behavior for the metadata consistency model. all
> > > metadata gets created on the metadata master zone first, and syncs
> > > to all other zones in the realm from there. so your buckets will
> > > be visible to every zonegroup
> > > 
> > > however, ZG2 is still the 'bucket location', and its object data
> > > should only reside in ZG2's zones. any s3 requests on that bucket
> > > sent to ZG1 will get redirected to ZG2 and serviced there
> > > 
> > > if you don't want any metadata shared between the two zonegroups,
> > > you can put them in separate realms. but that includes user
> > > metadata as well
> > > 
> > > > 
> > > > - Ceph Version: 16.2.7
> > > > - Zone Group 1 (ZG1) Endpoint: http://s3.zonegroup1.mydomain.com
> > > > - Zone Group 2 (ZG2) Endpoint: http://s3.zonegroup2.mydomain.com
> > > > - Desired Bucket Creation Region: zg2-api-name
> > > > 
> > > > I have reviewed the Ceph documentation and made necessary
> > > > configuration changes, but I have not been able to achieve the
> > > > desired result. I kindly request your assistance in understanding
> > > > why the bucket constructor is not honoring the specified region
> > > > and always defaults to ZG1. I would greatly appreciate any
> > > > insights, recommendations, or potential solutions to resolve
> > > > this issue.
> > > > 
> > > >  Thank you for your time and support.
> > > > 
> > > > ---------------------------------
> > > > Here are the details of my setup:
> > > > ---------------------------------
> > > > 
> > > > ```sh
> > > > s3cmd --region zg2-api-name mb s3://test-zg2
> > > > s3cmd info s3://test-zg2
> > > > s3://test-zg2/ (bucket):
> > > >    Location:  zg2-api-name
> > > >    Payer:     BucketOwner
> > > >    Expiration Rule: none
> > > >    Policy:    none
> > > >    CORS:      none
> > > >    ACL:       development: FULL_CONTROL
> > > > ```
> > > > 
> > > > this is my config file:
> > > > 
> > > > ```ini
> > > > [default]
> > > > access_key = <KEY>
> > > > secret_key = <SECRET>
> > > > host_base = s3.zonegroup1.mydomain.com
> > > > host_bucket = s3.%(location)s.mydomain.com
> > > > #host_bucket = %(bucket)s.s3.zonegroup1.mydomain.com
> > > > #host_bucket = s3.%(location)s.mydomain.com
> > > > #host_bucket = s3.%(region)s.mydomain.com
> > > > bucket_location = zg1-api-name
> > > > use_https = False
> > > > ```
> > > > 
> > > > 
> > > > Zonegroup configuration for the `zonegroup1` region:
> > > > 
> > > > ```json
> > > > {
> > > >     "id": "fb3f818a-ca9b-4b12-b431-7cdcd80006d",
> > > >     "name": "zg1-api-name",
> > > >     "api_name": "zg1-api-name",
> > > >     "is_master": "false",
> > > >     "endpoints": [
> > > >         "http://s3.zonegroup1.mydomain.com"
> > > >     ],
> > > >     "hostnames": [
> > > >         "s3.zonegroup1.mydomain.com"
> > > >     ],
> > > >     "hostnames_s3website": [
> > > >         "s3-website.zonegroup1.mydomain.com"
> > > >     ],
> > > >     "master_zone": "at2-stg-zone",
> > > >     "zones": [
> > > >         {
> > > >             "id": "at2-stg-zone",
> > > >             "name": "at2-stg-zone",
> > > >             "endpoints": [
> > > >                 "http://s3.zonegroup1.mydomain.com"
> > > >             ],
> > > >             "log_meta": "false",
> > > >             "log_data": "true",
> > > >             "bucket_index_max_shards": 11,
> > > >             "read_only": "false",
> > > >             "tier_type": "",
> > > >             "sync_from_all": "true",
> > > >             "sync_from": [],
> > > >             "redirect_zone": ""
> > > >         }
> > > >     ],
> > > >     "placement_targets": [
> > > >         {
> > > >             "name": "default-placement",
> > > >             "tags": [],
> > > >             "storage_classes": [
> > > >                 "STANDARD"
> > > >             ]
> > > >         }
> > > >     ],
> > > >     "default_placement": "default-placement",
> > > >     "realm_id": "fa2f8194-4a9d-4b98-b411-9cdcd1e5506a",
> > > >     "sync_policy": {
> > > >         "groups": []
> > > >     }
> > > > }
> > > > ```
> > > > 
> > > > Zonegroup configuration for the `zonegroup2` region:
> > > > 
> > > > ```json
> > > > {
> > > >     "id": "a513d60c-44a2-4289-a23d-b7a511be6ee4",
> > > >     "name": "zg2-api-name",
> > > >     "api_name": "zg2-api-name",
> > > >     "is_master": "false",
> > > >     "endpoints": [
> > > >         "http://s3.zonegroup2.mydomain.com"
> > > >     ],
> > > >     "hostnames": [
> > > >         "s3.zonegroup2.mydomain.com"
> > > >     ],
> > > >     "hostnames_s3website": [],
> > > >     "master_zone": "zonegroup2-sh-1",
> > > >     "zones": [
> > > >         {
> > > >             "id": "zonegroup2-sh-1",
> > > >             "name": "zonegroup2-sh-1",
> > > >             "endpoints": [
> > > >                 "http://s3.zonegroup2.mydomain.com"
> > > >             ],
> > > >             "log_meta": "false",
> > > >             "log_data": "false",
> > > >             "bucket_index_max_shards": 11,
> > > >             "read_only": "false",
> > > >             "tier_type": "",
> > > >             "sync_from_all": "true",
> > > >             "sync_from": [],
> > > >             "redirect_zone": ""
> > > >         }
> > > >     ],
> > > >     "placement_targets": [
> > > >         {
> > > >             "name": "default-placement",
> > > >             "tags": [],
> > > >             "storage_classes": [
> > > >                 "STANDARD"
> > > >             ]
> > > >         }
> > > >     ],
> > > >     "default_placement": "default-placement",
> > > >     "realm_id": "fa2f8194-4a9d-4b98-b411-9cdcd1e5506a",
> > > >     "sync_policy": {
> > > >         "groups": []
> > > >     }
> > > > }
> > > > ```
> > > 
> > > 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
