[ceph-users] Re: Creating a bucket with bucket constructor in Ceph v16.2.7

2023-05-20 Thread Ramin Najjarbashi
I had a typo in the s3cmd output :D

I am encountering an issue with the LocationConstraint when creating a bucket
with s3cmd. My setup has two zone groups within the same realm, and my
intention is to create the bucket in the second zone group by setting
bucket_location to abrak. However, despite configuring it this way, the
bucket is still being created in the first zone group.

https://gist.github.com/RaminNietzsche/1ff266a6158b437319dcd2eb10eeb34e

```sh
s3cmd --region zg2-api-name mb s3://test-zg2
s3cmd info s3://test-zg2
s3://test-zg2/ (bucket):
   Location:  zg1-api-name
   Payer: BucketOwner
   Expiration Rule: none
   Policy:none
   CORS:  none
   ACL:   development: FULL_CONTROL
```
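
For anyone checking on the RGW side, this is roughly how the bucket's actual
zone group can be verified (a sketch, assuming radosgw-admin access on one of
the clusters and that your bucket stats output includes the zonegroup field;
the bucket name is the test-zg2 from above):

```sh
# Sketch: the "zonegroup" field in bucket stats is the ID of the zonegroup
# the bucket belongs to.
radosgw-admin bucket stats --bucket test-zg2 | grep zonegroup

# Compare against the IDs reported for each zonegroup.
radosgw-admin zonegroup get --rgw-zonegroup zg1-api-name | grep '"id"'
radosgw-admin zonegroup get --rgw-zonegroup zg2-api-name | grep '"id"'
```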


On Thu, May 18, 2023 at 4:29 PM Ramin Najjarbashi <
ramin.najarba...@gmail.com> wrote:

> Thanks, Casey
>
> Currently, when I create a new bucket and specify the bucket location as
> zone group 2, I expect the request to be handled by the master zone in zone
> group 1, since that is the expected behavior. However, I noticed that,
> regardless of the specified bucket location, the zone group ID for all
> buckets created this way remains that of zone group 1.
>
>
> My expectation was that when I create a bucket in zone group 2, the zone
> group ID in the bucket metadata would reflect the correct zone group ID.
>
>
>
> On Thu, May 18, 2023 at 15:54 Casey Bodley  wrote:
>
>> On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi
>>  wrote:
>> >
>> > Hi
>> >
>> > I'm currently using Ceph version 16.2.7 and facing an issue with bucket
>> > creation in a multi-zone configuration. My setup includes two zone
>> groups:
>> >
>> > ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1
>> and
>> > zone-2 in ZG2).
>> >
>> > The objective is to create buckets in a specific zone group (ZG2) using
>> the
>> > bucket constructor.
>> > However, despite setting the desired zone group (abrak) in the request,
>> the
>> > bucket is still being created in the master zone group (ZG1).
>> > I have defined the following endpoint pattern for each zone group:
>> >
>> > s3.{zg}.mydomain.com
>> >
>> > I am using the s3cmd client to interact with the Ceph cluster. I have
>> > ensured that I provide the necessary endpoint and region information
>> while
>> > executing the bucket creation command. Despite my efforts, the bucket
>> > consistently gets created in ZG1 instead of ZG2.
>>
>> This is expected behavior for the metadata consistency model. All
>> metadata gets created on the metadata master zone first and syncs to
>> all other zones in the realm from there, so your buckets will be
>> visible to every zonegroup.
>>
>> However, ZG2 is still the 'bucket location', and its object data
>> should only reside in ZG2's zones. Any S3 requests on that bucket sent
>> to ZG1 will get redirected to ZG2 and serviced there.
>>
>> If you don't want any metadata shared between the two zonegroups, you
>> can put them in separate realms, but that includes user metadata as
>> well.
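
As an aside, the model described above can be illustrated with awscli against
the endpoints used in this thread (a sketch only, and not what s3cmd does
internally):

```sh
# CreateBucket sent to ZG2's endpoint, with an explicit LocationConstraint
# matching ZG2's api_name.
aws s3api create-bucket \
  --bucket test-zg2 \
  --create-bucket-configuration LocationConstraint=zg2-api-name \
  --endpoint-url http://s3.zonegroup2.mydomain.com

# After metadata sync, the bucket is visible via ZG1 as well, but its
# location should still report ZG2; object requests sent to ZG1 get
# redirected there.
aws s3api get-bucket-location \
  --bucket test-zg2 \
  --endpoint-url http://s3.zonegroup1.mydomain.com
```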
>>
>> >
>> > - Ceph Version: 16.2.7
>> > - Zone Group 1 (ZG1) Endpoint: http://s3.zonegroup1.mydomain.com
>> > - Zone Group 2 (ZG2) Endpoint: http://s3.zonegroup2.mydomain.com
>> > - Desired Bucket Creation Region: zg2-api-name
>> >
>> > I have reviewed the Ceph documentation and made the necessary configuration
>> > changes, but I have not been able to achieve the desired result. I kindly
>> > request your assistance in understanding why the bucket constructor is
>> not
>> > honoring the specified region and always defaults to ZG1. I would
>> greatly
>> > appreciate any insights, recommendations, or potential solutions to
>> resolve
>> > this issue.
>> >
>> >  Thank you for your time and support.
>> >
>> > -
>> > Here are the details of my setup:
>> > -
>> >
>> > ```sh
>> > s3cmd --region zg2-api-name mb s3://test-zg2
>> > s3cmd info s3://test-zg2
>> > s3://test-zg2/ (bucket):
>> >Location:  zg2-api-name
>> >Payer: BucketOwner
>> >Expiration Rule: none
>> >Policy:none
>> >CORS:  none
>> >ACL:   development: FULL_CONTROL
>> > ```
>> >
>> > this is my config file:
>> >
>> > ```ini
>> > [default]
>> > access_key = 
>> > secret_key = 
>> > host_base = s3.zonegroup1.mydomain.com
>> > host_bucket = s3.%(location)s.mydomain.com
>> > #host_bucket = %(bucket)s.s3.zonegroup1.mydomain.com
>> > #host_bucket = s3.%(location)s.mydomain.com
>> > #host_bucket = s3.%(region)s.mydomain.com
>> > bucket_location = zg1-api-name
>> > use_https = False
>> > ```
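
(For comparison, a config sketch aimed at ZG2: host_base points at ZG2's
endpoint and bucket_location matches its api_name. This assumes
virtual-hosted bucket names resolve under that domain, i.e. rgw_dns_name is
set accordingly.)

```ini
[default]
access_key = <access key>
secret_key = <secret key>
host_base = s3.zonegroup2.mydomain.com
host_bucket = %(bucket)s.s3.zonegroup2.mydomain.com
bucket_location = zg2-api-name
use_https = False
```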
>> >
>> >
>> > Zonegroup configuration for the `zonegroup1` region:
>> >
>> > ```json
>> > {
>> > "id": "fb3f818a-ca9b-4b12-b431-7cdcd80006d",
>> > "name": "zg1-api-name",
>> > "api_name": "zg1-api-name",
>> > "is_master": "false",
>> > "endpoints": [
>> > "http://s3.zonegroup1.mydomain.com;,
>> > ],
>> > "hostnames": [
>> > 

[ceph-users] Re: Slow recovery on Quincy

2023-05-20 Thread Hector Martin
On 17/05/2023 03.07, 胡 玮文 wrote:
> Hi Sake,
> 
> We are experiencing the same. I set “osd_mclock_cost_per_byte_usec_hdd” to
> 0.1 (the default is 2.6) and get about 15 times the backfill speed, without
> significantly affecting client IO. This parameter seems to be calculated
> wrongly: from the description, 5e-3 should be a reasonable value for an HDD
> (corresponding to 200 MB/s). I noticed this default was originally 5.2, then
> changed to 2.6 to increase the recovery speed. So I suspect the original
> author just converted the unit wrongly; he may have wanted 5.2e-3 but wrote
> 5.2 in the code.
> 
> But all this may not be important in the next version. I see the relevant
> code has been rewritten, and this parameter has been removed.
> 
> The high_recovery_ops profile works very poorly for us. It increases the
> average latency of client IO from 50 ms to about 1 s.
> 
> Weiwen Hu
> 

Thank you for this; that parameter indeed seems completely wrong
(assuming it means what it says on the tin). After changing it, my
Quincy cluster is now recovering at a much more reasonable speed.
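
For reference, the arithmetic behind the 5e-3 figure quoted above, plus the
override as a runtime config change (a sketch; the 0.1 value is simply what
was reported in this thread, not a recommendation):

```sh
# osd_mclock_cost_per_byte_usec_hdd is a per-byte cost in microseconds.
# For a disk doing roughly 200 MB/s:
#   1 / (200 * 10^6 bytes/s) = 5e-9 s/byte = 5e-3 usec/byte
# i.e. about 500x smaller than the shipped default of 2.6.
ceph config set osd osd_mclock_cost_per_byte_usec_hdd 0.1
```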

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io