Hi,
>> Can I make existing bucket blind?
I didn't find a way to do that.
>> And how can I make ordinary and blind buckets coexist in one Ceph cluster?
The only way I see now is to change the configuration, restart the services, create the
new bucket, and then roll the configuration back.
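Roughly this sequence, sketched for a single default zone (the zone name and the systemd unit below are assumptions, adjust for your deployment):

radosgw-admin zone get --rgw-zone=default > zone.json
# edit zone.json: set "index_type": 1 in the bucket placement target
radosgw-admin zone set --rgw-zone=default --infile zone.json
systemctl restart ceph-radosgw.target
s3cmd mb s3://blind-bucket    # buckets created now are blind (indexless)
# then set "index_type" back to 0 in zone.json, run zone set again and
# restart, so later buckets get a normal index again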
Maybe someone from the Ceph developers can add some…
And how can I make ordinary and blind buckets coexist in one Ceph cluster?
2016-09-22 11:57 GMT+03:00 Василий Ангапов :
> Can I make existing bucket blind?
Can I make existing bucket blind?
2016-09-22 4:23 GMT+03:00 Stas Starikevich :
Ben,
Works fine as far as I see:
[root@273aa9f2ee9f /]# s3cmd mb s3://test
Bucket 's3://test/' created
[root@273aa9f2ee9f /]# s3cmd put /etc/hosts s3://test
upload: '/etc/hosts' -> 's3://test/hosts' [1 of 1]
196 of 196   100% in 0s   404.87 B/s  done
[root@273aa9f2ee9f /]# s3cmd ls s3://test
Thanks. Will try it out once we get on Jewel.
Just curious, does bucket deletion with --purge-objects work via
radosgw-admin with the no index option?
If not, I imagine rados could be used to delete them manually by prefix.
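Something along these lines might work as a manual fallback (an untested sketch; the pool name is the Jewel default and the marker value is a placeholder - the real one comes from 'radosgw-admin bucket stats --bucket=test', since RGW names data objects <bucket_marker>_<object_key>):

MARKER="default.123456.1"   # placeholder bucket marker
POOL="default.rgw.buckets.data"
rados -p "$POOL" ls | grep "^${MARKER}_" | while read obj; do
    rados -p "$POOL" rm "$obj"
done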
On Sep 21, 2016 6:02 PM, "Stas Starikevich" wrote:
Hi Ben,
Since 'Jewel', RadosGW supports blind buckets.
To enable blind buckets configuration I used:
radosgw-admin zone get --rgw-zone=default > default-zone.json
#change index_type from 0 to 1
vi default-zone.json
radosgw-admin zone set --rgw-zone=default --infile default-zone.json
To apply the changes, restart the RGW services.
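For reference, the part of default-zone.json that matters looks roughly like this (pool names below are just the Jewel defaults; index_type 0 means a normal indexed bucket, 1 means indexless/blind):

"placement_pools": [
    {
        "key": "default-placement",
        "val": {
            "index_pool": "default.rgw.buckets.index",
            "data_pool": "default.rgw.buckets.data",
            "data_extra_pool": "default.rgw.buckets.non-ec",
            "index_type": 1
        }
    }
]

Note that this only affects buckets created after the change; existing buckets keep the index type they were created with.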
Nice, thanks! Must have missed that one. It might work well for our use
case since we don't really need the index.
-Ben
On Wed, Sep 21, 2016 at 11:23 AM, Gregory Farnum wrote:
> On Wednesday, September 21, 2016, Ben Hines wrote:
>
>> Yes, 200 million is way too big for a single ceph RGW bucket
Yes, 200 million is way too big for a single ceph RGW bucket. We
encountered this problem early on and sharded our buckets into 20 buckets,
each of which has a sharded bucket index with 20 shards.
Unfortunately, enabling the sharded RGW index requires recreating the
bucket and all objects.
The fa…
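For buckets created after the fact, the shard count can at least be set up front in ceph.conf; a sketch (the section name and shard count here are illustrative, and the RGW daemons need a restart to pick it up):

[client.rgw.gateway1]
rgw_override_bucket_index_max_shards = 20

Existing buckets keep the index layout they were created with, which is why we had to recreate ours.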
Hello,
Is there any way to copy the RGW bucket index to another Ceph node to
lower the downtime of RGW? Right now I have a huge bucket with 200
million files, and its backfilling blocks RGW completely for an
hour and a half, even with a 10G network.
Thanks!