Re: [ceph-users] rgw bucket index manual copy

2016-09-22 Thread Stas Starikevich
Hi,

>> Can I make existing bucket blind?
I didn't find a way to do that.

>> And how can I make ordinary and blind buckets coexist in one Ceph cluster?
The only way I see now: change the configuration, restart services, create the new bucket, and roll back. Maybe someone from Ceph developers can add som
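A sketch of that configuration round-trip as commands, based on the zone get/set procedure quoted further down in the thread (the systemd unit name and the blind-zone.json/default-zone.json files are assumptions, not from the thread):

  radosgw-admin zone set --rgw-zone=default --infile blind-zone.json    # index_type=1
  systemctl restart ceph-radosgw@rgw.$(hostname -s)
  s3cmd mb s3://blind-bucket    # a bucket keeps the index_type active at creation
  radosgw-admin zone set --rgw-zone=default --infile default-zone.json  # back to 0
  systemctl restart ceph-radosgw@rgw.$(hostname -s)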

Re: [ceph-users] rgw bucket index manual copy

2016-09-22 Thread Василий Ангапов
And how can I make ordinary and blind buckets coexist in one Ceph cluster?

2016-09-22 11:57 GMT+03:00 Василий Ангапов :
> Can I make existing bucket blind?
>
> 2016-09-22 4:23 GMT+03:00 Stas Starikevich :
>> Ben,
>>
>> Works fine as far as I see:
>>
>> [root@273aa9f2ee9f /]# s3cmd mb s3://test
>>

Re: [ceph-users] rgw bucket index manual copy

2016-09-22 Thread Василий Ангапов
Can I make existing bucket blind?

2016-09-22 4:23 GMT+03:00 Stas Starikevich :
> Ben,
>
> Works fine as far as I see:
>
> [root@273aa9f2ee9f /]# s3cmd mb s3://test
> Bucket 's3://test/' created
>
> [root@273aa9f2ee9f /]# s3cmd put /etc/hosts s3://test
> upload: '/etc/hosts' -> 's3://test/hosts' [

Re: [ceph-users] rgw bucket index manual copy

2016-09-21 Thread Stas Starikevich
Ben,

Works fine as far as I see:

[root@273aa9f2ee9f /]# s3cmd mb s3://test
Bucket 's3://test/' created

[root@273aa9f2ee9f /]# s3cmd put /etc/hosts s3://test
upload: '/etc/hosts' -> 's3://test/hosts' [1 of 1]
 196 of 196   100% in    0s   404.87 B/s  done

[root@273aa9f2ee9f /]# s3cmd ls s3://te
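Worth spelling out what "works" means for a blind bucket: with index_type=1 no index is maintained, so bucket listings come back empty while direct GETs by key still succeed. A quick check against the transcript above (a sketch, assuming the same 'test' bucket and 'hosts' object):

  s3cmd ls s3://test                  # empty: nothing is written to the index
  s3cmd get s3://test/hosts ./hosts   # direct fetch by key still works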

Re: [ceph-users] rgw bucket index manual copy

2016-09-21 Thread Ben Hines
Thanks. Will try it out once we get on Jewel.

Just curious: does bucket deletion with --purge-objects work via radosgw-admin with the no-index option? If not, I imagine rados could be used to delete them manually by prefix.

On Sep 21, 2016 6:02 PM, "Stas Starikevich" wrote:
> Hi Ben,
>
> Since
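If --purge-objects turns out not to work without an index, the manual prefix route is possible because RGW names its data-pool objects with the bucket's marker as a prefix. A rough sketch, assuming the Jewel default data pool name and a hypothetical bucket 'test' (destructive; dry-run the ls | grep part first):

  # find the bucket marker that prefixes its RADOS objects
  radosgw-admin bucket stats --bucket=test | grep marker

  # remove every object carrying that prefix (substitute the real marker for MARKER)
  rados -p default.rgw.buckets.data ls | grep '^MARKER_' | \
      while read obj; do rados -p default.rgw.buckets.data rm "$obj"; done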

Re: [ceph-users] rgw bucket index manual copy

2016-09-21 Thread Stas Starikevich
Hi Ben,

Since Jewel, RadosGW supports blind buckets. To enable the blind buckets configuration I used:

radosgw-admin zone get --rgw-zone=default > default-zone.json
# change index_type from 0 to 1
vi default-zone.json
radosgw-admin zone set --rgw-zone=default --infile default-zone.json

To apply
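For reference, the field being flipped lives under the placement target inside default-zone.json. A sketch of the relevant fragment (Jewel-era zone layout; pool names vary per cluster, surrounding fields omitted):

  "placement_pools": [
      {
          "key": "default-placement",
          "val": {
              "index_pool": "default.rgw.buckets.index",
              "data_pool": "default.rgw.buckets.data",
              "data_extra_pool": "default.rgw.buckets.non-ec",
              "index_type": 1
          }
      }
  ]

(index_type 0 is a normal indexed bucket; 1 is blind/indexless.)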

Re: [ceph-users] rgw bucket index manual copy

2016-09-21 Thread Ben Hines
Nice, thanks! Must have missed that one. It might work well for our use case since we don't really need the index.

-Ben

On Wed, Sep 21, 2016 at 11:23 AM, Gregory Farnum wrote:
> On Wednesday, September 21, 2016, Ben Hines wrote:
>
>> Yes, 200 million is way too big for a single ceph RGW bucket

Re: [ceph-users] rgw bucket index manual copy

2016-09-21 Thread Gregory Farnum
On Wednesday, September 21, 2016, Ben Hines wrote:
> Yes, 200 million is way too big for a single ceph RGW bucket. We
> encountered this problem early on and sharded our buckets into 20 buckets,
> each of which has a sharded bucket index with 20 shards.
>
> Unfortunately, enabling the sharded RGW

Re: [ceph-users] rgw bucket index manual copy

2016-09-21 Thread Ben Hines
Yes, 200 million is way too big for a single ceph RGW bucket. We encountered this problem early on and sharded our buckets into 20 buckets, each of which has a sharded bucket index with 20 shards.

Unfortunately, enabling the sharded RGW index requires recreating the bucket and all objects. The fa
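For anyone configuring index sharding from scratch rather than migrating: in Jewel the shard count for newly created buckets comes from a ceph.conf option on the RGW hosts, and existing buckets keep the layout they were created with, which is why recreation is required. A minimal sketch assuming 20 shards as above (the instance name 'gateway1' is hypothetical):

  [client.rgw.gateway1]
  # applies only to buckets created after the rgw restarts
  rgw_override_bucket_index_max_shards = 20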

Re: [ceph-users] rgw bucket index manual copy

2016-09-20 Thread Wido den Hollander
> On 20 September 2016 at 10:55, Василий Ангапов wrote:
>
> Hello,
>
> Is there any way to copy the rgw bucket index to another Ceph node to
> lower the downtime of RGW? For now I have a huge bucket with 200
> million files and its backfilling is blocking RGW completely for an
> hour and a hal
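Since the index lives as omap data in the bucket index pool, one common mitigation (not necessarily what the truncated reply above goes on to suggest) is pinning that pool to fast, dedicated OSDs via CRUSH so index backfill finishes quickly. A sketch with Jewel-era syntax, assuming the default index pool name and an existing SSD-backed ruleset:

  # see which ruleset the index pool uses now
  ceph osd pool get default.rgw.buckets.index crush_ruleset

  # move it to the SSD ruleset (hypothetical ruleset id 1); triggers one backfill
  ceph osd pool set default.rgw.buckets.index crush_ruleset 1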

[ceph-users] rgw bucket index manual copy

2016-09-20 Thread Василий Ангапов
Hello,

Is there any way to copy the rgw bucket index to another Ceph node to lower the downtime of RGW? For now I have a huge bucket with 200 million files, and its backfilling blocks RGW completely for an hour and a half even with a 10G network.

Thanks!