On Thu, Oct 8, 2015 at 1:55 PM, Christian Sarrasin
<c.n...@cleansafecloud.com> wrote:
> After discovering this excellent blog post [1], I thought that taking
> advantage of users' "default_placement" feature would be a preferable way to
> achieve my multi-tenancy requirements (see previous post).
>
> Alas, I seem to be hitting a snag. Any attempt to create a bucket as a user
> set up with a non-empty default_placement results in a 400 error returned to
> the client, and the following message in the radosgw logs:
>
> "could not find placement rule placement-user2 within region"
>
> (The pools exist; I reloaded the radosgw service and ran 'radosgw-admin
> regionmap update', as suggested in the blog post, before running the client
> test.)
>
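> For reference, the "client test" is just creating a bucket as user2, e.g.
> with s3cmd (the endpoint below is made up, and flags vary between s3cmd
> versions):
>
>     s3cmd --access_key=VYM2EEU1X5H6Y82D0K4F \
>           --secret_key=vEeJ9+yadvtqZrb2xoCAEuM2AlVyZ7UTArbfIEek \
>           --host=rgw.example.com --host-bucket='%(bucket)s.rgw.example.com' \
>           mb s3://bucket-user2
>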
> Here's the setup.  What am I doing wrong?  Any insight is really
> appreciated!

Not sure. Did you run 'radosgw-admin regionmap update'?
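
If so, it's worth verifying that the update actually took: the gateway
resolves placement targets from the region map, so something like

    radosgw-admin regionmap get

should list placement-user2 under placement_targets. You may also need to
restart radosgw afterwards so it picks up the new map.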

>
> radosgw-admin region get
> { "name": "default",
>   "api_name": "",
>   "is_master": "true",
>   "endpoints": [],
>   "master_zone": "",
>   "zones": [
>         { "name": "default",
>           "endpoints": [],
>           "log_meta": "false",
>           "log_data": "false"}],
>   "placement_targets": [
>         { "name": "default-placement",
>           "tags": []},
>         { "name": "placement-user2",
>           "tags": []}],
>   "default_placement": "default-placement"}
>
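> (For completeness, the placement target above was added by round-tripping
> the region config, as the blog post describes; the temp file name is mine:
>
>     radosgw-admin region get > /tmp/region.json
>     # add the "placement-user2" entry to placement_targets in /tmp/region.json
>     radosgw-admin region set < /tmp/region.json
>     radosgw-admin regionmap update
> )
>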
> radosgw-admin zone get default
> { "domain_root": ".rgw",
>   "control_pool": ".rgw.control",
>   "gc_pool": ".rgw.gc",
>   "log_pool": ".log",
>   "intent_log_pool": ".intent-log",
>   "usage_log_pool": ".usage",
>   "user_keys_pool": ".users",
>   "user_email_pool": ".users.email",
>   "user_swift_pool": ".users.swift",
>   "user_uid_pool": ".users.uid",
>   "system_key": { "access_key": "",
>       "secret_key": ""},
>   "placement_pools": [
>         { "key": "default-placement",
>           "val": { "index_pool": ".rgw.buckets.index",
>               "data_pool": ".rgw.buckets",
>               "data_extra_pool": ".rgw.buckets.extra"}},
>         { "key": "placement-user2",
>           "val": { "index_pool": ".rgw.index.user2",
>               "data_pool": ".rgw.buckets.user2",
>               "data_extra_pool": ".rgw.buckets.extra"}}]}
>
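> (The per-tenant pools referenced above were created up front, and the zone
> edited the same way; the pg counts are just what I happened to pick:
>
>     ceph osd pool create .rgw.buckets.user2 8
>     ceph osd pool create .rgw.index.user2 8
>     radosgw-admin zone get --rgw-zone=default > /tmp/zone.json
>     # add the "placement-user2" entry to placement_pools in /tmp/zone.json
>     radosgw-admin zone set --rgw-zone=default < /tmp/zone.json
> )
>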
> radosgw-admin user info --uid=user2
> { "user_id": "user2",
>   "display_name": "User2",
>   "email": "",
>   "suspended": 0,
>   "max_buckets": 1000,
>   "auid": 0,
>   "subusers": [],
>   "keys": [
>         { "user": "user2",
>           "access_key": "VYM2EEU1X5H6Y82D0K4F",
>           "secret_key": "vEeJ9+yadvtqZrb2xoCAEuM2AlVyZ7UTArbfIEek"}],
>   "swift_keys": [],
>   "caps": [],
>   "op_mask": "read, write, delete",
>   "default_placement": "placement-user2",
>   "placement_tags": [],
>   "bucket_quota": { "enabled": false,
>       "max_size_kb": -1,
>       "max_objects": -1},
>   "user_quota": { "enabled": false,
>       "max_size_kb": -1,
>       "max_objects": -1},
>   "temp_url_keys": []}
>
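> (default_placement was set on the user by round-tripping its metadata, as
> the blog post suggests; the temp file name is mine:
>
>     radosgw-admin metadata get user:user2 > /tmp/user2.json
>     # set "default_placement": "placement-user2" in /tmp/user2.json
>     radosgw-admin metadata put user:user2 < /tmp/user2.json
> )
>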
> [1] http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw
>
>
> On 03/10/15 19:48, Christian Sarrasin wrote:
>>
>> What are the best options to set up the Ceph radosgw so that it supports
>> separate/independent "tenants"? What I'm after:
>>
>> 1. Ensure isolation between tenants, i.e. no overlap/conflict in the bucket
>> namespace; something that separate radosgw "users" don't achieve
>> 2. Ability to back up/restore tenants' pools individually (a rough sketch
>> of what I mean follows below)
>>
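>> For point 2, the crudest thing I can think of (a sketch only; the pool
>> names are hypothetical) would be copying each tenant's data pool aside
>> with the stock rados tool, after pre-creating the destination pool:
>>
>>     ceph osd pool create .rgw.buckets.tenant1.bak 8
>>     rados cppool .rgw.buckets.tenant1 .rgw.buckets.tenant1.bak
>>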
>> Referring to the docs [1], it seems this could be achieved with zones: one
>> zone per tenant, with synchronization left out. That seems a little
>> heavy-handed, though, and presumably the overhead is non-negligible.
>>
>> Is this "supported"? Is there a better way?
>>
>> I'm running Firefly. I'm also rather new to Ceph, so apologies if this is
>> already covered somewhere; kindly send pointers if so...
>>
>> Cheers,
>> Christian
>>
>> PS: cross-posted from [2]
>>
>> [1] http://docs.ceph.com/docs/v0.80/radosgw/federated-config/
>> [2] http://serverfault.com/questions/726491/how-to-setup-ceph-radosgw-to-support-multi-tenancy
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
