You need to tell each radosgw daemon which zone to use.  In ceph.conf, I
have:
[client.radosgw.ceph3c]
  host = ceph3c
  rgw socket path = /var/run/ceph/radosgw.ceph3c
  keyring = /etc/ceph/ceph.client.radosgw.ceph3c.keyring
  log file = /var/log/ceph/radosgw.log
  admin socket = /var/run/ceph/radosgw.asok
  rgw dns name = us-central-1.ceph.cdlocal
  rgw region = us
  rgw region root pool = .us.rgw.root
  rgw zone = us-central-1
  rgw zone root pool = .us-central-1.rgw.root
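Each daemon then gets started under its own client name; a rough
sketch, assuming the section above (adjust paths and init scripts to
your setup):

  # start this instance under its client name
  radosgw -n client.radosgw.ceph3c

  # confirm which zone that name resolves to
  radosgw-admin zone get --name client.radosgw.ceph3c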




On Thu, Nov 6, 2014 at 6:35 AM, Marco Garcês <[email protected]> wrote:

> Update:
>
> I was able to fix the authentication error, and I have 2 radosgw
> running on the same host.
> The problem now is that I believe I have created the zone wrong, or I
> am doing something wrong, because I can log in with the user I had
> before and access his buckets. I need to have everything separated.
>
> Here is my zone info:
>
> default zone:
> { "domain_root": ".rgw",
>   "control_pool": ".rgw.control",
>   "gc_pool": ".rgw.gc",
>   "log_pool": ".log",
>   "intent_log_pool": ".intent-log",
>   "usage_log_pool": ".usage",
>   "user_keys_pool": ".users",
>   "user_email_pool": ".users.email",
>   "user_swift_pool": ".users.swift",
>   "user_uid_pool": ".users.uid",
>   "system_key": { "access_key": "",
>       "secret_key": ""},
>   "placement_pools": [
>         { "key": "default-placement",
>           "val": { "index_pool": ".rgw.buckets.index",
>               "data_pool": ".rgw.buckets",
>               "data_extra_pool": ".rgw.buckets.extra"}}]}
>
> env2 zone:
> { "domain_root": ".rgw",
>   "control_pool": ".rgw.control",
>   "gc_pool": ".rgw.gc",
>   "log_pool": ".log",
>   "intent_log_pool": ".intent-log",
>   "usage_log_pool": ".usage",
>   "user_keys_pool": ".users",
>   "user_email_pool": ".users.email",
>   "user_swift_pool": ".users.swift",
>   "user_uid_pool": ".users.uid",
>   "system_key": { "access_key": "",
>       "secret_key": ""},
>   "placement_pools": [
>         { "key": "default-placement",
>           "val": { "index_pool": ".rgw.buckets.index",
>               "data_pool": ".rgw.buckets",
>               "data_extra_pool": ".rgw.buckets.extra"}}]}
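>
> Looking at these dumps again, both zones point at exactly the same
> pools, which would explain why the old user and his buckets show up
> in env2: the two zones share all their storage. I suppose env2 needs
> its own pools; a sketch of what I mean (the ".env2" pool names are
> just invented):
>
> { "domain_root": ".env2.rgw",
>   "control_pool": ".env2.rgw.control",
>   "gc_pool": ".env2.rgw.gc",
>   "log_pool": ".env2.log",
>   "intent_log_pool": ".env2.intent-log",
>   "usage_log_pool": ".env2.usage",
>   "user_keys_pool": ".env2.users",
>   "user_email_pool": ".env2.users.email",
>   "user_swift_pool": ".env2.users.swift",
>   "user_uid_pool": ".env2.users.uid",
>   "system_key": { "access_key": "",
>       "secret_key": ""},
>   "placement_pools": [
>         { "key": "default-placement",
>           "val": { "index_pool": ".env2.rgw.buckets.index",
>               "data_pool": ".env2.rgw.buckets",
>               "data_extra_pool": ".env2.rgw.buckets.extra"}}]}
>
> loaded with something like:
>
> radosgw-admin zone set --rgw-zone=env2 --infile env2.json \
>     --name client.radosgw.gw.env2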
>
> Could you guys help me?
>
>
>
> Marco Garcês
>
>
> On Thu, Nov 6, 2014 at 3:56 PM, Marco Garcês <[email protected]> wrote:
> > By the way, is it possible to run two radosgw instances on the same host?
> >
> > I think I have created the zone, but I am not sure it is correct,
> > because it used the default pool names even though I had changed
> > them in the JSON file I provided.
> >
> > Now I am trying to run ceph-radosgw with two different entries in
> > the ceph.conf file, but without success. Example:
> >
> > [client.radosgw.gw]
> > host = GATEWAY
> > keyring = /etc/ceph/keyring.radosgw.gw
> > rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
> > log file = /var/log/ceph/client.radosgw.gateway.log
> > rgw print continue = false
> > rgw dns name = gateway.local
> > rgw enable ops log = false
> > rgw enable usage log = true
> > rgw usage log tick interval = 30
> > rgw usage log flush threshold = 1024
> > rgw usage max shards = 32
> > rgw usage max user shards = 1
> > rgw cache lru size = 15000
> > rgw thread pool size = 2048
> >
> > #[client.radosgw.gw.env2]
> > #host = GATEWAY
> > #keyring = /etc/ceph/keyring.radosgw.gw
> > #rgw socket path = /var/run/ceph/ceph.env2.radosgw.gateway.fastcgi.sock
> > #log file = /var/log/ceph/client.env2.radosgw.gateway.log
> > #rgw print continue = false
> > #rgw dns name = cephppr.local
> > #rgw enable ops log = false
> > #rgw enable usage log = true
> > #rgw usage log tick interval = 30
> > #rgw usage log flush threshold = 1024
> > #rgw usage max shards = 32
> > #rgw usage max user shards = 1
> > #rgw cache lru size = 15000
> > #rgw thread pool size = 2048
> > #rgw zone = ppr
> >
> > It fails to create the socket:
> > 2014-11-06 15:39:08.862364 7f80cc670880  0 ceph version 0.80.5
> > (38b73c67d375a2552d8ed67843c8a65c2c0feba6), process radosgw, pid 7930
> > 2014-11-06 15:39:08.870429 7f80cc670880  0 librados:
> > client.radosgw.gw.env2 authentication error (1) Operation not
> > permitted
> > 2014-11-06 15:39:08.870889 7f80cc670880 -1 Couldn't init storage
> > provider (RADOS)
> >
> >
> > What am I doing wrong?
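> >
> > I wonder if client.radosgw.gw.env2 simply has no cephx key of its
> > own; the keyring I point it at only contains the key for
> > client.radosgw.gw. Maybe the second instance needs its own key
> > first (the caps here just mirror the usual radosgw setup, so treat
> > them as an assumption):
> >
> > ceph auth get-or-create client.radosgw.gw.env2 \
> >     mon 'allow rwx' osd 'allow rwx' \
> >     -o /etc/ceph/keyring.radosgw.gw.env2
> >
> > with the env2 section's keyring line pointed at that file.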
> >
> > Marco Garcês
> > #sysadmin
> > Maputo - Mozambique
> > [Skype] marcogarces
> >
> >
> > On Thu, Nov 6, 2014 at 10:11 AM, Marco Garcês <[email protected]> wrote:
> >> Your solution of prepending the environment name to the bucket was
> >> my first choice, but at the moment I can't ask the devs to change
> >> the code to do that. For now I have to stick with the zones solution.
> >> Should I follow the federated zones docs
> >> (http://ceph.com/docs/master/radosgw/federated-config/) but skip the
> >> sync step?
> >>
> >> Thank you,
> >>
> >> Marco Garcês
> >>
> >> On Wed, Nov 5, 2014 at 8:13 PM, Craig Lewis <[email protected]> wrote:
> >>> You could setup dedicated zones for each environment, and not
> >>> replicate between them.
> >>>
> >>> Each zone would have its own URL, but you would be able to re-use
> >>> usernames and bucket names.  If different URLs are a problem, you
> >>> might be able to get around that in the load balancer or the web
> >>> servers.  I wouldn't really recommend that, but it's possible.
> >>>
> >>>
> >>> I have a similar requirement.  I was able to prepend the
> >>> environment name to the bucket names in my client code, which made
> >>> things much easier.
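> >>>
> >>> Roughly like this (the prefix variable, bucket name, and s3cmd are
> >>> just one way to sketch the idea, not what my code literally does):
> >>>
> >>> ENV_PREFIX=qual   # "ppr" in the other environment
> >>> s3cmd mb "s3://${ENV_PREFIX}-app-data"
> >>> s3cmd put report.csv "s3://${ENV_PREFIX}-app-data/"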
> >>>
> >>>
> >>> On Wed, Nov 5, 2014 at 8:52 AM, Marco Garcês <[email protected]> wrote:
> >>>> Hi there,
> >>>>
> >>>> I have this situation, where I'm using the same Ceph cluster (with
> >>>> radosgw), for two different environments, QUAL and PRE-PRODUCTION.
> >>>>
> >>>> I need different users for each environment, but I need to create the
> >>>> same buckets, with the same name; I understand there is no way to have
> >>>> 2 buckets with the same name, but how can I get around this? Perhaps
> >>>> creating a different pool for each user?
> >>>>
> >>>> Can you help me? Thank you in advance, my best regards,
> >>>>
> >>>> Marco Garcês