Hi Orit,

Many thanks. I will try that over the weekend and let you know.

Are you sure removing the pool will not destroy my data, user info and buckets?
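
Based on your mkpool/cppool hint, this is roughly what I plan to run to back the
pool up before removing it (with a throwaway backup pool name; please shout if
the commands or names are wrong):

rados mkpool .rgw.root.backup
rados cppool .rgw.root .rgw.root.backup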

Thanks

----- Original Message -----
> From: "Orit Wasserman" <owass...@redhat.com>
> To: "andrei" <and...@arhont.com>
> Cc: "Yoann Moulin" <yoann.mou...@epfl.ch>, "ceph-users" 
> <ceph-users@lists.ceph.com>
> Sent: Friday, 11 November, 2016 11:24:51
> Subject: Re: [ceph-users] radosgw - http status 400 while creating a bucket

> I have a workaround:
> 
> 1. Use the zonegroup and zone jsons you have from before (default-zg.json
> and default-zone.json)
> 2. Make sure the realm id in the jsons is ""
> 3. Stop the gateways
> 4. Remove the .rgw.root pool (you can back it up first, if you want, using the
> mkpool and cppool commands):
>    rados rmpool .rgw.root .rgw.root --yes-i-really-really-mean-it
> 5. radosgw-admin realm create --rgw-realm=myrealm
> 6. radosgw-admin zonegroup set --rgw-zonegroup=default --default < default-zg.json
> 7. radosgw-admin zone set --rgw-zone=default --default < default-zone.json
> 8. radosgw-admin period update --commit
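> 
> (A quick way to verify step 2, assuming the filenames above; adjust if yours
> differ:
>    grep '"realm_id"' default-zg.json default-zone.json
> Both lines should show "realm_id": "".)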
> 
> Good luck,
> Orit
> 
> On Thu, Nov 10, 2016 at 7:08 PM, Andrei Mikhailovsky <and...@arhont.com> 
> wrote:
>>
>> Orit, here is the output:
>>
>> root@arh-ibstorage2-ib:~# rados ls -p .rgw.root
>> region_map
>> default.zone.5b41b1b2-0f92-463d-b582-07552f83e66c
>> realms.5b41b1b2-0f92-463d-b582-07552f83e66c
>> zonegroups_names.default
>> zone_names.default
>> periods.a9543371-a073-4d73-ab6d-0f54991c7ad9.1
>> realms_names.default
>> realms_names.london-ldex
>> realms.f08592f2-5d53-4701-a895-b780b16b5374
>> periods.286475fa-625b-4fdb-97bf-dcec4b437960.latest_epoch
>> periods.5b41b1b2-0f92-463d-b582-07552f83e66c:staging
>> periods.286475fa-625b-4fdb-97bf-dcec4b437960.1
>> default.realm
>> default.zonegroup.5b41b1b2-0f92-463d-b582-07552f83e66c
>> periods.a9543371-a073-4d73-ab6d-0f54991c7ad9.latest_epoch
>> periods.5b41b1b2-0f92-463d-b582-07552f83e66c:staging.latest_epoch
>> realms.f08592f2-5d53-4701-a895-b780b16b5374.control
>> zone_info.default
>> zonegroup_info.default
>> realms.5b41b1b2-0f92-463d-b582-07552f83e66c.control
>>
>>
>> Thanks
>>
>> ----- Original Message -----
>>> From: "Orit Wasserman" <owass...@redhat.com>
>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>> Cc: "Yoann Moulin" <yoann.mou...@epfl.ch>, "ceph-users"
>>> <ceph-users@lists.ceph.com>
>>> Sent: Thursday, 10 November, 2016 15:22:16
>>> Subject: Re: [ceph-users] radosgw - http status 400 while creating a bucket
>>
>>> On Thu, Nov 10, 2016 at 3:32 PM, Andrei Mikhailovsky <and...@arhont.com> 
>>> wrote:
>>>> Orit, true.
>>>>
>>>> Yeah, all my servers are running 10.2.3-1xenial or 10.2.3-1trusty. I have a
>>>> small cluster and I always update all servers at once.
>>>>
>>>> I don't have any Hammer releases of ceph anywhere on the network.
>>>>
>>>
>>> Can you run: rados ls -p .rgw.root?
>>>
>>>> Is 10.2.4 out already? I didn't see an updated package for it.
>>>>
>>>
>>> It should be out soon
>>>
>>>> Thanks
>>>>
>>>> Andrei
>>>>
>>>> ----- Original Message -----
>>>>> From: "Orit Wasserman" <owass...@redhat.com>
>>>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>>>> Cc: "Yoann Moulin" <yoann.mou...@epfl.ch>, "ceph-users"
>>>>> <ceph-users@lists.ceph.com>
>>>>> Sent: Thursday, 10 November, 2016 13:58:32
>>>>> Subject: Re: [ceph-users] radosgw - http status 400 while creating a 
>>>>> bucket
>>>>
>>>>> On Thu, Nov 10, 2016 at 2:55 PM, Andrei Mikhailovsky <and...@arhont.com> 
>>>>> wrote:
>>>>>> Orit,
>>>>>>
>>>>>> Here is what I've done just now:
>>>>>>
>>>>>> root@arh-ibstorage1-ib:~# service ceph-radosgw@radosgw.gateway stop
>>>>>>
>>>>>> (the above command was run on both radosgw servers). I checked with ps and no
>>>>>> radosgw services were running. After that I ran:
>>>>>>
>>>>>>
>>>>>>
>>>>>> root@arh-ibstorage1-ib:~# ./ceph-zones-fix.sh
>>>>>> + RADOSGW_ADMIN=radosgw-admin
>>>>>> + echo Exercise initialization code
>>>>>> Exercise initialization code
>>>>>> + radosgw-admin user info --uid=foo
>>>>>> could not fetch user info: no user info saved
>>>>>> + echo Get default zonegroup
>>>>>> Get default zonegroup
>>>>>> + radosgw-admin zonegroup get --rgw-zonegroup=default
>>>>>> + sed s/"id":.*/"id": "default",/g
>>>>>> + sed s/"master_zone.*/"master_zone": "default",/g
>>>>>> + echo Get default zone
>>>>>> Get default zone
>>>>>> + radosgw-admin zone get --zone-id=default
>>>>>> + echo Creating realm
>>>>>> Creating realm
>>>>>> + radosgw-admin realm create --rgw-realm=london-ldex
>>>>>> ERROR: couldn't create realm london-ldex: (17) File exists
>>>>>> 2016-11-10 13:44:48.872839 7f87a13d9a00  0 ERROR creating new realm 
>>>>>> object
>>>>>> london-ldex: (17) File exists
>>>>>> + echo Creating default zonegroup
>>>>>> Creating default zonegroup
>>>>>> + radosgw-admin zonegroup set --rgw-zonegroup=default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "api_name": "",
>>>>>>     "is_master": "true",
>>>>>>     "endpoints": [],
>>>>>>     "hostnames": [],
>>>>>>     "hostnames_s3website": [],
>>>>>>     "master_zone": "default",
>>>>>>     "zones": [
>>>>>>         {
>>>>>>             "id": "default",
>>>>>>             "name": "default",
>>>>>>             "endpoints": [],
>>>>>>             "log_meta": "false",
>>>>>>             "log_data": "false",
>>>>>>             "bucket_index_max_shards": 0,
>>>>>>             "read_only": "false"
>>>>>>         }
>>>>>>     ],
>>>>>>     "placement_targets": [
>>>>>>         {
>>>>>>             "name": "default-placement",
>>>>>>             "tags": []
>>>>>>         }
>>>>>>     ],
>>>>>>     "default_placement": "default-placement",
>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>> }
>>>>>> + echo Creating default zone
>>>>>> Creating default zone
>>>>>> + radosgw-admin zone set --rgw-zone=default
>>>>>> zone id default{
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "domain_root": ".rgw",
>>>>>>     "control_pool": ".rgw.control",
>>>>>>     "gc_pool": ".rgw.gc",
>>>>>>     "log_pool": ".log",
>>>>>>     "intent_log_pool": ".intent-log",
>>>>>>     "usage_log_pool": ".usage",
>>>>>>     "user_keys_pool": ".users",
>>>>>>     "user_email_pool": ".users.email",
>>>>>>     "user_swift_pool": ".users.swift",
>>>>>>     "user_uid_pool": ".users.uid",
>>>>>>     "system_key": {
>>>>>>         "access_key": "",
>>>>>>         "secret_key": ""
>>>>>>     },
>>>>>>     "placement_pools": [
>>>>>>         {
>>>>>>             "key": "default-placement",
>>>>>>             "val": {
>>>>>>                 "index_pool": ".rgw.buckets.index",
>>>>>>                 "data_pool": ".rgw.buckets",
>>>>>>                 "data_extra_pool": "default.rgw.buckets.non-ec",
>>>>>>                 "index_type": 0
>>>>>>             }
>>>>>>         }
>>>>>>     ],
>>>>>>     "metadata_heap": ".rgw.meta",
>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>> }
>>>>>> + echo Setting default zonegroup to 'default'
>>>>>> Setting default zonegroup to 'default'
>>>>>> + radosgw-admin zonegroup default --rgw-zonegroup=default
>>>>>> + echo Setting default zone to 'default'
>>>>>> Setting default zone to 'default'
>>>>>> + radosgw-admin zone default --rgw-zone=default
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> After that I ran the following to make sure the details had been
>>>>>> updated:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin zone get --rgw-zone=default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "domain_root": ".rgw",
>>>>>>     "control_pool": ".rgw.control",
>>>>>>     "gc_pool": ".rgw.gc",
>>>>>>     "log_pool": ".log",
>>>>>>     "intent_log_pool": ".intent-log",
>>>>>>     "usage_log_pool": ".usage",
>>>>>>     "user_keys_pool": ".users",
>>>>>>     "user_email_pool": ".users.email",
>>>>>>     "user_swift_pool": ".users.swift",
>>>>>>     "user_uid_pool": ".users.uid",
>>>>>>     "system_key": {
>>>>>>         "access_key": "",
>>>>>>         "secret_key": ""
>>>>>>     },
>>>>>>     "placement_pools": [
>>>>>>         {
>>>>>>             "key": "default-placement",
>>>>>>             "val": {
>>>>>>                 "index_pool": ".rgw.buckets.index",
>>>>>>                 "data_pool": ".rgw.buckets",
>>>>>>                 "data_extra_pool": "default.rgw.buckets.non-ec",
>>>>>>                 "index_type": 0
>>>>>>             }
>>>>>>         }
>>>>>>     ],
>>>>>>     "metadata_heap": ".rgw.meta",
>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>> }
>>>>>>
>>>>>>
>>>>>>
>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get 
>>>>>> --rgw-zonegroup=default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "api_name": "",
>>>>>>     "is_master": "true",
>>>>>>     "endpoints": [],
>>>>>>     "hostnames": [],
>>>>>>     "hostnames_s3website": [],
>>>>>>     "master_zone": "default",
>>>>>>     "zones": [
>>>>>>         {
>>>>>>             "id": "default",
>>>>>>             "name": "default",
>>>>>>             "endpoints": [],
>>>>>>             "log_meta": "false",
>>>>>>             "log_data": "false",
>>>>>>             "bucket_index_max_shards": 0,
>>>>>>             "read_only": "false"
>>>>>>         }
>>>>>>     ],
>>>>>>     "placement_targets": [
>>>>>>         {
>>>>>>             "name": "default-placement",
>>>>>>             "tags": []
>>>>>>         }
>>>>>>     ],
>>>>>>     "default_placement": "default-placement",
>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>> }
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> As far as I can see, the master_zone is now set to default.
>>>>>>
>>>>>> Now I start the radosgw service:
>>>>>>
>>>>>>
>>>>>> root@arh-ibstorage1-ib:~# service ceph-radosgw@radosgw.gateway start
>>>>>> root@arh-ibstorage1-ib:~#
>>>>>> root@arh-ibstorage1-ib:~#
>>>>>> root@arh-ibstorage1-ib:~#
>>>>>> root@arh-ibstorage1-ib:~#
>>>>>> root@arh-ibstorage1-ib:~#
>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin zone get --rgw-zone=default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "domain_root": ".rgw",
>>>>>>     "control_pool": ".rgw.control",
>>>>>>     "gc_pool": ".rgw.gc",
>>>>>>     "log_pool": ".log",
>>>>>>     "intent_log_pool": ".intent-log",
>>>>>>     "usage_log_pool": ".usage",
>>>>>>     "user_keys_pool": ".users",
>>>>>>     "user_email_pool": ".users.email",
>>>>>>     "user_swift_pool": ".users.swift",
>>>>>>     "user_uid_pool": ".users.uid",
>>>>>>     "system_key": {
>>>>>>         "access_key": "",
>>>>>>         "secret_key": ""
>>>>>>     },
>>>>>>     "placement_pools": [
>>>>>>         {
>>>>>>             "key": "default-placement",
>>>>>>             "val": {
>>>>>>                 "index_pool": ".rgw.buckets.index",
>>>>>>                 "data_pool": ".rgw.buckets",
>>>>>>                 "data_extra_pool": "default.rgw.buckets.non-ec",
>>>>>>                 "index_type": 0
>>>>>>             }
>>>>>>         }
>>>>>>     ],
>>>>>>     "metadata_heap": ".rgw.meta",
>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>>
>>>>>>
>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get 
>>>>>> --rgw-zonegroup=default
>>>>>> {
>>>>>>     "id": "default",
>>>>>>     "name": "default",
>>>>>>     "api_name": "",
>>>>>>     "is_master": "true",
>>>>>>     "endpoints": [],
>>>>>>     "hostnames": [],
>>>>>>     "hostnames_s3website": [],
>>>>>>     "master_zone": "",
>>>>>>     "zones": [
>>>>>>         {
>>>>>>             "id": "default",
>>>>>>             "name": "default",
>>>>>>             "endpoints": [],
>>>>>>             "log_meta": "false",
>>>>>>             "log_data": "false",
>>>>>>             "bucket_index_max_shards": 0,
>>>>>>             "read_only": "false"
>>>>>>         }
>>>>>>     ],
>>>>>>     "placement_targets": [
>>>>>>         {
>>>>>>             "name": "default-placement",
>>>>>>             "tags": []
>>>>>>         }
>>>>>>     ],
>>>>>>     "default_placement": "default-placement",
>>>>>>     "realm_id": ""
>>>>>> }
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> As you can see from the above, the realm_id and the master_zone have
>>>>>> reverted to being blank! Crazy!!!
>>>>>>
>>>>>
>>>>> Sad, not crazy :(
>>>>> Are both radosgw daemons and all the radosgw-admin binaries from Jewel?
>>>>> I suspect you are hitting http://tracker.ceph.com/issues/17371 (it also
>>>>> applies to Infernalis).
>>>>> It was fixed in 10.2.4 ...
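>>>>>
>>>>> You can double-check what is installed on each host with something like
>>>>> (assuming the Ubuntu packages you mentioned):
>>>>>
>>>>>   radosgw-admin --version
>>>>>   dpkg -l | grep -E 'radosgw|ceph-common'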
>>>>>
>>>>>>
>>>>>> Andrei
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> ----- Original Message -----
>>>>>>> From: "Orit Wasserman" <owass...@redhat.com>
>>>>>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>>>>>> Cc: "Yoann Moulin" <yoann.mou...@epfl.ch>, "ceph-users"
>>>>>>> <ceph-users@lists.ceph.com>
>>>>>>> Sent: Thursday, 10 November, 2016 13:36:01
>>>>>>> Subject: Re: [ceph-users] radosgw - http status 400 while creating a 
>>>>>>> bucket
>>>>>>
>>>>>>> On Thu, Nov 10, 2016 at 2:24 PM, Andrei Mikhailovsky 
>>>>>>> <and...@arhont.com> wrote:
>>>>>>>>
>>>>>>>> Hi Orit,
>>>>>>>>
>>>>>>>> Thanks for the links.
>>>>>>>>
>>>>>>>> I've had a look at the link that you've sent
>>>>>>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
>>>>>>>>  and
>>>>>>>> followed the instructions. Created the script as depicted in the 
>>>>>>>> email. Changed
>>>>>>>> the realm name to something relevant. The script ran without any 
>>>>>>>> errors. I've
>>>>>>>> restarted radosgw services on both servers, but I am still unable to 
>>>>>>>> create
>>>>>>>> buckets. I am getting exactly the same error from the client:
>>>>>>>>
>>>>>>>
>>>>>>> So you have two radosgw running on different servers, on the same zone
>>>>>>> (default) and on the same rados cluster?
>>>>>>> Are both gateways running the same version?
>>>>>>> Did you shut down both gateways when you ran the script?
>>>>>>>
>>>>>>>>
>>>>>>>> S3ResponseError: 400 Bad Request
>>>>>>>> <?xml version="1.0"
>>>>>>>> encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>my-new-bucket-31337</BucketName><RequestId>tx000000000000000000003-00582472a4-995ee8c-default</RequestId><HostId>995ee8c-default-default</HostId></Error>
>>>>>>>>
>>>>>>>>
>>>>>>>> I can delete a bucket, but I can't create a new one.
>>>>>>>>
>>>>>>>>
>>>>>>>> What I did notice is that after running the script and getting the zonegroup
>>>>>>>> info, I see that both the master_zone and the realm_id fields are set:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>     "master_zone": "default",
>>>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>>>>
>>>>>>>>
>>>>>>>> However, after I restart radosgw service, they go back to being blank:
>>>>>>>>
>>>>>>>>
>>>>>>>>     "master_zone": "",
>>>>>>>>     "realm_id": ""
>>>>>>>>
>>>>>>>>
>>>>>>>> In any case, creation of buckets doesn't work either way.
>>>>>>>>
>>>>>>>> Cheers
>>>>>>>>
>>>>>>>>
>>>>>>>> Andrei
>>>>>>>>
>>>>>>>>
>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "Orit Wasserman" <owass...@redhat.com>
>>>>>>>>> To: "Yoann Moulin" <yoann.mou...@epfl.ch>
>>>>>>>>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>>>> Sent: Thursday, 10 November, 2016 10:04:46
>>>>>>>>> Subject: Re: [ceph-users] radosgw - http status 400 while creating a 
>>>>>>>>> bucket
>>>>>>>>
>>>>>>>>> On Wed, Nov 9, 2016 at 10:20 PM, Yoann Moulin <yoann.mou...@epfl.ch> 
>>>>>>>>> wrote:
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>>> Many thanks for your help. I've tried setting the zone to master, followed by
>>>>>>>>>>> the period update --commit command. This is what I got:
>>>>>>>>>>
>>>>>>>>>> Maybe it's related to this issue:
>>>>>>>>>>
>>>>>>>>>> http://tracker.ceph.com/issues/16839 (fixed in Jewel 10.2.3)
>>>>>>>>>>
>>>>>>>>>> or this one:
>>>>>>>>>>
>>>>>>>>>> http://tracker.ceph.com/issues/17239
>>>>>>>>>>
>>>>>>>>>> The "id" of the zonegroup shouldn't be "default" but a UUID, AFAIK.
>>>>>>>>>>
>>>>>>>>>> Best regards
>>>>>>>>>>
>>>>>>>>>> Yoann Moulin
>>>>>>>>>>
>>>>>>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get 
>>>>>>>>>>> --rgw-zonegroup=default
>>>>>>>>>>> {
>>>>>>>>>>>     "id": "default",
>>>>>>>>>>>     "name": "default",
>>>>>>>>>>>     "api_name": "",
>>>>>>>>>>>     "is_master": "true",
>>>>>>>>>>>     "endpoints": [],
>>>>>>>>>>>     "hostnames": [],
>>>>>>>>>>>     "hostnames_s3website": [],
>>>>>>>>>>>     "master_zone": "default",
>>>>>>>>>>>     "zones": [
>>>>>>>>>>>         {
>>>>>>>>>>>             "id": "default",
>>>>>>>>>>>             "name": "default",
>>>>>>>>>>>             "endpoints": [],
>>>>>>>>>>>             "log_meta": "false",
>>>>>>>>>>>             "log_data": "false",
>>>>>>>>>>>             "bucket_index_max_shards": 0,
>>>>>>>>>>>             "read_only": "false"
>>>>>>>>>>>         }
>>>>>>>>>>>     ],
>>>>>>>>>>>     "placement_targets": [
>>>>>>>>>>>         {
>>>>>>>>>>>             "name": "default-placement",
>>>>>>>>>>>             "tags": []
>>>>>>>>>>>         }
>>>>>>>>>>>     ],
>>>>>>>>>>>     "default_placement": "default-placement",
>>>>>>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin period update --commit
>>>>>>>>>>> cannot commit period: period does not have a master zone of a 
>>>>>>>>>>> master zonegroup
>>>>>>>>>>> failed to commit period: (22) Invalid argument
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> This is issue http://tracker.ceph.com/issues/17364 (I am working on it
>>>>>>>>> at the moment).
>>>>>>>>>
>>>>>>>>> Do the same procedure as before without the period update --commit command;
>>>>>>>>> it should fix the master zone problem.
>>>>>>>>> see also
>>>>>>>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html
>>>>>>>>>
>>>>>>>>> This looks like an upgraded system (the id equals the name after an 
>>>>>>>>> upgrade).
>>>>>>>>>
>>>>>>>>> Orit
>>>>>>>>>
>>>>>>>>>>> root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get 
>>>>>>>>>>> --rgw-zonegroup=default
>>>>>>>>>>> {
>>>>>>>>>>>     "id": "default",
>>>>>>>>>>>     "name": "default",
>>>>>>>>>>>     "api_name": "",
>>>>>>>>>>>     "is_master": "true",
>>>>>>>>>>>     "endpoints": [],
>>>>>>>>>>>     "hostnames": [],
>>>>>>>>>>>     "hostnames_s3website": [],
>>>>>>>>>>>     "master_zone": "",
>>>>>>>>>>>     "zones": [
>>>>>>>>>>>         {
>>>>>>>>>>>             "id": "default",
>>>>>>>>>>>             "name": "default",
>>>>>>>>>>>             "endpoints": [],
>>>>>>>>>>>             "log_meta": "false",
>>>>>>>>>>>             "log_data": "false",
>>>>>>>>>>>             "bucket_index_max_shards": 0,
>>>>>>>>>>>             "read_only": "false"
>>>>>>>>>>>         }
>>>>>>>>>>>     ],
>>>>>>>>>>>     "placement_targets": [
>>>>>>>>>>>         {
>>>>>>>>>>>             "name": "default-placement",
>>>>>>>>>>>             "tags": []
>>>>>>>>>>>         }
>>>>>>>>>>>     ],
>>>>>>>>>>>     "default_placement": "default-placement",
>>>>>>>>>>>     "realm_id": ""
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> The strange thing, as you can see, is that following the "radosgw-admin
>>>>>>>>>>> period update --commit" command, the master_zone and the realm_id values
>>>>>>>>>>> reset to blank. What could be causing this?
>>>>>>>>>>>
>>>>>>>>>>> Here is my ceph infrastructure setup; perhaps it will help with
>>>>>>>>>>> finding the issue:
>>>>>>>>>>>
>>>>>>>>>>> ceph osd and mon servers:
>>>>>>>>>>> arh-ibstorage1-ib (also radosgw server)
>>>>>>>>>>> arh-ibstorage2-ib (also radosgw server)
>>>>>>>>>>> arh-ibstorage3-ib
>>>>>>>>>>>
>>>>>>>>>>> ceph mon server:
>>>>>>>>>>> arh-cloud13-ib
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thus, overall, I have 4 mon servers, 3 osd servers and 2 radosgw
>>>>>>>>>>> servers.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>> From: "Yehuda Sadeh-Weinraub" <yeh...@redhat.com>
>>>>>>>>>>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>>>>>>>>>>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>>>>>>> Sent: Wednesday, 9 November, 2016 17:12:30
>>>>>>>>>>>> Subject: Re: [ceph-users] radosgw - http status 400 while creating 
>>>>>>>>>>>> a bucket
>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 9, 2016 at 1:30 AM, Andrei Mikhailovsky 
>>>>>>>>>>>> <and...@arhont.com> wrote:
>>>>>>>>>>>>> Hi Yehuda,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I just tried running the command to set the master_zone to default,
>>>>>>>>>>>>> followed by the bucket create, without doing the restart, and I still
>>>>>>>>>>>>> have the same error on the client:
>>>>>>>>>>>>>
>>>>>>>>>>>>> <?xml version="1.0"
>>>>>>>>>>>>> encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>my-new-bucket-31337</BucketName><RequestId>tx000000000000000000010-005822ebbd-9951ad8-default</RequestId><HostId>9951ad8-default-default</HostId></Error>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> After setting the master zone, try running:
>>>>>>>>>>>>
>>>>>>>>>>>> $ radosgw-admin period update --commit
>>>>>>>>>>>>
>>>>>>>>>>>> Yehuda
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Andrei
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>> From: "Yehuda Sadeh-Weinraub" <yeh...@redhat.com>
>>>>>>>>>>>>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>>>>>>>>>>>>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>>>>>>>>> Sent: Wednesday, 9 November, 2016 01:13:48
>>>>>>>>>>>>>> Subject: Re: [ceph-users] radosgw - http status 400 while 
>>>>>>>>>>>>>> creating a bucket
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 8, 2016 at 5:05 PM, Andrei Mikhailovsky 
>>>>>>>>>>>>>> <and...@arhont.com> wrote:
>>>>>>>>>>>>>>> Hi Yehuda,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I don't have a multizone setup. The radosgw service was configured about
>>>>>>>>>>>>>>> two years ago according to the documentation on ceph.com and hasn't changed
>>>>>>>>>>>>>>> through numerous version updates. Everything was working okay until I
>>>>>>>>>>>>>>> upgraded to version 10.2.x.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Could you please point me in the right direction as to what exactly
>>>>>>>>>>>>>>> needs to be done?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> # radosgw-admin zonegroup get --rgw-zonegroup=default
>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>     "id": "default",
>>>>>>>>>>>>>>>     "name": "default",
>>>>>>>>>>>>>>>     "api_name": "",
>>>>>>>>>>>>>>>     "is_master": "true",
>>>>>>>>>>>>>>>     "endpoints": [],
>>>>>>>>>>>>>>>     "hostnames": [],
>>>>>>>>>>>>>>>     "hostnames_s3website": [],
>>>>>>>>>>>>>>>     "master_zone": "",
>>>>>>>>>>>>>>>     "zones": [
>>>>>>>>>>>>>>>         {
>>>>>>>>>>>>>>>             "id": "default",
>>>>>>>>>>>>>>>             "name": "default",
>>>>>>>>>>>>>>>             "endpoints": [],
>>>>>>>>>>>>>>>             "log_meta": "false",
>>>>>>>>>>>>>>>             "log_data": "false",
>>>>>>>>>>>>>>>             "bucket_index_max_shards": 0,
>>>>>>>>>>>>>>>             "read_only": "false"
>>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>>     ],
>>>>>>>>>>>>>>>     "placement_targets": [
>>>>>>>>>>>>>>>         {
>>>>>>>>>>>>>>>             "name": "default-placement",
>>>>>>>>>>>>>>>             "tags": []
>>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>>     ],
>>>>>>>>>>>>>>>     "default_placement": "default-placement",
>>>>>>>>>>>>>>>     "realm_id": ""
>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Try:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> $ radosgw-admin zonegroup get --rgw-zonegroup=default > 
>>>>>>>>>>>>>> zonegroup.json
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ... modify the master_zone to be "default"
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> $ radosgw-admin zonegroup set --rgw-zonegroup=default < 
>>>>>>>>>>>>>> zonegroup.json
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (restart radosgw)
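>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (For the "modify" step, any editor works; a non-interactive option,
>>>>>>>>>>>>>> assuming the empty master_zone shown above, would be something like:
>>>>>>>>>>>>>>   sed -i 's/"master_zone": ""/"master_zone": "default"/' zonegroup.json )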
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yehuda
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> # radosgw-admin zone get --rgw-zone=default
>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>     "id": "default",
>>>>>>>>>>>>>>>     "name": "default",
>>>>>>>>>>>>>>>     "domain_root": ".rgw",
>>>>>>>>>>>>>>>     "control_pool": ".rgw.control",
>>>>>>>>>>>>>>>     "gc_pool": ".rgw.gc",
>>>>>>>>>>>>>>>     "log_pool": ".log",
>>>>>>>>>>>>>>>     "intent_log_pool": ".intent-log",
>>>>>>>>>>>>>>>     "usage_log_pool": ".usage",
>>>>>>>>>>>>>>>     "user_keys_pool": ".users",
>>>>>>>>>>>>>>>     "user_email_pool": ".users.email",
>>>>>>>>>>>>>>>     "user_swift_pool": ".users.swift",
>>>>>>>>>>>>>>>     "user_uid_pool": ".users.uid",
>>>>>>>>>>>>>>>     "system_key": {
>>>>>>>>>>>>>>>         "access_key": "",
>>>>>>>>>>>>>>>         "secret_key": ""
>>>>>>>>>>>>>>>     },
>>>>>>>>>>>>>>>     "placement_pools": [
>>>>>>>>>>>>>>>         {
>>>>>>>>>>>>>>>             "key": "default-placement",
>>>>>>>>>>>>>>>             "val": {
>>>>>>>>>>>>>>>                 "index_pool": ".rgw.buckets.index",
>>>>>>>>>>>>>>>                 "data_pool": ".rgw.buckets",
>>>>>>>>>>>>>>>                 "data_extra_pool": "default.rgw.buckets.non-ec",
>>>>>>>>>>>>>>>                 "index_type": 0
>>>>>>>>>>>>>>>             }
>>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>>     ],
>>>>>>>>>>>>>>>     "metadata_heap": ".rgw.meta",
>>>>>>>>>>>>>>>     "realm_id": "5b41b1b2-0f92-463d-b582-07552f83e66c"
>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>>>> From: "Yehuda Sadeh-Weinraub" <yeh...@redhat.com>
>>>>>>>>>>>>>>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>>>>>>>>>>>>>>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>>>>>>>>>>> Sent: Wednesday, 9 November, 2016 00:48:50
>>>>>>>>>>>>>>>> Subject: Re: [ceph-users] radosgw - http status 400 while 
>>>>>>>>>>>>>>>> creating a bucket
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 8, 2016 at 3:36 PM, Andrei Mikhailovsky 
>>>>>>>>>>>>>>>> <and...@arhont.com> wrote:
>>>>>>>>>>>>>>>>> Hello
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I am having issues with creating buckets in radosgw. It 
>>>>>>>>>>>>>>>>> started with an
>>>>>>>>>>>>>>>>> upgrade to version 10.2.x
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> When I am creating a bucket I get the following error on the 
>>>>>>>>>>>>>>>>> client side:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> boto.exception.S3ResponseError: S3ResponseError: 400 Bad 
>>>>>>>>>>>>>>>>> Request
>>>>>>>>>>>>>>>>> <?xml version="1.0"
>>>>>>>>>>>>>>>>> encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>my-new-bucket-31337</BucketName><RequestId>tx000000000000000000002-0058225bae-994d148-default</RequestId><HostId>994d148-default-default</HostId></Error>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The radosgw logs are (redacted):
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ###############################
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.876862 7f026d953700 20 enqueued request
>>>>>>>>>>>>>>>>> req=0x7f02ba07b0e0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.876892 7f026d953700 20 RGWWQ:
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.876897 7f026d953700 20 req: 0x7f02ba07b0e0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.876912 7f026d953700 10 allocated request
>>>>>>>>>>>>>>>>> req=0x7f02ba07b140
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.876975 7f026b94f700 20 dequeued request
>>>>>>>>>>>>>>>>> req=0x7f02ba07b0e0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.876987 7f026b94f700 20 RGWWQ: empty
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877050 7f026b94f700 20 CONTENT_LENGTH=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877060 7f026b94f700 20 
>>>>>>>>>>>>>>>>> CONTEXT_DOCUMENT_ROOT=/var/www
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877062 7f026b94f700 20 CONTEXT_PREFIX=
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877063 7f026b94f700 20 
>>>>>>>>>>>>>>>>> DOCUMENT_ROOT=/var/www
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877081 7f026b94f700 20 FCGI_ROLE=RESPONDER
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877083 7f026b94f700 20 
>>>>>>>>>>>>>>>>> GATEWAY_INTERFACE=CGI/1.1
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877084 7f026b94f700 20 
>>>>>>>>>>>>>>>>> HTTP_ACCEPT_ENCODING=identity
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877086 7f026b94f700 20 
>>>>>>>>>>>>>>>>> HTTP_AUTHORIZATION=AWS
>>>>>>>>>>>>>>>>> XXXXXXXEDITEDXXXXXX:EDITEDXXXXXXXeWyiacaN26GcME
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877087 7f026b94f700 20 HTTP_DATE=Tue, 08 
>>>>>>>>>>>>>>>>> Nov 2016
>>>>>>>>>>>>>>>>> 23:11:37 GMT
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877088 7f026b94f700 20
>>>>>>>>>>>>>>>>> HTTP_HOST=s3service.editedname.com
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877089 7f026b94f700 20 
>>>>>>>>>>>>>>>>> HTTP_USER_AGENT=Boto/2.38.0
>>>>>>>>>>>>>>>>> Python/2.7.12 Linux/4.8.4-040804-generic
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877090 7f026b94f700 20 HTTPS=on
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877092 7f026b94f700 20
>>>>>>>>>>>>>>>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877093 7f026b94f700 20 proxy-nokeepalive=1
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877094 7f026b94f700 20 QUERY_STRING=
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877095 7f026b94f700 20 
>>>>>>>>>>>>>>>>> REMOTE_ADDR=192.168.169.91
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877096 7f026b94f700 20 REMOTE_PORT=45404
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877097 7f026b94f700 20 REQUEST_METHOD=PUT
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877098 7f026b94f700 20 
>>>>>>>>>>>>>>>>> REQUEST_SCHEME=https
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877099 7f026b94f700 20 
>>>>>>>>>>>>>>>>> REQUEST_URI=/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877104 7f026b94f700 20
>>>>>>>>>>>>>>>>> SCRIPT_FILENAME=proxy:fcgi://localhost:9000/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877105 7f026b94f700 20 
>>>>>>>>>>>>>>>>> SCRIPT_NAME=/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877107 7f026b94f700 20
>>>>>>>>>>>>>>>>> SCRIPT_URI=https://s3service.editedname.com/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877108 7f026b94f700 20 
>>>>>>>>>>>>>>>>> SCRIPT_URL=/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877109 7f026b94f700 20 
>>>>>>>>>>>>>>>>> SERVER_ADDR=192.168.169.201
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877110 7f026b94f700 20
>>>>>>>>>>>>>>>>> SERVER_ADMIN=and...@editedname.com
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877111 7f026b94f700 20
>>>>>>>>>>>>>>>>> SERVER_NAME=s3service.editedname.com
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877112 7f026b94f700 20 SERVER_PORT=443
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877113 7f026b94f700 20 
>>>>>>>>>>>>>>>>> SERVER_PROTOCOL=HTTP/1.1
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877114 7f026b94f700 20 SERVER_SIGNATURE=
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877115 7f026b94f700 20 
>>>>>>>>>>>>>>>>> SERVER_SOFTWARE=Apache/2.4.18
>>>>>>>>>>>>>>>>> (Ubuntu)
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877116 7f026b94f700 20
>>>>>>>>>>>>>>>>> SSL_TLS_SNI=s3service.editedname.com
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877119 7f026b94f700  1 ====== starting 
>>>>>>>>>>>>>>>>> new request
>>>>>>>>>>>>>>>>> req=0x7f02ba07b0e0 =====
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877155 7f026b94f700  2 req 2:0.000035::PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/::initializing for trans_id =
>>>>>>>>>>>>>>>>> tx000000000000000000002-0058225bae-994d148-default
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877175 7f026b94f700 10 rgw api priority: 
>>>>>>>>>>>>>>>>> s3=5
>>>>>>>>>>>>>>>>> s3website=4
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877179 7f026b94f700 10 
>>>>>>>>>>>>>>>>> host=s3service.editedname.com
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877199 7f026b94f700 20 subdomain=
>>>>>>>>>>>>>>>>> domain=s3service.editedname.com in_hosted_domain=1
>>>>>>>>>>>>>>>>> in_hosted_domain_s3website=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877203 7f026b94f700 20 final 
>>>>>>>>>>>>>>>>> domain/bucket subdomain=
>>>>>>>>>>>>>>>>> domain=s3service.editedname.com in_hosted_domain=1
>>>>>>>>>>>>>>>>> in_hosted_domain_s3website=0 
>>>>>>>>>>>>>>>>> s->info.domain=s3service.editedname.com
>>>>>>>>>>>>>>>>> s->info.request_uri=/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877277 7f026b94f700 20 get_handler
>>>>>>>>>>>>>>>>> handler=25RGWHandler_REST_Bucket_S3
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877286 7f026b94f700 10
>>>>>>>>>>>>>>>>> handler=25RGWHandler_REST_Bucket_S3
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877291 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.000172:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/::getting op 1
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877326 7f026b94f700 10 
>>>>>>>>>>>>>>>>> op=27RGWCreateBucket_ObjStore_S3
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877334 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.000215:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:authorizing
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877386 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state:
>>>>>>>>>>>>>>>>> rctx=0x7f026b94b7c0 obj=.users:XXXXXXXEDITEDXXXXXX 
>>>>>>>>>>>>>>>>> state=0x7f02b70cdfe8
>>>>>>>>>>>>>>>>> s->prefetch_data=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877403 7f026b94f700 10 cache get:
>>>>>>>>>>>>>>>>> name=.users+XXXXXXXEDITEDXXXXXX : miss
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.877516 7f026b94f700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> --> 192.168.168.201:6829/893 -- 
>>>>>>>>>>>>>>>>> osd_op(client.160747848.0:1019 16.e7d6c03f
>>>>>>>>>>>>>>>>> XXXXXXXEDITEDXXXXXX [getxattrs,stat] snapc 0=[] 
>>>>>>>>>>>>>>>>> ack+read+known_if_red
>>>>>>>>>>>>>>>>> irected e97148) v7 -- ?+0 0x7f02b70ab600 con 0x7f02c581a800
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879265 7f02e9405700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> <== osd.3 192.168.168.201:6829/893 52 ==== osd_op_reply(1019
>>>>>>>>>>>>>>>>> XXXXXXXEDITEDXXXXXX [getxattrs,stat] v0'0 uv7 ondisk = 0) v7 
>>>>>>>>>>>>>>>>> ==== 182+0+20
>>>>>>>>>>>>>>>>> (2521936
>>>>>>>>>>>>>>>>> 738 0 3070622072) 0x7f02c9897280 con 0x7f02c581a800
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879391 7f026b94f700 10 cache put:
>>>>>>>>>>>>>>>>> name=.users+XXXXXXXEDITEDXXXXXX info.flags=6
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879421 7f026b94f700 10 adding 
>>>>>>>>>>>>>>>>> .users+XXXXXXXEDITEDXXXXXX
>>>>>>>>>>>>>>>>> to cache LRU end
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879442 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state: s->obj_tag
>>>>>>>>>>>>>>>>> was set empty
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879452 7f026b94f700 10 cache get:
>>>>>>>>>>>>>>>>> name=.users+XXXXXXXEDITEDXXXXXX : type miss (requested=1, 
>>>>>>>>>>>>>>>>> cached=6)
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879461 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state:
>>>>>>>>>>>>>>>>> rctx=0x7f026b94b7c0 obj=.users:XXXXXXXEDITEDXXXXXX 
>>>>>>>>>>>>>>>>> state=0x7f02b70cdfe8
>>>>>>>>>>>>>>>>> s->prefetch_data=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879480 7f026b94f700 20 rados->read ofs=0 
>>>>>>>>>>>>>>>>> len=524288
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.879533 7f026b94f700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> --> 192.168.168.201:6829/893 -- 
>>>>>>>>>>>>>>>>> osd_op(client.160747848.0:1020 16.e7d6c03f
>>>>>>>>>>>>>>>>> XXXXXXXEDITEDXXXXXX [read 0~524288] snapc 0=[] 
>>>>>>>>>>>>>>>>> ack+read+known_if_redi
>>>>>>>>>>>>>>>>> rected e97148) v7 -- ?+0 0x7f02b70ab980 con 0x7f02c581a800
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880343 7f02e9405700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> <== osd.3 192.168.168.201:6829/893 53 ==== osd_op_reply(1020
>>>>>>>>>>>>>>>>> XXXXXXXEDITEDXXXXXX [read 0~37] v0'0 uv7 ondisk = 0) v7 ==== 
>>>>>>>>>>>>>>>>> 140+0+37
>>>>>>>>>>>>>>>>> (3791206703 0
>>>>>>>>>>>>>>>>>  1897156770) 0x7f02c9897280 con 0x7f02c581a800
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880433 7f026b94f700 20 rados->read r=0 
>>>>>>>>>>>>>>>>> bl.length=37
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880464 7f026b94f700 10 cache put:
>>>>>>>>>>>>>>>>> name=.users+XXXXXXXEDITEDXXXXXX info.flags=1
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880476 7f026b94f700 10 moving 
>>>>>>>>>>>>>>>>> .users+XXXXXXXEDITEDXXXXXX
>>>>>>>>>>>>>>>>> to cache LRU end
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880509 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state:
>>>>>>>>>>>>>>>>> rctx=0x7f026b94b3f0 obj=.users.uid:EDITED - client name 
>>>>>>>>>>>>>>>>> state=0x7f02b70cede8
>>>>>>>>>>>>>>>>> s->prefetch_data=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880519 7f026b94f700 10 cache get: 
>>>>>>>>>>>>>>>>> name=.users.uid+EDITED
>>>>>>>>>>>>>>>>> - client name : miss
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.880587 7f026b94f700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> --> 192.168.168.203:6825/25725 -- 
>>>>>>>>>>>>>>>>> osd_op(client.160747848.0:1021 15.d14b4318
>>>>>>>>>>>>>>>>> EDITED - client name [call version.read,getxattrs,stat]
>>>>>>>>>>>>>>>>>  snapc 0=[] ack+read+known_if_redirected e97148) v7 -- ?+0 
>>>>>>>>>>>>>>>>> 0x7f02b70abd00
>>>>>>>>>>>>>>>>> con 0x7f02b9057180
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882156 7f02bbefe700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> <== osd.27 192.168.168.203:6825/25725 33 ==== 
>>>>>>>>>>>>>>>>> osd_op_reply(1021 EDITED -
>>>>>>>>>>>>>>>>> client name [call,getxattrs,stat] v0'0 uv14016 ondisk = 0)
>>>>>>>>>>>>>>>>> v7 ==== 237+0+139 (1797997075 0 3786209825) 0x7f02bc847600 con
>>>>>>>>>>>>>>>>> 0x7f02b9057180
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882322 7f026b94f700 10 cache put: 
>>>>>>>>>>>>>>>>> name=.users.uid+EDITED
>>>>>>>>>>>>>>>>> - client name info.flags=22
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882353 7f026b94f700 10 adding 
>>>>>>>>>>>>>>>>> .users.uid+EDITED - client
>>>>>>>>>>>>>>>>> name to cache LRU end
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882363 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state: s->obj_tag
>>>>>>>>>>>>>>>>> was set empty
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882374 7f026b94f700 10 cache get: 
>>>>>>>>>>>>>>>>> name=.users.uid+EDITED
>>>>>>>>>>>>>>>>> - client name : type miss (requested=17, cached=22)
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882383 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state:
>>>>>>>>>>>>>>>>> rctx=0x7f026b94b3f0 obj=.users.uid:EDITED - client name 
>>>>>>>>>>>>>>>>> state=0x7f02b70cede8
>>>>>>>>>>>>>>>>> s->prefetch_data=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882427 7f026b94f700 20 rados->read ofs=0 
>>>>>>>>>>>>>>>>> len=524288
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.882492 7f026b94f700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> --> 192.168.168.203:6825/25725 -- 
>>>>>>>>>>>>>>>>> osd_op(client.160747848.0:1022 15.d14b4318
>>>>>>>>>>>>>>>>> EDITED - client name [call version.check_conds,call ver
>>>>>>>>>>>>>>>>> sion.read,read 0~524288] snapc 0=[] 
>>>>>>>>>>>>>>>>> ack+read+known_if_redirected e97148) v7
>>>>>>>>>>>>>>>>> -- ?+0 0x7f02b70ac400 con 0x7f02b9057180
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883550 7f02bbefe700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> <== osd.27 192.168.168.203:6825/25725 34 ==== 
>>>>>>>>>>>>>>>>> osd_op_reply(1022 EDITED -
>>>>>>>>>>>>>>>>> client name [call,call,read 0~401] v0'0 uv14016 ondisk = 0)
>>>>>>>>>>>>>>>>>  v7 ==== 237+0+449 (131137144 0 372490823) 0x7f02bc847980 con 
>>>>>>>>>>>>>>>>> 0x7f02b9057180
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883653 7f026b94f700 20 rados->read r=0 
>>>>>>>>>>>>>>>>> bl.length=401
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883695 7f026b94f700 10 cache put: 
>>>>>>>>>>>>>>>>> name=.users.uid+EDITED
>>>>>>>>>>>>>>>>> - client name info.flags=17
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883698 7f026b94f700 10 moving 
>>>>>>>>>>>>>>>>> .users.uid+EDITED - client
>>>>>>>>>>>>>>>>> name to cache LRU end
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883726 7f026b94f700 10 chain_cache_entry:
>>>>>>>>>>>>>>>>> cache_locator=.users.uid+EDITED - client name
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883777 7f026b94f700 10 
>>>>>>>>>>>>>>>>> get_canon_resource():
>>>>>>>>>>>>>>>>> dest=/my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883782 7f026b94f700 10 auth_hdr:
>>>>>>>>>>>>>>>>> PUT
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Tue, 08 Nov 2016 23:11:37 GMT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883849 7f026b94f700 15 calculated
>>>>>>>>>>>>>>>>> digest=EDITEDXXXXXXXeWyiacaN26GcME
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883852 7f026b94f700 15
>>>>>>>>>>>>>>>>> auth_sign=EDITEDXXXXXXXeWyiacaN26GcME
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883853 7f026b94f700 15 compare=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883858 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006739:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:normalizing buckets and 
>>>>>>>>>>>>>>>>> tenants
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883864 7f026b94f700 10 s->object=<NULL>
>>>>>>>>>>>>>>>>> s->bucket=my-new-bucket-31337
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883869 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006750:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:init permissions
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883872 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006753:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:recalculating target
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883875 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006756:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:reading permissions
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883879 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006760:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:init op
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883884 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006765:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:verifying op mask
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883886 7f026b94f700 20 required_mask= 2 
>>>>>>>>>>>>>>>>> user.op_mask=7
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883887 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.006769:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:verifying op permissions
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.883939 7f026b94f700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> --> 192.168.168.203:6840/36134 -- 
>>>>>>>>>>>>>>>>> osd_op(client.160747848.0:1023 15.efcbc969
>>>>>>>>>>>>>>>>> EDITED - client name.buckets [call user.list_buckets] snapc 
>>>>>>>>>>>>>>>>> 0=[]
>>>>>>>>>>>>>>>>> ack+read+known_if_redirected e97148) v7 -- ?+0 0x7f02b70acb00 
>>>>>>>>>>>>>>>>> con
>>>>>>>>>>>>>>>>> 0x7f02cd43a900
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885157 7f02bbfff700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> <== osd.25 192.168.168.203:6840/36134 37 ==== 
>>>>>>>>>>>>>>>>> osd_op_reply(1023 EDITED -
>>>>>>>>>>>>>>>>> client name.buckets [call] v0'0 uv0 ack = -2 ((2) No such 
>>>>>>>>>>>>>>>>> file or
>>>>>>>>>>>>>>>>> directory)) v7 ==== 161+0+0 (2433927993 0 0) 0x7f02b886d280 
>>>>>>>>>>>>>>>>> con
>>>>>>>>>>>>>>>>> 0x7f02cd43a900
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885307 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.008187:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:verifying op params
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885338 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.008219:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:pre-executing
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885342 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.008223:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:executing
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885377 7f026b94f700 20 
>>>>>>>>>>>>>>>>> get_system_obj_state:
>>>>>>>>>>>>>>>>> rctx=0x7f026b94c6d0 obj=.rgw:my-new-bucket-31337 
>>>>>>>>>>>>>>>>> state=0x7f02b70cdfe8
>>>>>>>>>>>>>>>>> s->prefetch_data=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885390 7f026b94f700 10 cache get:
>>>>>>>>>>>>>>>>> name=.rgw+my-new-bucket-31337 : miss
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.885444 7f026b94f700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> --> 192.168.168.202:6821/14483 -- 
>>>>>>>>>>>>>>>>> osd_op(client.160747848.0:1024 13.501be1c3
>>>>>>>>>>>>>>>>> my-new-bucket-31337 [call version.read,getxattrs,stat] snapc 
>>>>>>>>>>>>>>>>> 0=[]
>>>>>>>>>>>>>>>>> ack+read+known_if_redirected e97148) v7 -- ?+0 0x7f02b70ace80 
>>>>>>>>>>>>>>>>> con
>>>>>>>>>>>>>>>>> 0x7f02cd43b980
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.886940 7f02bfefe700  1 -- 
>>>>>>>>>>>>>>>>> 192.168.168.201:0/3703211654
>>>>>>>>>>>>>>>>> <== osd.11 192.168.168.202:6821/14483 11 ==== 
>>>>>>>>>>>>>>>>> osd_op_reply(1024
>>>>>>>>>>>>>>>>> my-new-bucket-31337 [call,getxattrs,stat] v0'0 uv0 ack = -2 
>>>>>>>>>>>>>>>>> ((2) No such
>>>>>>>>>>>>>>>>> file or directory)) v7 ==== 223+0+0 (4048274385 0 0) 
>>>>>>>>>>>>>>>>> 0x7f02bf480280 con
>>>>>>>>>>>>>>>>> 0x7f02cd43b980
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887120 7f026b94f700 10 cache put:
>>>>>>>>>>>>>>>>> name=.rgw+my-new-bucket-31337 info.flags=0
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887135 7f026b94f700 10 adding 
>>>>>>>>>>>>>>>>> .rgw+my-new-bucket-31337
>>>>>>>>>>>>>>>>> to cache LRU end
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887144 7f026b94f700  0 rest connection is 
>>>>>>>>>>>>>>>>> invalid
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Did you configure multi-site? It seems to me that it tries to connect to
>>>>>>>>>>>>>>>> a master zone, so if you have a single-zonegroup, single-zone
>>>>>>>>>>>>>>>> configuration, you need to make sure that the zone (and zonegroup) are
>>>>>>>>>>>>>>>> set to be the master.
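>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> (A quick way to check, for example:
>>>>>>>>>>>>>>>>   radosgw-admin zonegroup get --rgw-zonegroup=default | grep master_zone
>>>>>>>>>>>>>>>> it should not be empty.)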
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Yehuda
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887149 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.010030:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:completing
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887217 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.010098:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:op status=-22
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887221 7f026b94f700  2 req 
>>>>>>>>>>>>>>>>> 2:0.010102:s3:PUT
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/:create_bucket:http status=400
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887244 7f026b94f700  1 ====== req done
>>>>>>>>>>>>>>>>> req=0x7f02ba07b0e0 op status=-22 http_status=400 ======
>>>>>>>>>>>>>>>>> 2016-11-08 23:11:42.887254 7f026b94f700 20 process_request() 
>>>>>>>>>>>>>>>>> returned -22
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ###############################
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Apache proxy logs show the following:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 192.168.169.91 - - [08/Nov/2016:23:11:42 +0000] "PUT 
>>>>>>>>>>>>>>>>> /my-new-bucket-31337/
>>>>>>>>>>>>>>>>> HTTP/1.1" 400 4379 "-" "Boto/2.38.0 Python/2.7.12
>>>>>>>>>>>>>>>>> Linux/4.8.4-040804-generic"
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The existing buckets work perfectly well; I can list and put objects. It's
>>>>>>>>>>>>>>>>> the creation of new buckets that I am having issues with.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Could someone please help me to figure out what the issue is?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Andrei
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>> ceph-users mailing list
>>>>>>>>>>>>>>>>> ceph-users@lists.ceph.com
>>>>>>>>>>>>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> ceph-users mailing list
>>>>>>>>>>> ceph-users@lists.ceph.com
>>>>>>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Yoann Moulin
>>>>>>>>>> EPFL IC-IT
>>>>>>>>> _______________________________________________
>>>>>>>>> ceph-users mailing list
>>>>>>>>> ceph-users@lists.ceph.com
>>>>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
