Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-27 Thread nick
I compared the pools with ours and, to be honest, I can see no difference. The 
issue sounds like you cannot write into a specific pool (since get and delete 
work). 
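
A quick way to narrow that down would be a plain RADOS write that bypasses RGW 
completely (just a sketch; the pool name is taken from your lspools output):

# put, stat and remove a small test object directly in the bucket data pool
rados -p .rgw.buckets put rados-write-test /etc/hostname
rados -p .rgw.buckets stat rados-write-test
rados -p .rgw.buckets rm rados-write-test

If that already fails, the problem is below RGW.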

Are all the filesystem permissions correct? Maybe another 'chown -R ceph:ceph' 
on all the OSD data dirs would help? Did you also check the user's permissions 
in RGW (the op_mask in 'radosgw-admin user info --uid=""')?
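
Roughly what I mean (the path assumes the default /var/lib/ceph layout and 
'<your-uid>' is just a placeholder for your S3 user, so adjust both):

# on every OSD host, make sure the OSD data dirs belong to the ceph user
chown -R ceph:ceph /var/lib/ceph/osd

# the op_mask of the RGW user should contain "read, write, delete"
radosgw-admin user info --uid=<your-uid> | grep op_mask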

Cheers
Nick

On Wednesday, July 27, 2016 07:55:14 AM Naruszewicz, Maciej wrote:
> Sure Nick, here they are:
> 
> # ceph osd lspools
> 72 .rgw.control,73 .rgw,74 .rgw.gc,75 .log,76 .users.uid,77 .users,78
> .users.swift,79 .rgw.buckets.index,80 .rgw.buckets.extra,81 .rgw.buckets,82
> .rgw.root.backup,83 .rgw.root,84 logs,85 default.rgw.meta,
> 
> Thanks for your help nonetheless!
> 
> -Original Message-
> From: nick [mailto:n...@nine.ch]
> Sent: Wednesday, July 27, 2016 6:31 AM
> To: Naruszewicz, Maciej 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Unknown error (95->500) when creating buckets or
> putting files to RGW after upgrade from Infernalis to Jewel
> 
> Hi Maciej,
> I am slowly running out of ideas :-) Could you send the output of 'ceph osd
> lspools' so that I can compare your pools with ours?
> 
> Maybe someone else has run into similar problems and can help?
> 
> Cheers
> Nick
> 
> On Tuesday, July 26, 2016 03:56:39 PM Naruszewicz, Maciej wrote:
> > Unfortunately none of our pools are erasure-code pools - I just
> > double-checked that.
> > 
> > I found another reported issue, that one about deleting (in my case I just
> > can't create buckets or upload files; get/delete work fine), which looks
> > almost identical:
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003100.html
> > but it was unanswered.
> > 
> > 
> > -Original Message-
> > From: nick [mailto:n...@nine.ch]
> > Sent: Tuesday, July 26, 2016 8:27 AM
> > To: Naruszewicz, Maciej 
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Unknown error (95->500) when creating
> > buckets or putting files to RGW after upgrade from Infernalis to Jewel
> > 
> > Hey Maciej,
> > I compared the output of your commands with the output on our cluster
> > and they are the same. So I do not see any problems there.
> > After that I googled for the warning you get in the debug log: """
> > WARNING: set_req_state_err err_no=95 resorting to 500 """
> > 
> > I found some reports about problems with erasure-coded (EC) pools and RGW.
> > Do you use that?
> > 
> > 
> > Cheers
> > Nick
> > 
> > On Monday, July 25, 2016 04:50:56 PM Naruszewicz, Maciej wrote:
> > > WARNING: set_req_state_err err_no=95 resorting to 500
> 
> --
> Sebastian Nickel
> Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich Tel +41 44
> 637 40 00 | Support +41 44 637 40 40 | www.nine.ch
 
-- 
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch



Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-27 Thread Naruszewicz, Maciej
Sure Nick, here they are:

# ceph osd lspools
72 .rgw.control,73 .rgw,74 .rgw.gc,75 .log,76 .users.uid,77 .users,78 
.users.swift,79 .rgw.buckets.index,80 .rgw.buckets.extra,81 .rgw.buckets,82 
.rgw.root.backup,83 .rgw.root,84 logs,85 default.rgw.meta,

Thanks for your help nonetheless!

-Original Message-
From: nick [mailto:n...@nine.ch] 
Sent: Wednesday, July 27, 2016 6:31 AM
To: Naruszewicz, Maciej 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Unknown error (95->500) when creating buckets or 
putting files to RGW after upgrade from Infernalis to Jewel

Hi Maciej,
I am slowly running out of ideas :-) Could you send the output of 'ceph osd 
lspools' so that I can compare your pools with ours?

Maybe someone else has run into similar problems and can help?

Cheers
Nick

On Tuesday, July 26, 2016 03:56:39 PM Naruszewicz, Maciej wrote:
> Unfortunately none of our pools are erasure-code pools - I just 
> double-checked that.
> 
> I found another reported issue, that one about deleting (in my case I just
> can't create buckets or upload files; get/delete work fine), which looks
> almost identical:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003100.html
> but it was unanswered.
> 
> 
> -Original Message-
> From: nick [mailto:n...@nine.ch]
> Sent: Tuesday, July 26, 2016 8:27 AM
> To: Naruszewicz, Maciej 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Unknown error (95->500) when creating 
> buckets or putting files to RGW after upgrade from Infernalis to Jewel
> 
> Hey Maciej,
> I compared the output of your commands with the output on our cluster 
> and they are the same. So I do not see any problems there. 
> After that I googled for the warning you get in the debug log: """
> WARNING: set_req_state_err err_no=95 resorting to 500 """
> 
> I found some reports about problems with erasure-coded (EC) pools and RGW. 
> Do you use that?
> 
> 
> Cheers
> Nick
> 
> On Monday, July 25, 2016 04:50:56 PM Naruszewicz, Maciej wrote:
> > WARNING: set_req_state_err err_no=95 resorting to 500
 
--
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich Tel +41 44 
637 40 00 | Support +41 44 637 40 40 | www.nine.ch


Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-26 Thread nick
Hi Maciej,
I am slowly running out of ideas :-) Could you send the output of 'ceph osd 
lspools' so that I can compare your pools with ours?

Maybe someone else has run into similar problems and can help?

Cheers
Nick

On Tuesday, July 26, 2016 03:56:39 PM Naruszewicz, Maciej wrote:
> Unfortunately none of our pools are erasure-code pools - I just
> double-checked that.
> 
> I found another reported issue, that one about deleting (in my case I just
> can't create buckets or upload files; get/delete work fine), which looks
> almost identical:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003100.html
> but it was unanswered.
> 
> 
> -Original Message-
> From: nick [mailto:n...@nine.ch]
> Sent: Tuesday, July 26, 2016 8:27 AM
> To: Naruszewicz, Maciej 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Unknown error (95->500) when creating buckets or
> putting files to RGW after upgrade from Infernalis to Jewel
> 
> Hey Maciej,
> I compared the output of your commands with the output on our cluster and
> they are the same. So I do not see any problems there. After that I
> googled for the warning you get in the debug log: """
> WARNING: set_req_state_err err_no=95 resorting to 500 """
> 
> I found some reports about problems with erasure-coded (EC) pools and RGW. Do you
> use that?
> 
> 
> Cheers
> Nick
> 
> On Monday, July 25, 2016 04:50:56 PM Naruszewicz, Maciej wrote:
> > WARNING: set_req_state_err err_no=95 resorting to 500
 
-- 
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch



Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-26 Thread Naruszewicz, Maciej
Unfortunately none of our pools are erasure-code pools - I just double-checked 
that. 

I found another reported issue, that one about deleting (in my case I just 
can't create buckets or upload files; get/delete work fine), which looks almost 
identical: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003100.html 
but it was unanswered.


-Original Message-
From: nick [mailto:n...@nine.ch] 
Sent: Tuesday, July 26, 2016 8:27 AM
To: Naruszewicz, Maciej 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Unknown error (95->500) when creating buckets or 
putting files to RGW after upgrade from Infernalis to Jewel

Hey Maciej,
I compared the output of your commands with the output on our cluster and they 
are the same. So I do not see any problems there. After that I googled 
for the warning you get in the debug log:
"""
WARNING: set_req_state_err err_no=95 resorting to 500 """

I found some reports about problems with erasure-coded (EC) pools and RGW. Do you 
use that?


Cheers
Nick

On Monday, July 25, 2016 04:50:56 PM Naruszewicz, Maciej wrote:
> WARNING: set_req_state_err err_no=95 resorting to 500
 
-- 
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch


Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-26 Thread Ben Hines
FWIW, this thread still has me terrified to upgrade my RGW cluster. Just
when I thought it was safe.

Does anyone have reports of successful, problem-free RGW Infernalis-to-Jewel
upgrades?

On Jul 25, 2016 11:27 PM, "nick"  wrote:

> Hey Maciej,
> I compared the output of your commands with the output on our cluster and
> they
> are the same. So I do not see any problems there. After that I
> googled
> for the warning you get in the debug log:
> """
> WARNING: set_req_state_err err_no=95 resorting to 500
> """
>
> I found some reports about problems with erasure-coded (EC) pools and RGW. Do
> you
> use that?
>
>
> Cheers
> Nick
>
> On Monday, July 25, 2016 04:50:56 PM Naruszewicz, Maciej wrote:
> > WARNING: set_req_state_err err_no=95 resorting to 500
>
> --
> Sebastian Nickel
> Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
> Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch


Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-26 Thread nick
Hey Maciej,
I compared the output of your commands with the output on our cluster and they 
are the same. So I do not see any problems there. After that I googled 
for the warning you get in the debug log:
"""
WARNING: set_req_state_err err_no=95 resorting to 500
"""

I found some reports about problems with erasure-coded (EC) pools and RGW. Do you 
use that?
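
In case it helps, a quick way to double-check is to list the pools together 
with their type; "replicated" vs "erasure" shows up right after the pool name 
(generic commands, nothing specific to your setup):

"""
ceph osd pool ls detail
# the same information is also in the OSD map dump
ceph osd dump | grep '^pool'
"""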


Cheers
Nick

On Monday, July 25, 2016 04:50:56 PM Naruszewicz, Maciej wrote:
> WARNING: set_req_state_err err_no=95 resorting to 500
 
-- 
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch



Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-25 Thread Naruszewicz, Maciej
Nick,

Thanks a lot for your input so far.

I re-ran the fix script from scratch and it turned out I had made some mistakes 
in the process. After running it correctly I am now able to create buckets, but 
I still can't upload anything. I looked for issues in our configuration by going 
through the zonegroups, zones, etc., but I haven't found anything missing there 
or in the logs. I'm attaching a log of a failed file upload to an existing 
bucket and the output of the RGW configuration.
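
(For reference, the logs were captured with verbose logging on the radosgw 
instance; roughly the following in its ceph.conf section, followed by a restart 
of the daemon. 'debug ms' is only needed for the messenger lines in my first 
mail.)

debug rgw = 20
debug ms = 1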

1. Creating bucket

2016-07-22 09:40:17.579446 7f40547f8700 20 RGWEnv::set(): HTTP_HOST: 
10.1.68.29:8080
2016-07-22 09:40:17.579461 7f40547f8700 20 RGWEnv::set(): HTTP_ACCEPT_ENCODING: 
identity
2016-07-22 09:40:17.579462 7f40547f8700 20 RGWEnv::set(): CONTENT_LENGTH: 0
2016-07-22 09:40:17.579463 7f40547f8700 20 RGWEnv::set(): 
HTTP_X_AMZ_CONTENT_SHA256: 
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2016-07-22 09:40:17.579466 7f40547f8700 20 RGWEnv::set(): 
HTTP_X_AMZ_STORAGE_CLASS: STANDARD
2016-07-22 09:40:17.579479 7f40547f8700 20 RGWEnv::set(): 
HTTP_X_AMZ_META_S3CMD_ATTRS: 
uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1469007939/atime:1469007939/md5:d8160ddb9f4681ec985e03429f842b88/ctime:1469023832
2016-07-22 09:40:17.579481 7f40547f8700 20 RGWEnv::set(): HTTP_X_AMZ_DATE: 
20160722T094017Z
2016-07-22 09:40:17.579482 7f40547f8700 20 RGWEnv::set(): CONTENT_TYPE: 
application/octet-stream
2016-07-22 09:40:17.579483 7f40547f8700 20 RGWEnv::set(): HTTP_AUTHORIZATION: 
AWS4-HMAC-SHA256 
Credential=7VM2JP5QFARP8UMUW2KH/20160722/US/s3/aws4_request,SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=dac5cb849ed2057d925a43f702b9e1f135618fd04d95beb943954df1d7c0df1c
2016-07-22 09:40:17.579485 7f40547f8700 20 RGWEnv::set(): REQUEST_METHOD: POST
2016-07-22 09:40:17.579486 7f40547f8700 20 RGWEnv::set(): REQUEST_URI: 
/test-bucket-0/s3-test-file-1
2016-07-22 09:40:17.579486 7f40547f8700 20 RGWEnv::set(): QUERY_STRING: uploads
2016-07-22 09:40:17.579488 7f40547f8700 20 RGWEnv::set(): REMOTE_USER: 
2016-07-22 09:40:17.579489 7f40547f8700 20 RGWEnv::set(): SCRIPT_URI: 
/test-bucket-0/s3-test-file-1
2016-07-22 09:40:17.579492 7f40547f8700 20 RGWEnv::set(): SERVER_PORT: 8080
2016-07-22 09:40:17.579493 7f40547f8700 20 CONTENT_LENGTH=0
2016-07-22 09:40:17.579494 7f40547f8700 20 CONTENT_TYPE=application/octet-stream
2016-07-22 09:40:17.579494 7f40547f8700 20 HTTP_ACCEPT_ENCODING=identity
2016-07-22 09:40:17.579498 7f40547f8700 20 HTTP_AUTHORIZATION=AWS4-HMAC-SHA256 
Credential=7VM2JP5QFARP8UMUW2KH/20160722/US/s3/aws4_request,SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=dac5cb849ed2057d925a43f702b9e1f135618fd04d95beb943954df1d7c0df1c
2016-07-22 09:40:17.579499 7f40547f8700 20 HTTP_HOST=10.1.68.29:8080
2016-07-22 09:40:17.579499 7f40547f8700 20 
HTTP_X_AMZ_CONTENT_SHA256=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2016-07-22 09:40:17.579500 7f40547f8700 20 HTTP_X_AMZ_DATE=20160722T094017Z
2016-07-22 09:40:17.579500 7f40547f8700 20 
HTTP_X_AMZ_META_S3CMD_ATTRS=uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1469007939/atime:1469007939/md5:d8160ddb9f4681ec985e03429f842b88/ctime:1469023832
2016-07-22 09:40:17.579501 7f40547f8700 20 HTTP_X_AMZ_STORAGE_CLASS=STANDARD
2016-07-22 09:40:17.579501 7f40547f8700 20 QUERY_STRING=uploads
2016-07-22 09:40:17.579502 7f40547f8700 20 REMOTE_USER=
2016-07-22 09:40:17.579502 7f40547f8700 20 REQUEST_METHOD=POST
2016-07-22 09:40:17.579502 7f40547f8700 20 
REQUEST_URI=/test-bucket-0/s3-test-file-1
2016-07-22 09:40:17.579503 7f40547f8700 20 
SCRIPT_URI=/test-bucket-0/s3-test-file-1
2016-07-22 09:40:17.579503 7f40547f8700 20 SERVER_PORT=8080
2016-07-22 09:40:17.579505 7f40547f8700  1 == starting new request 
req=0x7f40547f2710 =
2016-07-22 09:40:17.579527 7f40547f8700  2 req 5:0.22::POST 
/test-bucket-0/s3-test-file-1::initializing for trans_id = 
tx5-005791ea01-8c23-default
2016-07-22 09:40:17.579530 7f40547f8700 10 host=10.1.68.29
2016-07-22 09:40:17.579533 7f40547f8700 20 subdomain= domain= 
in_hosted_domain=0 in_hosted_domain_s3website=0
2016-07-22 09:40:17.579542 7f40547f8700 10 meta>> HTTP_X_AMZ_CONTENT_SHA256
2016-07-22 09:40:17.579547 7f40547f8700 10 meta>> HTTP_X_AMZ_DATE
2016-07-22 09:40:17.579549 7f40547f8700 10 meta>> HTTP_X_AMZ_META_S3CMD_ATTRS
2016-07-22 09:40:17.579550 7f40547f8700 10 meta>> HTTP_X_AMZ_STORAGE_CLASS
2016-07-22 09:40:17.579552 7f40547f8700 10 x>> 
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2016-07-22 09:40:17.579553 7f40547f8700 10 x>> x-amz-date:20160722T094017Z
2016-07-22 09:40:17.579553 7f40547f8700 10 x>> 
x-amz-meta-s3cmd-attrs:uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1469007939/atime:1469007939/md5:d8160ddb9f4681ec985e03429f842b88/ctime:1469023832
2016-07-22 09:40:17.579554 7f40547f8700 10 x>> x-amz-storage-class:STANDARD

Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-21 Thread nick
Hi Maciej,
I am not really sure how to fix this error, but running the same command on 
our cluster gives the following output:

"""
$~ # radosgw-admin zonegroup get
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "default",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "43e149da-7dd9-4b0f-a6b6-3ee039e48d92"
}
"""

The big difference is that a master_zone is actually configured in our 
cluster. Maybe you can set your master_zone to 'default' as well?
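
If you want to give that a try, the rough procedure would be something like the 
following (untested against your setup, so only a sketch; the final period 
commit may not even be necessary on a single-site install):

"""
radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
# edit zonegroup.json and set "master_zone": "default"
radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
radosgw-admin period update --commit
# afterwards restart the radosgw daemons
"""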

Cheers
Nick

On Thursday, July 21, 2016 01:12:05 PM Naruszewicz, Maciej wrote:
> radosgw-admin zonegroup get  --zonegroup-id
 
-- 
Sebastian Nickel
Nine Internet Solutions AG, Albisriederstr. 243a, CH-8047 Zuerich
Tel +41 44 637 40 00 | Support +41 44 637 40 40 | www.nine.ch



Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-21 Thread Naruszewicz, Maciej
Hi Nick,

Thanks for your suggestion; I tried the script on an isolated testing cluster. 
Unfortunately it did not help us much: it only made creating buckets possible. 

The logs I provided earlier actually make some sense: they were collected with 
RGW on Jewel while Ceph itself was still on Infernalis, so it is not surprising 
that some of the operations requested by RGW were not supported. However, with 
both Ceph and RGW upgraded to Jewel I still get the following errors when 
creating a bucket and trying to upload a file:

1) Trying to create a bucket:
2016-07-21 12:10:39.389397 7f67d57fa700  0 sending create_bucket 
request to master zonegroup
2016-07-21 12:10:39.389399 7f67d57fa700  0 ERROR: endpoints not 
configured for upstream zone
2016-07-21 12:10:39.389403 7f67d57fa700  2 req 2:0.003300:s3:PUT 
/test-bucket-2/:create_bucket:completing
2016-07-21 12:10:39.389406 7f67d57fa700  0 WARNING: set_req_state_err 
err_no=5 resorting to 500
2016-07-21 12:10:39.389486 7f67d57fa700  2 req 2:0.003383:s3:PUT 
/test-bucket-2/:create_bucket:op status=-5
2016-07-21 12:10:39.389491 7f67d57fa700  2 req 2:0.003388:s3:PUT 
/test-bucket-2/:create_bucket:http status=500

I looked at the zonegroup (the simplest setup, with one zone and one zonegroup, 
which was probably created during the upgrade) and indeed it does not contain 
any endpoints:

# radosgw-admin zonegroup get  --zonegroup-id
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": ""
}

In one cluster we have one RGW instance; in the second we have three. I wonder 
whether setting up the zonegroup is needed at all? I'll try to modify the 
zonegroup settings and see whether that helps.
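
For the record, the modification I have in mind is simply filling in the empty 
fields, along these lines (the endpoint value is just an example using one of 
our RGW instances; whether realm_id also needs a value I do not know yet):

    "master_zone": "default",
    "endpoints": ["http://10.1.68.29:8080"],

with the same "endpoints" list repeated inside the "default" zone entry, and 
the edited file fed back in with 'radosgw-admin zonegroup set'.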

2) Trying to upload a file:

2016-07-21 12:40:55.851011 7f67737fe700  2 req 5:0.003166:s3:POST 
/test-bucket-0/s3-test-file-1:init_multipart:verifying op params
2016-07-21 12:40:55.851012 7f67737fe700  2 req 5:0.003167:s3:POST 
/test-bucket-0/s3-test-file-1:init_multipart:pre-executing
2016-07-21 12:40:55.851014 7f67737fe700  2 req 5:0.003168:s3:POST 
/test-bucket-0/s3-test-file-1:init_multipart:executing
2016-07-21 12:40:55.851031 7f67737fe700 10 x>> 
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2016-07-21 12:40:55.851037 7f67737fe700 10 x>> 
x-amz-date:20160721T124055Z
2016-07-21 12:40:55.851041 7f67737fe700 10 x>> 
x-amz-meta-s3cmd-attrs:uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1469007939/atime:1469007939/md5:d8160ddb9f4681ec985e03429f842b88/ctime:1469023832
2016-07-21 12:40:55.851048 7f67737fe700 10 x>> 
x-amz-storage-class:STANDARD
2016-07-21 12:40:55.851122 7f67737fe700 20 get_obj_state: 
rctx=0x7f67737f7e50 
obj=test-bucket-0:_multipart_s3-test-file-1.2~orci2-8OGWvX6FkSCsreSitUc-DEQ7Z.meta
 state=0x7f6888023358 s->prefetch_data=0
2016-07-21 12:40:55.852738 7f67737fe700 20 get_obj_state: 
rctx=0x7f67737f7e50 
obj=test-bucket-0:_multipart_s3-test-file-1.2~orci2-8OGWvX6FkSCsreSitUc-DEQ7Z.meta
 state=0x7f6888023358 s->prefetch_data=0
2016-07-21 12:40:55.852746 7f67737fe700 20 prepare_atomic_modification: 
state is not atomic. state=0x7f6888023358
2016-07-21 12:40:55.852841 7f67737fe700 20 reading from 
.rgw:.bucket.meta.test-bucket-0:default.25873.1
2016-07-21 12:40:55.852860 7f67737fe700 20 get_system_obj_state: 
rctx=0x7f67737f6cc0 obj=.rgw:.bucket.meta.test-bucket-0:default.25873.1 
state=0x7f6888034e48 s->prefetch_data=0
2016-07-21 12:40:55.852863 7f67737fe700 10 cache get: 
name=.rgw+.bucket.meta.test-bucket-0:default.25873.1 : hit (requested=22, 
cached=23)
2016-07-21 12:40:55.852884 7f67737fe700 20 get_system_obj_state: 
s->obj_tag was set empty
2016-07-21 12:40:55.852886 7f67737fe700 10 cache get: 
name=.rgw+.bucket.meta.test-bucket-0:default.25873.1 : hit (requested=17, 
cached=23)
2016-07-21 12:40:55.852908 7f67737fe700 20  bucket index object: 
.dir.default.25873.1
2016-07-21 12:40:55.857254 7f67737fe700  2 req 5:0.009408:s3:POST 
/test-bucket-0/s3-test-file-1:init_multipart:completing
2016-07-21 12:40:55.857262 7f67737fe700  0 WARNING: set_req_state_err 
err_no=95 resorting to 500
2016-07-21 12:40:55.857413 7f67737fe700  2 req 5:0.009567:s3:POST 
/test-bucket-0/s3-test-file-1:init_multipart:op status=-95

I cannot see any error 

Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-19 Thread nick
Hi Maciej,
we also had problems when upgrading our Infernalis RGW cluster to Jewel. In 
the end I managed to upgrade with the help of a script (from Yehuda). Search 
for the thread "[ceph-users] radosgw hammer -> jewel upgrade (default zone & 
region config)" on the mailing list; there you can find more information about 
this, although I do not know whether the issue you are experiencing is the same 
as the one we had.

Cheers
Nick

On Monday, July 18, 2016 02:13:15 PM Naruszewicz, Maciej wrote:
> Hi,
> 
> We recently upgraded our Ceph Cluster to Jewel including RGW. Everything
> seems to be in order except for RGW which doesn't let us create buckets or
> add new files.
> 
> # s3cmd --version
> s3cmd version 1.6.1
> 
> # s3cmd mb s3://test
> WARNING: Retrying failed request: /
> WARNING: 500 (UnknownError)
> WARNING: Waiting 3 sec...
> 
> # s3cmd put test s3://nginx-proxy/test
> upload: 'test' -> 's3://nginx-proxy/test'  [1 of 1]
> 7 of 7   100% in0s   224.55 B/s  done
> WARNING: Upload failed: /test (500 (UnknownError))
> WARNING: Waiting 3 sec...
> 
> I am able to read and even remove files, I just can't add anything new.
> 
> I enabled RGW logs to check what went wrong and got the following trying to
> upload a file:
> 
> 2016-07-18 12:09:22.301512 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 -->
> 10.251.97.1:6800/4104 -- osd_op(client.199724.0:927 11.1f0a02a1
> default.194977.1_test [getxattrs,stat] snapc 0=[]
> ack+read+known_if_redirected e479) v7 -- ?+0 0x7fdd64020220 con
> 0x7fde100487c0 2016-07-18 12:09:22.303323 7fddef3f3700  1 --
> 10.251.97.13:0/563287553 <== osd.27 10.251.97.1:6800/4104 10 
> osd_op_reply(927 default.194977.1_test [getxattrs,stat] v0'0 uv0 ack = -2
> ((2) No such file or directory)) v6  230+0+0 (25 91304629 0 0)
> 0x7fda7d00 con 0x7fde100487c0
> 2016-07-18 12:09:22.303629 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 -->
> 10.251.97.1:6818/6493 -- osd_op(client.199724.0:928 10.cecde97a
> .dir.default.194977.1 [call rgw.bucket_prepare_op] snapc 0=[]
> ondisk+write+known_if_redirected e479 ) v7 -- ?+0 0x7fdd6402af60 con
> 0x7fde10032110
> 2016-07-18 12:09:22.308437 7fddee9e9700  1 -- 10.251.97.13:0/563287553 <==
> osd.6 10.251.97.1:6818/6493 13  osd_op_reply(928 .dir.default.194977.1
> [call] v479'126 uv126 ondisk = 0) v6  188+0+0 (1238951509 0 0)
> 0x7fda6c000cc0 con 0x 7fde10032110
> 2016-07-18 12:09:22.308528 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 -->
> 10.251.97.1:6800/4104 -- osd_op(client.199724.0:929 11.1f0a02a1
> default.194977.1_test [create 0~0 [excl],setxattr user.rgw.idtag
> (17),writefull 0~7,setxattr user.r gw.manifest (413),setxattr user.rgw.acl
> (127),setxattr user.rgw.content_type (11),setxattr user.rgw.etag
> (33),setxattr user.rgw.x-amz-content-sha256 (65),setxattr
> user.rgw.x-amz-date (17),setxattr user.rgw.x-amz-meta-s3cmd-attrs (133),set
> xattr user.rgw.x-amz-storage-class (9),call rgw.obj_store_pg_ver,setxattr
> user.rgw.source_zone (4)] snapc 0=[] ondisk+write+known_if_redirected e479)
> v7 -- ?+0 0x7fdd64024ae0 con 0x7fde100487c0 2016-07-18 12:09:22.309371
> 7fddef3f3700  1 -- 10.251.97.13:0/563287553 <== osd.27
> 10.251.97.1:6800/4104 11  osd_op_reply(929 default.194977.1_test
> [create 0~0 [excl],setxattr (17),writefull 0~7,setxattr (413),setxattr
> (127),setxattr ( 11),setxattr (33),setxattr (65),setxattr (17),setxattr
> (133),setxattr (9),call,setxattr (4)] v0'0 uv0 ondisk = -95 ((95) Operation
> not supported)) v6  692+0+0 (982388421 0 0) 0x7fda7d00 con
> 0x7fde100487c0 2016-07-18 12:09:22.309471 7fdcc57fa700  1 --
> 10.251.97.13:0/563287553 --> 10.251.97.1:6818/6493 --
> osd_op(client.199724.0:930 10.cecde97a .dir.default.194977.1 [call
> rgw.bucket_complete_op] snapc 0=[] ack+ondisk+write+known_if_redirected
> e479) v7 -- ?+0 0x7fdd64024ae0 con 0x7fde10032110
> 2016-07-18 12:09:22.309504 7fdcc57fa700  2 req 3:0.047834:s3:PUT
> /nginx-proxy/test:put_obj:completing 2016-07-18 12:09:22.309509
> 7fdcc57fa700  0 WARNING: set_req_state_err err_no=95 resorting to 500
> 2016-07-18 12:09:22.309580 7fdcc57fa700  2 req 3:0.047910:s3:PUT
> /nginx-proxy/test:put_obj:op status=-95 2016-07-18 12:09:22.309585
> 7fdcc57fa700  2 req 3:0.047915:s3:PUT /nginx-proxy/test:put_obj:http
> status=500
> 
> I tried to look for any information around this error but I only found one
> similar unanswered thread.
> 
> The issue disappears if I use RGW Infernalis instead, the create does not
> fail and everything goes smoothly. It is also not dependent on the daemons
> version, the situation is the same in our second Infernalis-based cluster
> where only RGW was updated for tests.
> 
> Could anyone recommend what is wrong here?
> 
> Thanks,
> MN


[ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-18 Thread Naruszewicz, Maciej
Hi,

We recently upgraded our Ceph cluster to Jewel, including RGW. Everything seems 
to be in order except for RGW, which doesn't let us create buckets or add new 
files.

# s3cmd --version
s3cmd version 1.6.1

# s3cmd mb s3://test
WARNING: Retrying failed request: /
WARNING: 500 (UnknownError)
WARNING: Waiting 3 sec...

# s3cmd put test s3://nginx-proxy/test
upload: 'test' -> 's3://nginx-proxy/test'  [1 of 1]
7 of 7   100% in0s   224.55 B/s  done
WARNING: Upload failed: /test (500 (UnknownError))
WARNING: Waiting 3 sec...

I am able to read and even remove files; I just can't add anything new.

I enabled RGW logging to check what went wrong and got the following when 
trying to upload a file:

2016-07-18 12:09:22.301512 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 
10.251.97.1:6800/4104 -- osd_op(client.199724.0:927 11.1f0a02a1 
default.194977.1_test [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected 
e479) v7 -- ?+0 0x7fdd64020220 con 0x7fde100487c0
2016-07-18 12:09:22.303323 7fddef3f3700  1 -- 10.251.97.13:0/563287553 <== 
osd.27 10.251.97.1:6800/4104 10  osd_op_reply(927 default.194977.1_test 
[getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6  
230+0+0 (25
91304629 0 0) 0x7fda7d00 con 0x7fde100487c0
2016-07-18 12:09:22.303629 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 
10.251.97.1:6818/6493 -- osd_op(client.199724.0:928 10.cecde97a 
.dir.default.194977.1 [call rgw.bucket_prepare_op] snapc 0=[] 
ondisk+write+known_if_redirected e479
) v7 -- ?+0 0x7fdd6402af60 con 0x7fde10032110
2016-07-18 12:09:22.308437 7fddee9e9700  1 -- 10.251.97.13:0/563287553 <== 
osd.6 10.251.97.1:6818/6493 13  osd_op_reply(928 .dir.default.194977.1 
[call] v479'126 uv126 ondisk = 0) v6  188+0+0 (1238951509 0 0) 
0x7fda6c000cc0 con 0x
7fde10032110
2016-07-18 12:09:22.308528 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 
10.251.97.1:6800/4104 -- osd_op(client.199724.0:929 11.1f0a02a1 
default.194977.1_test [create 0~0 [excl],setxattr user.rgw.idtag (17),writefull 
0~7,setxattr user.r
gw.manifest (413),setxattr user.rgw.acl (127),setxattr user.rgw.content_type 
(11),setxattr user.rgw.etag (33),setxattr user.rgw.x-amz-content-sha256 
(65),setxattr user.rgw.x-amz-date (17),setxattr user.rgw.x-amz-meta-s3cmd-attrs 
(133),set
xattr user.rgw.x-amz-storage-class (9),call rgw.obj_store_pg_ver,setxattr 
user.rgw.source_zone (4)] snapc 0=[] ondisk+write+known_if_redirected e479) v7 
-- ?+0 0x7fdd64024ae0 con 0x7fde100487c0
2016-07-18 12:09:22.309371 7fddef3f3700  1 -- 10.251.97.13:0/563287553 <== 
osd.27 10.251.97.1:6800/4104 11  osd_op_reply(929 default.194977.1_test 
[create 0~0 [excl],setxattr (17),writefull 0~7,setxattr (413),setxattr 
(127),setxattr (
11),setxattr (33),setxattr (65),setxattr (17),setxattr (133),setxattr 
(9),call,setxattr (4)] v0'0 uv0 ondisk = -95 ((95) Operation not supported)) v6 
 692+0+0 (982388421 0 0) 0x7fda7d00 con 0x7fde100487c0
2016-07-18 12:09:22.309471 7fdcc57fa700  1 -- 10.251.97.13:0/563287553 --> 
10.251.97.1:6818/6493 -- osd_op(client.199724.0:930 10.cecde97a 
.dir.default.194977.1 [call rgw.bucket_complete_op] snapc 0=[] 
ack+ondisk+write+known_if_redirected
e479) v7 -- ?+0 0x7fdd64024ae0 con 0x7fde10032110
2016-07-18 12:09:22.309504 7fdcc57fa700  2 req 3:0.047834:s3:PUT 
/nginx-proxy/test:put_obj:completing
2016-07-18 12:09:22.309509 7fdcc57fa700  0 WARNING: set_req_state_err err_no=95 
resorting to 500
2016-07-18 12:09:22.309580 7fdcc57fa700  2 req 3:0.047910:s3:PUT 
/nginx-proxy/test:put_obj:op status=-95
2016-07-18 12:09:22.309585 7fdcc57fa700  2 req 3:0.047915:s3:PUT 
/nginx-proxy/test:put_obj:http status=500

I tried to look for any information around this error but I only found one 
similar unanswered thread.

The issue disappears if I use the Infernalis RGW instead: the create does not 
fail and everything goes smoothly. It also does not seem to depend on the 
version of the other daemons; the situation is the same in our second, 
Infernalis-based cluster, where only RGW was updated for tests.
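
(For anyone trying to reproduce this: the versions the daemons are actually 
running can be double-checked with the usual commands, nothing specific to our 
setup.)

ceph tell osd.* version
ceph --version
radosgw --version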

Could anyone suggest what might be wrong here?

Thanks,
MN