[ceph-users] Problems with multipart RGW uploads.

2016-12-09 Thread Martin Bureau
Hello,


I am looking for help with a problem on our Jewel (10.2.4-1-g5d3c76c) cluster. Some files that show up in the bucket listing cannot be downloaded: requests return HTTP 404, and the rgw log shows "ERROR: got unexpected error when trying to read object: -2".
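
For reference, a first diagnostic step could be to compare what the bucket index reports with what is actually stored in the data pool. This is only a sketch: the bucket, object and prefix below are placeholders, and the pool name assumes the default zone layout:

# Dump the object's manifest as seen by RGW (placeholders for bucket/object):
radosgw-admin object stat --bucket=<bucket> --object=<object>

# The "prefix" field in the manifest names the multipart upload; check whether
# the tail objects for that prefix still exist in RADOS (listing a large pool
# can be slow):
rados -p default.rgw.buckets.data ls | grep '<multipart-prefix>'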


Regards,

Martin






[ceph-users] Can't download some files from RGW

2016-11-24 Thread Martin Bureau

Hello,

I have some files that were uploaded to a Ceph Jewel (10.2.2) cluster but can't be downloaded afterwards. A HEAD on the file succeeds, but a GET returns 404.
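
For reference, this is roughly how the behaviour shows up from the client side. The commands below are only an illustration (aws-cli is just an example client, the endpoint URL is a placeholder, and the bucket/key are the ones from the stat output further down):

# HEAD succeeds and returns the object metadata:
aws s3api head-object --endpoint-url http://<rgw-host> \
    --bucket sam-storage-mtl-8m-00 --key data/000/053/264/53264382

# GET on the same key fails with a 404:
aws s3api get-object --endpoint-url http://<rgw-host> \
    --bucket sam-storage-mtl-8m-00 --key data/000/053/264/53264382 /tmp/53264382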


Here is the output of radosgw-admin object stat for one of these files:

# radosgw-admin object stat --bucket=sam-storage-mtl-8m-00 --object=data/000/053/264/53264382
2016-11-24 22:11:10.217854 7fe2985f39c0  0 RGWZoneParams::create(): error creating default zone params: (17) File exists

{
    "name": "data\/000\/053\/264\/53264382",
    "size": 8348156246,
    "policy": {
        "acl": {
            "acl_user_map": [
                {
                    "user": "sam",
                    "acl": 15
                }
            ],
            "acl_group_map": [
                {
                    "group": 2,
                    "acl": 1
                }
            ],
            "grant_map": [
                {
                    "id": "",
                    "grant": {
                        "type": {
                            "type": 2
                        },
                        "id": "",
                        "email": "",
                        "permission": {
                            "flags": 1
                        },
                        "name": "",
                        "group": 2
                    }
                },
                {
                    "id": "sam",
                    "grant": {
                        "type": {
                            "type": 0
                        },
                        "id": "sam",
                        "email": "",
                        "permission": {
                            "flags": 15
                        },
                        "name": "SAM User",
                        "group": 0
                    }
                },
                {
                    "id": "sam",
                    "grant": {
                        "type": {
                            "type": 0
                        },
                        "id": "sam",
                        "email": "",
                        "permission": {
                            "flags": 2
                        },
                        "name": "SAM User",
                        "group": 0
                    }
                },
                {
                    "id": "sam",
                    "grant": {
                        "type": {
                            "type": 0
                        },
                        "id": "sam",
                        "email": "",
                        "permission": {
                            "flags": 1
                        },
                        "name": "SAM User",
                        "group": 0
                    }
                },
                {
                    "id": "sam",
                    "grant": {
                        "type": {
                            "type": 0
                        },
                        "id": "sam",
                        "email": "",
                        "permission": {
                            "flags": 8
                        },
                        "name": "SAM User",
                        "group": 0
                    }
                },
                {
                    "id": "sam",
                    "grant": {
                        "type": {
                            "type": 0
                        },
                        "id": "sam",
                        "email": "",
                        "permission": {
                            "flags": 4
                        },
                        "name": "SAM User",
                        "group": 0
                    }
                }
            ]
        },
        "owner": {
            "id": "sam",
            "display_name": "SAM User"
        }
    },
    "etag": "4168efac71e53734d8abf877ffcc8b39-1593\u",
    "tag": "_Tpd8KLPuSWQWaZCUGo1VkvMm-RN0Psx\u",
    "manifest": {
        "objs": [],
        "obj_size": 8348156246,
        "explicit_objs": "false",
        "head_obj": {
            "bucket": {
                "name": "sam-storage-mtl-8m-00",
                "pool": "default.rgw.buckets.data",
                "data_extra_pool": "default.rgw.buckets.non-ec",
                "index_pool": "default.rgw.buckets.index",
                "marker": "9efa3174-0317-4a8d-b1e0-4c37570e7b4c.144598.1",
                "bucket_id": "9efa3174-0317-4a8d-b1e0-4c37570e7b4c.144598.1"
            },
            "key": "",
            "ns": "",
            "object": "data\/000\/053\/264\/53264382",
            "instance": "",
            "orig_obj": "data\/000\/053\/264\/53264382"
        },
        "head_size": 0,
        "max_head_size": 0,
        "prefix": "data\/000\/053\/264\/53264382.2~FehndlSB93HWK7y4YoQHcwEwek6CRqF",
        "tail_bucket": {
            "name": "sam-storage-mtl-8m-00",
            "pool":

[ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-20 Thread Martin Bureau
Hello,


I noticed that the same PG gets scrubbed repeatedly on our new Jewel cluster.


Here's an excerpt from the log:


2016-09-20 20:36:31.236123 osd.12 10.1.82.82:6820/14316 150514 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:32.232918 osd.12 10.1.82.82:6820/14316 150515 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:32.236876 osd.12 10.1.82.82:6820/14316 150516 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:33.233268 osd.12 10.1.82.82:6820/14316 150517 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:33.242258 osd.12 10.1.82.82:6820/14316 150518 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:36.233604 osd.12 10.1.82.82:6820/14316 150519 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:36.237221 osd.12 10.1.82.82:6820/14316 150520 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:41.234490 osd.12 10.1.82.82:6820/14316 150521 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:41.243720 osd.12 10.1.82.82:6820/14316 150522 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:45.235128 osd.12 10.1.82.82:6820/14316 150523 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:45.352589 osd.12 10.1.82.82:6820/14316 150524 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:47.235310 osd.12 10.1.82.82:6820/14316 150525 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:47.239348 osd.12 10.1.82.82:6820/14316 150526 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:49.235538 osd.12 10.1.82.82:6820/14316 150527 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:49.243121 osd.12 10.1.82.82:6820/14316 150528 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:51.235956 osd.12 10.1.82.82:6820/14316 150529 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:51.244201 osd.12 10.1.82.82:6820/14316 150530 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:52.236076 osd.12 10.1.82.82:6820/14316 150531 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:52.239376 osd.12 10.1.82.82:6820/14316 150532 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:56.236740 osd.12 10.1.82.82:6820/14316 150533 : cluster [INF] 25.3f scrub starts


How can I troubleshoot or resolve this?
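
For reference, a couple of checks that could help narrow this down (the pg and OSD ids are taken from the log above; this is just a sketch of where to look, not a known fix):

# Scrub timestamps recorded for the pg:
ceph pg 25.3f query | grep -E 'scrub_stamp|last_scrub'

# Scrub-related settings on the primary OSD (run on the host where osd.12 lives):
ceph daemon osd.12 config show | grep scrub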


Regards,

Martin



[ceph-users] Read performance in VMs

2015-10-05 Thread Martin Bureau
Hello,

Is there a way to improve sequential reads in a VM? My understanding is that a read will be served from a single OSD at a time, so it can't be faster than the drive that OSD is running on. Is that correct? Are there any ways to improve this?
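
For what it's worth, the two knobs that usually come up for this are librbd's client-side readahead and RBD striping. The snippet below is only a sketch under those assumptions (the option names are the standard librbd readahead settings; the values and the image name are made up):

# ceph.conf on the hypervisor, [client] section -- example values only:
#   rbd readahead trigger requests = 10
#   rbd readahead max bytes = 4194304
#   rbd readahead disable after bytes = 0

# Creating an image with striping so that a sequential range spans several
# objects (and therefore several OSDs) at once -- example values only:
rbd create vm-disk-01 --size 102400 --image-format 2 \
    --stripe-unit 65536 --stripe-count 8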

Thanks for any answers,
Martin