[ceph-users] [radosgw] Admin REST API wrong results

2014-09-30 Thread Patrycja Szabłowska
Hi,

I'm using the radosgw REST API (via Python's boto library and also using
some radosgw-agent methods) to fetch some data from Ceph (version 0.85).

When I try to get the admin log for specific dates, radosgw seems to give
me incorrect results.

For example, when I try to get entries since 2014-09-30 08:00:00+00:00, it
seems to give me ALL entries for that day instead of only entries since 8 AM.
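
For reference, this is roughly how such a request gets issued (a minimal
boto 2.x sketch, not the exact script used here; the host name and the
placeholder keys are assumptions, and the query string mirrors the one
visible in the Apache log below):

import urllib

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',      # placeholder system-user key
    aws_secret_access_key='SECRET_KEY',  # placeholder secret
    host='ceph-rgw',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())

# /admin/log lives outside the bucket namespace, so the raw request helper
# is used; the resulting path is /admin/log?type=data&id=82&start-time=...
start = urllib.quote('2014-09-30 08:00:00+00:00')
resp = conn.make_request('GET', bucket='admin', key='log',
                         query_args='type=data&id=82&start-time=%s' % start)
print resp.status
print resp.read()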

Here's a sample:

{u'marker': u'1_1412065259.449518_102.1',
 u'entries': [
    {u'timestamp': u'2014-09-30 07:35:08.774367Z', u'key': u'testbucket:default.4146.33', u'entity_type': u'bucket'},
    {u'timestamp': u'2014-09-30 07:36:23.092266Z', u'key': u'testbucket:default.4146.33', u'entity_type': u'bucket'},
    {u'timestamp': u'2014-09-30 07:36:37.068249Z', u'key': u'testbucket:default.4146.34', u'entity_type': u'bucket'},
    {u'timestamp': u'2014-09-30 07:37:38.431647Z', u'key': u'testbucket:default.4146.34', u'entity_type': u'bucket'},
    {u'timestamp': u'2014-09-30 07:37:45.589333Z', u'key': u'testbucket:default.4146.35', u'entity_type': u'bucket'},
    [...]
    {u'timestamp': u'2014-09-30 08:20:59.449518Z', u'key': u'testbucket:default.4146.52', u'entity_type': u'bucket'}],
 u'truncated': False}
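
(As a side note, the numeric part of the marker above and the epoch value
rejected in the third log line below both decode to timestamps on that day;
a quick check with plain epoch arithmetic, nothing radosgw-specific assumed:)

from datetime import datetime

# integer part of the marker u'1_1412065259.449518_102.1'
print datetime.utcfromtimestamp(1412065259)   # 2014-09-30 08:20:59
# the epoch start-time that got a 400 in the third log line below
print datetime.utcfromtimestamp(1412064000)   # 2014-09-30 08:00:00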


The Apache access logs:

ceph-rgw:80 192.168.43.1 - - [30/Sep/2014:08:21:09 +0000] "GET
/admin/log?start-time=2014-09-30+08%3A00%3A00%2B00%3A00&type=data&id=82
HTTP/1.1" 200 4480 "-" "Boto/2.31.1 Python/2.7.6 Linux/3.13.0-36-generic"
ceph-rgw:80 192.168.43.1 - - [30/Sep/2014:08:22:13 +0000] "GET
/admin/log?start-time=2014-09-30+08%3A00%3A00%2B00%3A00&type=data&id=82
HTTP/1.1" 200 4480 "-" "Boto/2.31.1 Python/2.7.6 Linux/3.13.0-36-generic"
ceph-rgw:80 192.168.43.1 - - [30/Sep/2014:08:23:29 +0000] "GET
/admin/log?start-time=1412064000.0&type=data&id=82 HTTP/1.1" 400 216 "-"
"Boto/2.31.1 Python/2.7.6 Linux/3.13.0-36-generic"



I've tried different dates (10 AM, 7 AM), etc., but nothing seems to change.
The full result of admin/log is on pastebin in case it is useful to someone:
http://pastebin.com/KSTXdGw5


Thanks,


Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitor load, low performance

2014-08-27 Thread Patrycja Szabłowska
Irrelevant, but I need to say this: Cephers aren't only men, you know... :-)


Cheers,

Patrycja

2014-08-26 12:58 GMT+02:00  pawel.orzechow...@budikom.net:
 Hello Gentlemen :-)

 Let me point out one important aspect of this low performance problem: of
 all 4 nodes of our ceph cluster, only one node shows bad metrics, that is,
 very high latency on its OSDs (200-600 ms), while the other three nodes
 behave normally, that is, the latency of their OSDs is between 1-10 ms.

 So, the idea of putting journals on SSDs is something that we are looking at,
 but we think that we have in general some problem with that particular node,
 which affects the whole cluster.

 So can the number (4) of hosts be a reason for that? Any other hints?

 Thanks

 Pawel


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [radosgw-admin] bilog list confusion

2014-08-18 Thread Patrycja Szabłowska
Hi,


Is there any configuration option in ceph.conf for enabling/disabling
the bilog list?
I mean the result of this command:
radosgw-admin bilog list

One Ceph cluster gives me results - a list of operations made on the
bucket - and the other one gives me just an empty list. I can't see
the reason for the difference.
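
(For reference, the comparison between the two clusters is easy to script; a
rough sketch, assuming radosgw-admin is on PATH, emits JSON, and the bucket
name is just an example:)

import json
import subprocess

def bilog_entries(bucket):
    # shell out to radosgw-admin and parse its JSON output
    out = subprocess.check_output(
        ['radosgw-admin', 'bilog', 'list', '--bucket=%s' % bucket])
    return json.loads(out)

print len(bilog_entries('testbucket'))   # 0 on the cluster with the empty list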


I can't find it anywhere in the ceph.conf reference:
http://ceph.com/docs/master/rados/configuration/ceph-conf/

My guess is it's in the region info, but when I changed these values to
false on the cluster with the working bilog, the bilog entries still showed up.

1. cluster with empty bilog list:
  "zones": [
        { "name": "default",
          "endpoints": [],
          "log_meta": "false",
          "log_data": "false"}],
2. cluster with *proper* bilog list:
  "zones": [
        { "name": "master-1",
          "endpoints": [
                "http:\/\/[...]"],
          "log_meta": "true",
          "log_data": "true"}],
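
(The flags above come from the region dump; a quick sketch for pulling them
on both clusters, again assuming radosgw-admin is on PATH, prints JSON, and
the default region is the one of interest:)

import json
import subprocess

region = json.loads(subprocess.check_output(['radosgw-admin', 'region', 'get']))
for zone in region.get('zones', []):
    print zone['name'], 'log_meta:', zone['log_meta'], 'log_data:', zone['log_data']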


Here are pools on both of the clusters:

1. cluster with *proper* bilog list:
rbd
.rgw.root
.rgw.control
.rgw
.rgw.gc
.users.uid
.users.email
.users
.rgw.buckets
.rgw.buckets.index
.log
''

2. cluster with empty bilog list:
data
metadata
rbd
.rgw.root
.rgw.control
.rgw
.rgw.gc
.users.uid
.users.email
.users
''
.rgw.buckets.index
.rgw.buckets
.log


And here is the zone info (just the placement_pools; the rest of the
config is the same):
1. cluster with *proper* bilog list:
  "placement_pools": []

2. cluster with *empty* bilog list:
  "placement_pools": [
        { "key": "default-placement",
          "val": { "index_pool": ".rgw.buckets.index",
              "data_pool": ".rgw.buckets",
              "data_extra_pool": ""}}]}


Any thoughts? I've tried to figure it out by myself, but no luck.



Thanks,
Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [radosgw] Creating buckets with different owner

2014-07-16 Thread Patrycja Szabłowska
Hi,

Is it possible to set the owner of a bucket or an object to someone else?
I've got a user who was created with the --system flag and is able to
create buckets and objects.
I've created a bucket using boto and I have FULL_CONTROL over it:
Policy: http://acs.amazonaws.com/groups/global/AllUsers = READ, M.
Tester (owner) = FULL_CONTROL

but trying to set the owner to someone else gives me this:

boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>


So I wonder - is it even possible to change the owner of a bucket, or to
create a bucket for an owner other than myself?
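
For what it's worth, adding a grant for another user (rather than changing
the owner) can be done from boto 2.x; a minimal sketch, where the keys, the
host and the 'other-user' uid are placeholders:

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',      # placeholder
    aws_secret_access_key='SECRET_KEY',  # placeholder
    host='ceph-rgw',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())

bucket = conn.get_bucket('testbucket')
# adds an ACL entry granting FULL_CONTROL to another radosgw user;
# the bucket owner itself is left unchanged
bucket.add_user_grant('FULL_CONTROL', 'other-user')

If I understand correctly, actually reassigning a bucket to a different user
is an admin-side operation (radosgw-admin bucket unlink / bucket link
--uid=... --bucket=...), not something the S3 API exposes.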



Thanks,
Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-08 Thread Patrycja Szabłowska
Thank you Josh,

I'm not sure if this is Ceph's fault or perhaps FastCGI's or boto's.
When trying to upload 10 KB chunks I got those mysterious FastCGI
errors on the server side and boto.exception.BotoServerError: 500
Internal Server Error on the client side.
I've now tried to upload a 4 MB part - the error on the client side was,
as expected, EntityTooSmall - but it was only returned at the end, when I
called complete_upload.

Perhaps this output can be helpful to someone:

$ python boto_multi.py
  begin upload of Bosphorus_1920x1080_30fps_420_8bit_AVC_MP4.mp4
  size 7507998, 2 parts
upload part 1 size 4194304
upload part 2 size 3313694
  end upload
Traceback (most recent call last):
  File "boto_multi.py", line 48, in <module>
    part.complete_upload()
  File "local/lib/python2.7/site-packages/boto/s3/multipart.py", line 319, in complete_upload
    self.id, xml)
  File "local/lib/python2.7/site-packages/boto/s3/bucket.py", line 1779, in complete_multipart_upload
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?><Error><Code>EntityTooSmall</Code></Error>
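
For anyone hitting the same thing, a variant that respects the 5 MB floor
(only the last part may be smaller) looks roughly like this; it assumes
boto 2.x plus the third-party filechunkio package, and the keys, host,
bucket and file name are placeholders:

import math
import os

import boto
from boto.s3.connection import OrdinaryCallingFormat
from filechunkio import FileChunkIO   # helper used in the boto docs

PART_SIZE = 5 * 1024 * 1024           # minimum for every part except the last

conn = boto.connect_s3('ACCESS_KEY', 'SECRET_KEY', host='ceph-rgw',
                       is_secure=False, calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket('testbucket')

path = 'Bosphorus_1920x1080_30fps_420_8bit_AVC_MP4.mp4'
size = os.path.getsize(path)
mp = bucket.initiate_multipart_upload(os.path.basename(path))
for i in range(int(math.ceil(size / float(PART_SIZE)))):
    offset = i * PART_SIZE
    with FileChunkIO(path, 'r', offset=offset,
                     bytes=min(PART_SIZE, size - offset)) as chunk:
        mp.upload_part_from_file(chunk, part_num=i + 1)
mp.complete_upload()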


Cheers,

Patrycja Szabłowska





2014-07-08 0:10 GMT+02:00 Josh Durgin josh.dur...@inktank.com:
 On 07/07/2014 05:41 AM, Patrycja Szabłowska wrote:

 OK, the mystery is solved.

  From https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10368.html
 During a multi part upload you can't upload parts smaller than 5M

 I've tried to upload smaller chunks, like 10KB. I've changed chunk size
 to 5MB and it works now.

 It's a pity that the Ceph docs don't mention the limit (or I couldn't
 find it anywhere). And that the error wasn't helpful at all.


 Glad you figured it out. This is in the S3 docs [1], but the lack of an
 error message is a regression. I added a couple of tickets for this:

 http://tracker.ceph.com/issues/8764
 http://tracker.ceph.com/issues/8766

 Josh

 [1] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-07 Thread Patrycja Szabłowska
I've installed Ubuntu 12.04 in order to test multipart upload with Ceph's
modified fastcgi module (following this:
https://ceph.com/docs/master/install/install-ceph-gateway/#apache-fastcgi-w-100-continue
).

The problem is still the same: I can initiate a multipart upload or
upload a single part, but when trying to PUT a part of a multipart upload I
get the FastCGI error I've shown before.

Here's a part of the error log once again:

== apache.error.log ==
[Fri Jul 04 15:40:41.868621 2014] [fastcgi:error] [pid 14199] [client
127.0.0.1:46571] FastCGI: incomplete headers (0 bytes) received from server
/home/pszablow/ceph/src/htdocs/rgw.fcgi
[Fri Jul 04 15:40:42.571543 2014] [fastcgi:error] [pid 14200]
(111)Connection refused: [client 127.0.0.1:46572] FastCGI: failed to
connect to server /home/pszablow/ceph/src/htdocs/rgw.fcgi: connect()
failed
[Fri Jul 04 15:40:42.571660 2014] [fastcgi:error] [pid 14200] [client
127.0.0.1:46572] FastCGI: incomplete headers (0 bytes) received from server
/home/pszablow/ceph/src/htdocs/rgw.fcgi


It seems to me that it doesn't matter whether I turn rgw print continue on
or off.
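
(For the record: with the stock mod_fastcgi, the gateway docs of this era
suggest turning the 100-continue optimisation off in ceph.conf; a minimal
snippet, assuming the gateway section is named client.radosgw.gateway:

[client.radosgw.gateway]
    rgw print continue = false

With Ceph's patched module it can stay at the default, true. Whether this is
related to the connection-refused errors above is a separate question.)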

I've got no idea what to try next to make it work... Are there any other
pools needed for storing upload parts?


Thanks,

Patrycja Szabłowska



2014-07-04 16:26 GMT+02:00 Patrycja Szabłowska 
szablowska.patry...@gmail.com:

 Still not sure whether I need Ceph's modified fastcgi or not.
 But I guess this explains my problem with the installation:
 http://tracker.ceph.com/issues/8233


 It would be nice to have at least a workaround for this...

 Thanks,

 Patrycja Szabłowska



 2014-07-04 16:02 GMT+02:00 Patrycja Szabłowska 
 szablowska.patry...@gmail.com:

 Thank you Luis for your response.

 Quite unbelievable, but your solution worked!
 Unfortunately, I'm stuck again when trying to upload parts of the file.

 Apache's logs:


 == apache.access.log ==
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT /bucketbig/ HTTP/1.1 200
 477 {Referer}i Boto/2.30.0 Python/2.7.6 Linux/3.13.0-30-generic
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] POST
 /bucketbig/Bosphorus?uploads HTTP/1.1 200 249 {Referer}i Boto/2.30.0
 Python/2.7.6 Linux/3.13.0-30-generic

 == apache.error.log ==
 [Fri Jul 04 15:40:41.868621 2014] [fastcgi:error] [pid 14199] [client
 127.0.0.1:46571] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi

 == apache.access.log ==
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT
 /bucketbig/Bosphorus?uploadId=2/fURJChPdpUqA3Z1oVLUjT7ROsnxIqZ9partNumber=1
 HTTP/1.1 500 531 {Referer}i Boto/2.30.0 Python/2.7.6
 Linux/3.13.0-30-generic

 == apache.error.log ==
 [Fri Jul 04 15:40:42.571543 2014] [fastcgi:error] [pid 14200]
 (111)Connection refused: [client 127.0.0.1:46572] FastCGI: failed to
 connect to server /home/pszablow/ceph/src/htdocs/rgw.fcgi: connect()
 failed
 [Fri Jul 04 15:40:42.571660 2014] [fastcgi:error] [pid 14200] [client
 127.0.0.1:46572] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi



 I'm using the default fastcgi module, not the one provided by Ceph. I've
 tried installing it on my ubuntu 14.04, but unfortunately I keep getting
 the error:

 libapache2-mod-fastcgi : requires: apache2.2-common (>= 2.2.4)


 Is the modified fastcgi module mandatory in order to use multi part
 upload?


 Thanks,

 Patrycja Szabłowska


 2014-07-03 18:34 GMT+02:00 Luis Periquito luis.periqu...@ocado.com:

 I was at this issue this morning. It seems radosgw requires you to have
 a pool named '' to work with multipart. I just created a pool with that
 name
 rados mkpool ''

 either that or allow the pool be created by the radosgw...


 On 3 July 2014 16:27, Patrycja Szabłowska szablowska.patry...@gmail.com
  wrote:

 Hi,

 I'm trying to make multi part upload work. I'm using ceph
 0.80-702-g9bac31b (from the ceph's github).

 I've tried the code provided by Mark Kirkwood here:


 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html


 But unfortunately, it gives me the error:

 (multitest)pszablow@pat-desktop:~/$ python boto_multi.py
   begin upload of abc.yuv
   size 746496, 7 parts
 Traceback (most recent call last):
   File boto_multi.py, line 36, in module
 part = bucket.initiate_multipart_upload(objname)
   File
 /home/pszablow/venvs/multitest/local/lib/python2.7/site-packages/boto/s3/bucket.py,
 line 1742, in initiate_multipart_upload
 response.status, response.reason, body)
 boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
 ?xml version=1.0
 encoding=UTF-8?ErrorCodeAccessDenied/Code/Error


 The single part upload works for me. I am able to create buckets and
 objects.
 I've tried also other similar examples, but none of them works.


 Any ideas what's wrong? Does the ceph's multi part upload actually
 work for anybody?


 Thanks,

 Patrycja Szabłowska
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com

Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-07 Thread Patrycja Szabłowska
OK, the mystery is solved.

From https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10368.html
During a multi part upload you can't upload parts smaller than 5M

I've tried to upload smaller chunks, like 10KB. I've changed chunk size to
5MB and it works now.

It's a pity that the Ceph docs don't mention the limit (or I couldn't
find it anywhere). And that the error wasn't helpful at all.
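
For completeness, a tiny client-side check that mirrors the limit (plain
Python, sizes in bytes, nothing radosgw-specific assumed):

MIN_PART = 5 * 1024 * 1024

def check_part_sizes(part_sizes):
    # every part except the last must be at least 5 MB
    for size in part_sizes[:-1]:
        if size < MIN_PART:
            raise ValueError('part of %d bytes is below the 5 MB minimum' % size)

check_part_sizes([5 * 1024 * 1024, 3313694])   # fine: only the last part is small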


Cheers,

Patrycja Szabłowska




2014-07-07 14:05 GMT+02:00 Patrycja Szabłowska 
szablowska.patry...@gmail.com:

 I've installed Ubuntu 12.04 in order to test multiupload with the ceph's
 modified fastcgi (according to this:
 https://ceph.com/docs/master/install/install-ceph-gateway/#apache-fastcgi-w-100-continue
 ).

 The problem is still the same: I can initiate the multi part upload or
 upload single part, but when trying to put a part of multi part upload I
 get the error with Fastcgi as I've shown before.

  Here's a part of the error log once again:


 == apache.error.log ==
 [Fri Jul 04 15:40:41.868621 2014] [fastcgi:error] [pid 14199] [client
 127.0.0.1:46571] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi
 [Fri Jul 04 15:40:42.571543 2014] [fastcgi:error] [pid 14200]
 (111)Connection refused: [client 127.0.0.1:46572] FastCGI: failed to
 connect to server /home/pszablow/ceph/src/htdocs/rgw.fcgi: connect()
 failed
 [Fri Jul 04 15:40:42.571660 2014] [fastcgi:error] [pid 14200] [client
 127.0.0.1:46572] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi


 It seems to me that it doesn't matter if I turn the rgw print continue on
 or off.

 I've got no idea what to try next to make it work... Are there any other
 pools needed to put parts in upload?


 Thanks,

 Patrycja Szabłowska



 2014-07-04 16:26 GMT+02:00 Patrycja Szabłowska 
 szablowska.patry...@gmail.com:

 Still not sure whether I need Ceph's modified fastcgi or not.
 But I guess this explains my problem with the installation:
 http://tracker.ceph.com/issues/8233


 It would be nice to have at least a workaround for this...

 Thanks,

 Patrycja Szabłowska



 2014-07-04 16:02 GMT+02:00 Patrycja Szabłowska 
 szablowska.patry...@gmail.com:

 Thank you Luis for your response.

 Quite unbelievable, but your solution worked!
 Unfortunately, I'm stuck again when trying to upload parts of the file.

 Apache's logs:


 == apache.access.log ==
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT /bucketbig/ HTTP/1.1
 200 477 {Referer}i Boto/2.30.0 Python/2.7.6 Linux/3.13.0-30-generic
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] POST
 /bucketbig/Bosphorus?uploads HTTP/1.1 200 249 {Referer}i Boto/2.30.0
 Python/2.7.6 Linux/3.13.0-30-generic

 == apache.error.log ==
 [Fri Jul 04 15:40:41.868621 2014] [fastcgi:error] [pid 14199] [client
 127.0.0.1:46571] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi

 == apache.access.log ==
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT
 /bucketbig/Bosphorus?uploadId=2/fURJChPdpUqA3Z1oVLUjT7ROsnxIqZ9partNumber=1
 HTTP/1.1 500 531 {Referer}i Boto/2.30.0 Python/2.7.6
 Linux/3.13.0-30-generic

 == apache.error.log ==
 [Fri Jul 04 15:40:42.571543 2014] [fastcgi:error] [pid 14200]
 (111)Connection refused: [client 127.0.0.1:46572] FastCGI: failed to
 connect to server /home/pszablow/ceph/src/htdocs/rgw.fcgi: connect()
 failed
 [Fri Jul 04 15:40:42.571660 2014] [fastcgi:error] [pid 14200] [client
 127.0.0.1:46572] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi



 I'm using the default fastcgi module, not the one provided by Ceph. I've
 tried installing it on my ubuntu 14.04, but unfortunately I keep getting
 the error:

 libapache2-mod-fastcgi : requires: apache2.2-common (>= 2.2.4)


 Is the modified fastcgi module mandatory in order to use multi part
 upload?


 Thanks,

 Patrycja Szabłowska


 2014-07-03 18:34 GMT+02:00 Luis Periquito luis.periqu...@ocado.com:

 I was at this issue this morning. It seems radosgw requires you to have
 a pool named '' to work with multipart. I just created a pool with that
 name
 rados mkpool ''

 either that or allow the pool be created by the radosgw...


 On 3 July 2014 16:27, Patrycja Szabłowska 
 szablowska.patry...@gmail.com wrote:

 Hi,

 I'm trying to make multi part upload work. I'm using ceph
 0.80-702-g9bac31b (from the ceph's github).

 I've tried the code provided by Mark Kirkwood here:


 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html


 But unfortunately, it gives me the error:

 (multitest)pszablow@pat-desktop:~/$ python boto_multi.py
   begin upload of abc.yuv
   size 746496, 7 parts
 Traceback (most recent call last):
   File boto_multi.py, line 36, in module
 part = bucket.initiate_multipart_upload(objname)
   File
 /home/pszablow/venvs/multitest/local/lib/python2.7/site-packages/boto/s3/bucket.py,
 line 1742, in initiate_multipart_upload

Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-04 Thread Patrycja Szabłowska
Thank you Luis for your response.

Quite unbelievable, but your solution worked!
Unfortunately, I'm stuck again when trying to upload parts of the file.

Apache's logs:


== apache.access.log ==
127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT /bucketbig/ HTTP/1.1 200
477 {Referer}i Boto/2.30.0 Python/2.7.6 Linux/3.13.0-30-generic
127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] POST
/bucketbig/Bosphorus?uploads HTTP/1.1 200 249 {Referer}i Boto/2.30.0
Python/2.7.6 Linux/3.13.0-30-generic

== apache.error.log ==
[Fri Jul 04 15:40:41.868621 2014] [fastcgi:error] [pid 14199] [client
127.0.0.1:46571] FastCGI: incomplete headers (0 bytes) received from server
/home/pszablow/ceph/src/htdocs/rgw.fcgi

== apache.access.log ==
127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT
/bucketbig/Bosphorus?uploadId=2/fURJChPdpUqA3Z1oVLUjT7ROsnxIqZ9&partNumber=1
HTTP/1.1 500 531 {Referer}i Boto/2.30.0 Python/2.7.6
Linux/3.13.0-30-generic

== apache.error.log ==
[Fri Jul 04 15:40:42.571543 2014] [fastcgi:error] [pid 14200]
(111)Connection refused: [client 127.0.0.1:46572] FastCGI: failed to
connect to server /home/pszablow/ceph/src/htdocs/rgw.fcgi: connect()
failed
[Fri Jul 04 15:40:42.571660 2014] [fastcgi:error] [pid 14200] [client
127.0.0.1:46572] FastCGI: incomplete headers (0 bytes) received from server
/home/pszablow/ceph/src/htdocs/rgw.fcgi



I'm using the default fastcgi module, not the one provided by Ceph. I've
tried installing the latter on my Ubuntu 14.04, but unfortunately I keep
getting the error:
libapache2-mod-fastcgi : requires: apache2.2-common (>= 2.2.4)


Is the modified fastcgi module mandatory in order to use multipart upload?


Thanks,

Patrycja Szabłowska


2014-07-03 18:34 GMT+02:00 Luis Periquito luis.periqu...@ocado.com:

 I was at this issue this morning. It seems radosgw requires you to have a
 pool named '' to work with multipart. I just created a pool with that name
 rados mkpool ''

 either that or allow the pool be created by the radosgw...


 On 3 July 2014 16:27, Patrycja Szabłowska szablowska.patry...@gmail.com
 wrote:

 Hi,

 I'm trying to make multi part upload work. I'm using ceph
 0.80-702-g9bac31b (from the ceph's github).

 I've tried the code provided by Mark Kirkwood here:


 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html


 But unfortunately, it gives me the error:

 (multitest)pszablow@pat-desktop:~/$ python boto_multi.py
   begin upload of abc.yuv
   size 746496, 7 parts
 Traceback (most recent call last):
   File "boto_multi.py", line 36, in <module>
     part = bucket.initiate_multipart_upload(objname)
   File "/home/pszablow/venvs/multitest/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 1742, in initiate_multipart_upload
     response.status, response.reason, body)
 boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
 <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>


 The single part upload works for me. I am able to create buckets and
 objects.
 I've tried also other similar examples, but none of them works.


 Any ideas what's wrong? Does the ceph's multi part upload actually
 work for anybody?


 Thanks,

 Patrycja Szabłowska
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --

 Luis Periquito

 Unix Engineer

 Ocado.com http://www.ocado.com/

 Head Office, Titan Court, 3 Bishop Square, Hatfield Business Park,
 Hatfield, Herts AL10 9NE





-- 
Regards
Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-04 Thread Patrycja Szabłowska
Still not sure whether I need Ceph's modified fastcgi or not.
But I guess this explains my problem with the installation:
http://tracker.ceph.com/issues/8233


It would be nice to have at least a workaround for this...

Thanks,

Patrycja Szabłowska



2014-07-04 16:02 GMT+02:00 Patrycja Szabłowska 
szablowska.patry...@gmail.com:

 Thank you Luis for your response.

 Quite unbelievable, but your solution worked!
 Unfortunately, I'm stuck again when trying to upload parts of the file.

 Apache's logs:


 == apache.access.log ==
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT /bucketbig/ HTTP/1.1 200
 477 {Referer}i Boto/2.30.0 Python/2.7.6 Linux/3.13.0-30-generic
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] POST
 /bucketbig/Bosphorus?uploads HTTP/1.1 200 249 {Referer}i Boto/2.30.0
 Python/2.7.6 Linux/3.13.0-30-generic

 == apache.error.log ==
 [Fri Jul 04 15:40:41.868621 2014] [fastcgi:error] [pid 14199] [client
 127.0.0.1:46571] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi

 == apache.access.log ==
 127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] PUT
 /bucketbig/Bosphorus?uploadId=2/fURJChPdpUqA3Z1oVLUjT7ROsnxIqZ9partNumber=1
 HTTP/1.1 500 531 {Referer}i Boto/2.30.0 Python/2.7.6
 Linux/3.13.0-30-generic

 == apache.error.log ==
 [Fri Jul 04 15:40:42.571543 2014] [fastcgi:error] [pid 14200]
 (111)Connection refused: [client 127.0.0.1:46572] FastCGI: failed to
 connect to server /home/pszablow/ceph/src/htdocs/rgw.fcgi: connect()
 failed
 [Fri Jul 04 15:40:42.571660 2014] [fastcgi:error] [pid 14200] [client
 127.0.0.1:46572] FastCGI: incomplete headers (0 bytes) received from
 server /home/pszablow/ceph/src/htdocs/rgw.fcgi



 I'm using the default fastcgi module, not the one provided by Ceph. I've
 tried installing it on my ubuntu 14.04, but unfortunately I keep getting
 the error:

 libapache2-mod-fastcgi : requires: apache2.2-common (>= 2.2.4)


 Is the modified fastcgi module mandatory in order to use multi part upload?


 Thanks,

 Patrycja Szabłowska


 2014-07-03 18:34 GMT+02:00 Luis Periquito luis.periqu...@ocado.com:

 I was at this issue this morning. It seems radosgw requires you to have a
 pool named '' to work with multipart. I just created a pool with that name
 rados mkpool ''

 either that or allow the pool be created by the radosgw...


 On 3 July 2014 16:27, Patrycja Szabłowska szablowska.patry...@gmail.com
 wrote:

 Hi,

 I'm trying to make multi part upload work. I'm using ceph
 0.80-702-g9bac31b (from the ceph's github).

 I've tried the code provided by Mark Kirkwood here:


 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html


 But unfortunately, it gives me the error:

 (multitest)pszablow@pat-desktop:~/$ python boto_multi.py
   begin upload of abc.yuv
   size 746496, 7 parts
 Traceback (most recent call last):
   File "boto_multi.py", line 36, in <module>
     part = bucket.initiate_multipart_upload(objname)
   File "/home/pszablow/venvs/multitest/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 1742, in initiate_multipart_upload
     response.status, response.reason, body)
 boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
 <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>


 The single part upload works for me. I am able to create buckets and
 objects.
 I've tried also other similar examples, but none of them works.


 Any ideas what's wrong? Does the ceph's multi part upload actually
 work for anybody?


 Thanks,

 Patrycja Szabłowska
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --

 Luis Periquito

 Unix Engineer

 Ocado.com http://www.ocado.com/

 Head Office, Titan Court, 3 Bishop Square, Hatfield Business Park,
 Hatfield, Herts AL10 9NE





 --
 Regards
 Patrycja Szabłowska

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-03 Thread Patrycja Szabłowska
Hi,

I'm trying to make multipart upload work. I'm using ceph
0.80-702-g9bac31b (built from Ceph's GitHub).

I've tried the code provided by Mark Kirkwood here:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html


But unfortunately, it gives me the error:

(multitest)pszablow@pat-desktop:~/$ python boto_multi.py
  begin upload of abc.yuv
  size 746496, 7 parts
Traceback (most recent call last):
  File "boto_multi.py", line 36, in <module>
    part = bucket.initiate_multipart_upload(objname)
  File "/home/pszablow/venvs/multitest/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 1742, in initiate_multipart_upload
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>


Single-part upload works for me; I am able to create buckets and objects.
I've also tried other similar examples, but none of them work.


Any ideas what's wrong? Does Ceph's multipart upload actually
work for anybody?
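
(Following up from the replies above about the empty-named pool: a read-only
check via the python-rados bindings, assuming a readable /etc/ceph/ceph.conf
and the client.admin keyring:)

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# list the pools and check for the empty-named pool mentioned in
# Luis's workaround quoted above
print sorted(cluster.list_pools())
print 'empty-named pool present:', cluster.pool_exists('')
cluster.shutdown()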


Thanks,

Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Radosgw fastcgi problem - comm with server /var/www/s3gw.fcgi aborted: idle timeout (30 sec)

2014-06-17 Thread Patrycja Szabłowska
Hi,

I've got a problem with uploading files into buckets I've created.
I'm using the Python boto client for my tests.

My code creates the bucket and then uploads the file. The Apache
server is working, the bucket gets created, and I can access it via a
browser.
However, when trying to upload files the client hangs and then reports
a 500 response.

Here's the apache log:


[Tue Jun 17 11:28:09 2014] [error] [client 192.168.X.X] FastCGI: comm
with server /var/www/s3gw.fcgi aborted: idle timeout (30 sec)
[Tue Jun 17 11:28:09 2014] [error] [client 192.168.X.X] FastCGI:
incomplete headers (0 bytes) received from server /var/www/s3gw.fcgi


Somebody here had a similar problem, but there was no solution:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-September/004489.html

And the MTU on my ceph server is 1400.


My config is from ceph-ansible, but I've also tried this:
http://ceph.com/docs/master/radosgw/config/
The code and the cluster were previously working OK on this host, but
they stopped working after my experiments (I created more zones and
regions, and then deleted them without touching the default region).


Perhaps somebody had a similar problem and managed to fix it...
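
(A small aside on the 30 second figure: it is mod_fastcgi's default
-idle-timeout, which can be raised on the FastCgiExternalServer line while
debugging; the socket path below is just the one from the standard gateway
examples:

FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock -idle-timeout 120

Raising it only helps if radosgw is actually responding slowly, though; if
the daemon is down or listening on a different socket the request will still
fail, so it is worth checking that radosgw is running and that the socket
path in the Apache config matches the one in ceph.conf.)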

Here's also the boto log (if relevant).

Tue, 17 Jun 2014 09:00:57 GMT
/ceph-test/
2014-06-17 11:00:57,165 boto [DEBUG]:Signature:
AWS UZWH6AMGXZIWU4QH60TC:BTleaE7rREKPph3umD9g4442/a4=
2014-06-17 11:00:57,176 boto [DEBUG]:path=/ceph-test/abc
2014-06-17 11:00:57,177 boto [DEBUG]:auth_path=/ceph-test/abc
2014-06-17 11:00:57,177 boto [DEBUG]:Method: PUT
2014-06-17 11:00:57,177 boto [DEBUG]:Path: /ceph-test/abc
2014-06-17 11:00:57,177 boto [DEBUG]:Data:
2014-06-17 11:00:57,177 boto [DEBUG]:Headers: {'Content-Length': '5',
'Content-MD5': 'j0kMILqPh1g3YDpNbEt3HA==', 'Content-Type':
'application/octet-stream', 'Expect': '100-Continue', 'User-Agent':
'Boto/2.27.0 Python/2.6.6 Linux/2.6.32-431.el6.x86_64'}
2014-06-17 11:00:57,179 boto [DEBUG]:Host: Y
2014-06-17 11:00:57,179 boto [DEBUG]:Port: 80
2014-06-17 11:00:57,179 boto [DEBUG]:Params: {}
2014-06-17 11:00:57,179 boto [DEBUG]:establishing HTTP connection:
kwargs={'port': 80, 'timeout': 100}
2014-06-17 11:00:57,179 boto [DEBUG]:Token: None
2014-06-17 11:00:57,180 boto [DEBUG]:StringToSign:
PUT
j0kMILqPh1g3YDpNbEt3HA==
application/octet-stream
Tue, 17 Jun 2014 09:00:57 GMT
/ceph-test/abc
2014-06-17 11:00:57,180 boto [DEBUG]:Signature:
AWS UZWH6AMGXZIWU4QH60TC:rNQ+0GjYV0iMMNLrdJHFI2muL4M=
2014-06-17 11:01:27,215 boto [DEBUG]:Received 500 response.  Retrying
in 0.8 seconds
2014-06-17 11:01:28,001 boto [DEBUG]:Token: None
2014-06-17 11:01:28,001 boto [DEBUG]:StringToSign:
PUT


Thanks

Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw-agent - syncing two zones and regions

2014-06-05 Thread Patrycja Szabłowska
Hi,


I'm new to Ceph and I'm trying to understand how replicating data
between two regions/zones works.

I've read this http://ceph.com/docs/master/radosgw/federated-config/
and this 
http://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/
and tried that http://blog.kri5.fr/?p=21

Here are my thoughts; please let me know if I'm wrong.

1. Data can be synchronized between two zones - a master and a slave.
Can it be reversed? Let's say the master fails and I want to use the
slave as the replacement.
2. Data can't be synchronized between two regions. Two regions can
only share metadata.
Why is that? I know Python and I've seen that radosgw-agent throws an
exception in this case ('data sync can only occur between zones in the
same region'). But I'm curious what the reason for this restriction is. Are
there any plans to change this behaviour?
3. One cluster can live in many datacenters, but only when they are
close and the latency is low.

Unfortunately I haven't found any other resources about multi-region
config. Perhaps someone could recommend some other resources to
read...


Thanks,

Patrycja Szabłowska
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com