[ceph-users] One rados account, more S3 API keys

2013-08-22 Thread Mihály Árva-Tóth
Hello, Is there any way for a single radosgw user to have more than one access/secret key? Thank you, Mihaly
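
One approach that usually comes up for this: radosgw-admin can attach additional S3 key pairs to an existing user. A minimal sketch, assuming a made-up uid of "mihaly" (check radosgw-admin help on your version for the exact flags):

    # generate a second S3 access/secret pair for an existing user (uid is hypothetical)
    radosgw-admin key create --uid=mihaly --key-type=s3 --gen-access-key --gen-secret

    # the user's full key list shows up in the "keys" array here
    radosgw-admin user info --uid=mihaly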

Re: [ceph-users] failing on 0.67.1 radosgw install

2013-08-22 Thread Fuchs, Andreas (SwissTXT)
My apache conf is as follows:

    cat /etc/apache2/httpd.conf
    ServerName radosgw01.swisstxt.ch

    cat /etc/apache2/sites-enabled/000_radosgw
    <VirtualHost *:80>
    ServerName *.radosgw01.swisstxt.ch
    # ServerAdmin {email.address}
    ServerAdmin serviced...@swisstxt.ch
    DocumentRoot
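
As an aside on the vhost above: Apache's ServerName takes a single hostname, and wildcard host matching is normally done with ServerAlias. A hedged sketch of how that block is commonly written (the DocumentRoot value is an assumption, since the original post is cut off there; the rest is taken from the post):

    <VirtualHost *:80>
        ServerName radosgw01.swisstxt.ch
        # wildcard bucket-style hostnames go in ServerAlias, not ServerName
        ServerAlias *.radosgw01.swisstxt.ch
        ServerAdmin serviced...@swisstxt.ch
        # assumed path
        DocumentRoot /var/www
    </VirtualHost>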

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-22 Thread Pavel Timoschenkov
Hi. With this patch everything is OK. Thanks for the help!

[ceph-users] Re: OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

2013-08-22 Thread HURTEVENT VINCENT
Hi Josh, thank you for your answer, but I was on Bobtail, so there was no listwatchers command :) I scheduled a reboot of the affected compute nodes and everything went fine afterwards. I have since updated Ceph to the latest stable release, though.
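
For anyone hitting this on a release that does have it, the check in question is the rados listwatchers command, roughly like this (pool and object names below are placeholders; for format-1 RBD images the header object is named <image-name>.rbd):

    # list clients still holding a watch on an RBD image header object
    rados -p volumes listwatchers volume-00000001.rbd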

[ceph-users] radosgw crash

2013-08-22 Thread Pawel Stefanski
Hello! Today our radosgw crashed while running multiple deletions via the S3 API. Is this a known bug?

    POST WSTtobXBlBrm2r78B67LtQ== Thu, 22 Aug 2013 11:38:34 GMT /inna-a/?delete -11
    2013-08-22 13:39:26.650499 7f36347d8700 2 req 95:0.000555:s3:POST /inna-a/:multi_object_delete:reading
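
For context, the multi-object delete the log line refers to is a single S3 POST to the bucket with an XML body listing the keys to remove; schematically it looks something like this (host and keys are illustrative only):

    POST /inna-a/?delete HTTP/1.1
    Host: s3.example.com
    Content-MD5: <base64 MD5 of the body>

    <Delete>
      <Object><Key>example-object-1</Key></Object>
      <Object><Key>example-object-2</Key></Object>
    </Delete>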

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-22 Thread Alfredo Deza
On Thu, Aug 22, 2013 at 4:36 AM, Pavel Timoschenkov pa...@bayonetteas.onmicrosoft.com wrote: "Hi. With this patch everything is OK. Thanks for the help!" Thanks for confirming this. I have opened a ticket (http://tracker.ceph.com/issues/6085) and will work on getting this patch merged.

Re: [ceph-users] RBD hole punching

2013-08-22 Thread Mike Lowe
There is TRIM/discard support and I use it with some success. There are some details at http://ceph.com/docs/master/rbd/qemu-rbd/. The one caveat I have is that I've sometimes been able to crash an OSD by doing fstrim inside a guest. On Aug 22, 2013, at 10:24 AM, Guido Winkelmann
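
The page above shows discard being wired up on the QEMU side; a minimal sketch of the idea (image spec and memory size are placeholders, and the exact options depend on your QEMU version and device model):

    # expose an RBD image with a discard granularity so the guest can issue TRIM
    qemu-system-x86_64 -m 1024 \
        -drive format=raw,file=rbd:rbd/myimage,id=drive1,if=none \
        -device ide-hd,drive=drive1,discard_granularity=512

    # then, inside the guest, reclaim freed space on a mounted filesystem
    fstrim -v /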

Re: [ceph-users] Failed to create a single mon using ceph-deploy mon create **

2013-08-22 Thread SOLO
And here is my ceph.log . . .

    [ceph@cephadmin my-clusters]$ less ceph.log
    2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Creating new cluster named ceph
    2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Resolving host cephs1
    2013-08-22 09:01:27,382 ceph_deploy.new DEBUG Monitor cephs1 at 10.2.9.223

[ceph-users] bucket count limit

2013-08-22 Thread Mostowiec Dominik
Hi, I am thinking about sharding S3 buckets in a Ceph cluster: creating one bucket per XX (256 buckets) or even one bucket per XXX (4096 buckets), where XX/XXX are the leading hex characters of the MD5 of the object URL. Could this be a problem (performance, or some limits)? -- Regards Dominik
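
A tiny sketch of the naming scheme being described (names are made up; the point is just that the bucket is picked from a fixed-width hex prefix of the key's MD5):

    # derive the shard bucket from the first two hex characters of the key's MD5
    key="images/cat.jpg"
    shard=$(printf '%s' "$key" | md5sum | cut -c1-2)
    bucket="data-${shard}"        # one of data-00 .. data-ff (256 buckets)
    echo "$key -> $bucket"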

[ceph-users] Failed to create a single mon using ceph-deploy mon create **

2013-08-22 Thread SOLO
Hi! I am trying Ceph on RHEL 6.4. My Ceph version is Cuttlefish. I followed the intro and ran ceph-deploy new .. and ceph-deploy install .. --stable cuttlefish. No error appeared up to that point. Then I typed ceph-deploy mon create .. and got the error below . . .
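
For reference, the sequence being described looks roughly like this, with HOST standing in for the hostnames elided above (the final gatherkeys step is the usual follow-up, but it is an assumption here, not something from the post):

    ceph-deploy new HOST
    ceph-deploy install --stable cuttlefish HOST
    ceph-deploy mon create HOST
    # usually run once the monitor is up, to fetch the bootstrap keyrings
    ceph-deploy gatherkeys HOST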

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-22 Thread Mark Nelson
For what it's worth, I was still seeing some small sequential write degradation with kernel RBD with dumpling, though random writes were not consistently slower in the testing I did. There was also some variation in performance between 0.61.2 and 0.61.7 likely due to the workaround we had to

Re: [ceph-users] Failed to create a single mon using ceph-deploy mon create **

2013-08-22 Thread Alfredo Deza
On Wed, Aug 21, 2013 at 10:05 PM, SOLO sol...@foxmail.com wrote: Hi! I am trying Ceph on RHEL 6.4. My Ceph version is Cuttlefish. I followed the intro and ran ceph-deploy new .. and ceph-deploy install .. --stable cuttlefish. No error appeared up to that point. And then I typed ceph-deploy

[ceph-users] Snapshot a KVM VM with RBD backend and libvirt

2013-08-22 Thread Tobias Brunner
Hi, I'm trying to create a snapshot of a KVM VM:

    # virsh snapshot-create one-5
    error: unsupported configuration: internal checkpoints require at least one disk to be selected for snapshot

RBD should support such snapshots, according to the wiki:
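
One workaround that is often suggested (an assumption about this setup, nothing confirmed in the thread) is to snapshot the disk at the RBD layer instead of through libvirt's internal checkpoints; note this gives a crash-consistent snapshot unless the guest is quiesced first:

    # pool, image and snapshot names below are placeholders
    rbd snap create mypool/myimage@snap1
    rbd snap ls mypool/myimage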

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Oliver Daudey
Hey Greg, I encountered a similar problem and we're just in the process of tracking it down here on the list. Try downgrading your OSD-binaries to 0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD, you're probably experiencing the same problem I have with Dumpling. PS: Only

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Gregory Farnum
On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey oli...@xs4all.nl wrote: Hey Greg, I encountered a similar problem and we're just in the process of tracking it down here on the list. Try downgrading your OSD-binaries to 0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD,

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Oliver Daudey
Hey Greg, Thanks for the tip! I was assuming that a clean shutdown of the OSD would flush the journal for you and have the OSD exit with its data store in a clean state? Otherwise, I would first have to stop updates to that particular OSD, then flush the journal, then stop it? Regards,
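
For reference, an explicit journal flush is normally done with the OSD already stopped, roughly like this (the OSD id and the init command are placeholders for whatever your distribution uses):

    # stop the OSD, then flush its journal into the object store
    service ceph stop osd.12
    ceph-osd -i 12 --flush-journal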

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Oliver Daudey
Hey Greg, I didn't know about that option, but I'm always careful to downgrade and upgrade the OSDs one by one and wait for the cluster to report healthy again before proceeding to the next one, so, as you said, the chances of losing data should have been minimal. I will flush the journals too next time.
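
A small sketch of the wait-until-healthy step (purely illustrative; assumes the ceph CLI and an admin keyring are available on the host):

    # block until the cluster reports HEALTH_OK before touching the next OSD
    until ceph health | grep -q HEALTH_OK; do
        sleep 10
    done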

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Greg Poirier
On Thu, Aug 22, 2013 at 2:34 PM, Gregory Farnum g...@inktank.com wrote: "You don't appear to have accounted for the 2x replication (where all writes go to two OSDs) in these calculations. I assume your pools have ..." Ah. Right. So I should then be looking at: # OSDs * Throughput per disk / 2 /
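
Working that adjusted formula through with made-up numbers (24 OSDs at roughly 100 MB/s each, 2x replication, and the usual factor of 2 for journals co-located on the data disks), reading the truncated formula as throughput / 2 (journal) / replication:

    # 24 * 100 / 2 (journal) / 2 (replication) = 600 MB/s as a rough aggregate ceiling
    echo $(( 24 * 100 / 2 / 2 ))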

[ceph-users] Network failure scenarios

2013-08-22 Thread Keith Phua
Hi, It was mentioned on the devel mailing list that in a two-network setup, if the cluster network fails, the cluster behaves pretty badly. Ref: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12285/match=cluster+network+fail May I know if this problem still exists in cuttlefish or
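
For readers unfamiliar with the setup being discussed, the "2 networks" refers to the ceph.conf split between client-facing and replication traffic, roughly like this (the subnets are placeholders):

    [global]
        # client <-> OSD/monitor traffic
        public network = 192.168.1.0/24
        # OSD <-> OSD replication and heartbeat traffic
        cluster network = 10.0.0.0/24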