Hello,
Is there any way for one radosgw user to have more than one
access/secret key pair?
Thank you,
Mihaly
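(Not an authoritative answer, just a sketch of what I believe should work: radosgw-admin can generate an additional S3 key pair for an existing user. The uid below is made up.)

radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret
radosgw-admin user info --uid=johndoe    # the "keys" array should now list both key pairs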
My Apache conf is as follows:
cat /etc/apache2/httpd.conf
ServerName radosgw01.swisstxt.ch
cat /etc/apache2/sites-enabled/000_radosgw
<VirtualHost *:80>
ServerName *.radosgw01.swisstxt.ch
# ServerAdmin {email.address}
ServerAdmin serviced...@swisstxt.ch
DocumentRoot
Hi.
With this patch, everything is OK.
Thanks for the help!
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 21, 2013 7:16 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk
On Wed,
Hi Josh,
thank you for your answer, but I was on Bobtail, so there was no listwatchers command :)
I scheduled a reboot of the affected compute nodes and everything went fine afterwards. I updated
Ceph to the latest stable release, though.
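(For reference, on releases that do have it, listing the watchers on an RBD image's header object looks roughly like this; the pool and image names are made up, and format 2 images use an rbd_header.<id> object instead.)

rados -p rbd listwatchers myimage.rbd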
From: Josh Durgin [josh.dur...@inktank.com]
Sent:
hello!
Today our radosgw crashed while running multiple deletions via the S3 API.
Is this a known bug?
POST
WSTtobXBlBrm2r78B67LtQ==
Thu, 22 Aug 2013 11:38:34 GMT
/inna-a/?delete
-11> 2013-08-22 13:39:26.650499 7f36347d8700 2 req 95:0.000555:s3:POST /inna-a/:multi_object_delete:reading
On Thu, Aug 22, 2013 at 4:36 AM, Pavel Timoschenkov
pa...@bayonetteas.onmicrosoft.com wrote:
Hi.
With this patch, everything is OK.
Thanks for the help!
Thanks for confirming this. I have opened a ticket
(http://tracker.ceph.com/issues/6085) and will work on getting this
patch merged.
There is TRIM/discard support, and I use it with some success. There are some
details here: http://ceph.com/docs/master/rbd/qemu-rbd/
The one caveat I have is that I've sometimes been able to crash an OSD by running fstrim inside a guest.
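(The guest-side piece is just plain fstrim; a minimal sketch, with the mount point made up:)

sudo fstrim -v /mnt/data
# or mount the filesystem with the discard option for online trimming instead of periodic fstrim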
On Aug 22, 2013, at 10:24 AM, Guido Winkelmann
And here is my ceph.log
.
.
.
[ceph@cephadmin my-clusters]$ less ceph.log
2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Creating new cluster named ceph
2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Resolving host cephs1
2013-08-22 09:01:27,382 ceph_deploy.new DEBUG Monitor cephs1 at 10.2.9.223
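(For context, the lines above come from the "new" step; a sketch of the command as I understand it, with cephs1 taken from the log:)

ceph-deploy new cephs1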
Hi,
I am thinking about sharding S3 buckets in a Ceph cluster: creating one bucket per XX
(256 buckets) or even one bucket per XXX (4096 buckets), where XX/XXX are the leading
hex characters of the MD5 of the object URL.
Could this be a problem (performance, or some limits)?
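For illustration, the bucket selection I have in mind would be roughly this (a sketch; the key and bucket naming are made up):

key="images/2013/08/22/photo-1234.jpg"
prefix=$(printf '%s' "$key" | md5sum | cut -c1-2)   # first two hex chars -> 256 buckets
echo "s3://objects-${prefix}/${key}"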
--
Regards
Dominik
Hi!
I am trying Ceph on RHEL 6.4.
My Ceph version is Cuttlefish.
I followed the intro and ran ceph-deploy new .. and ceph-deploy install ..
--stable cuttlefish.
There was no error up to this point.
Then I typed ceph-deploy mon create ..
and the error below came up
.
.
.
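(For reference, the commands I ran were roughly the following; node1 here is a placeholder for my actual host name:)

ceph-deploy new node1
ceph-deploy install --stable cuttlefish node1
ceph-deploy mon create node1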
For what it's worth, I was still seeing some small sequential write
degradation with kernel RBD on dumpling, though random writes were not
consistently slower in the testing I did. There was also some variation
in performance between 0.61.2 and 0.61.7, likely due to the workaround we
had to
On Wed, Aug 21, 2013 at 10:05 PM, SOLO sol...@foxmail.com wrote:
Hi!
I am trying Ceph on RHEL 6.4.
My Ceph version is Cuttlefish.
I followed the intro and ran ceph-deploy new .. and ceph-deploy install ..
--stable cuttlefish.
There was no error up to this point.
And then I typed ceph-deploy
Hi,
I'm trying to create a snapshot of a KVM VM:
# virsh snapshot-create one-5
error: unsupported configuration: internal checkpoints require at least
one disk to be selected for snapshot
RBD should support such snapshots, according to the wiki:
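(Not a libvirt-level fix, just a sketch of a possible workaround at the RBD layer; the pool and image names are made up, and the guest filesystem should be quiesced first for consistency:)

rbd snap create rbd/one-5-disk-0@snap1
rbd snap ls rbd/one-5-disk-0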
Hey Greg,
I encountered a similar problem and we're just in the process of
tracking it down here on the list. Try downgrading your OSD-binaries to
0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD,
you're probably experiencing the same problem I have with Dumpling.
PS: Only
On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey oli...@xs4all.nl wrote:
Hey Greg,
I encountered a similar problem and we're just in the process of
tracking it down here on the list. Try downgrading your OSD-binaries to
0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD,
Hey Greg,
Thanks for the tip! I was assuming a clean shutdown of the OSD would
flush the journal for you and have the OSD exit with its
data store in a clean state. Otherwise, I would first have to stop
updates to that particular OSD, then flush the journal, then stop it?
Regards,
Hey Greg,
I didn't know that option, but I'm always careful to downgrade and
upgrade the OSDs one by one and wait for the cluster to report healthy
again before proceeding to the next, so, as you said, chances of losing
data should have been minimal. Will flush the journals too next time.
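(For reference, flushing an OSD's journal after stopping it would look roughly like this; the OSD id is made up:)

sudo service ceph stop osd.3
sudo ceph-osd -i 3 --flush-journal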
On Thu, Aug 22, 2013 at 2:34 PM, Gregory Farnum g...@inktank.com wrote:
You don't appear to have accounted for the 2x replication (where all
writes go to two OSDs) in these calculations. I assume your pools have
Ah. Right. So I should then be looking at:
# OSDs * Throughput per disk / 2 /
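Just to sanity-check with made-up numbers: 24 OSDs * 100 MB/s per disk / 2 for replication = 1200 MB/s aggregate, before whatever the remaining divisor above (journal writes on the same disks, etc.) takes away.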
Hi,
It was mentioned on the devel mailing list that, with a two-network setup, if the
cluster network fails, the cluster behaves pretty badly. Ref:
http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12285/match=cluster+network+fail
May I know if this problem still exists in cuttlefish or