Hi Tommi.
On 6 Sep 2012, at 21:31, Tommi Virtanen wrote:
On Thu, Sep 6, 2012 at 11:51 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Also, the ceph osd setcrushmap... command doesn't show up when ceph
--help is run in the 0.51 release; however, it is documented on the
wiki as far as I recall. It'd be
Hi,
whilst testing the new rbd layering feature I found a problem with rbd
map. It seems rbd map doesn't support the new format.
-martin
ceph -v
ceph version 0.51-265-gc7d11cd
(commit:c7d11cd7b813a47167108c160358f70ec1aab7d6)
rbd create --size 10 --new-format new
rbd map new
add
Problem solved. It was all down to Nginx and the request_uri in the fcgi
params passed to radosgw.
Now request_uri is correct, and the problem has disappeared.
Big thanks for the help, Yehuda.
Regards
On 12 Sep 2012, at 01:27, Yehuda Sadeh yeh...@inktank.com wrote:
On Tue, Sep 11, 2012 at 1:41 PM, Sławomir
On 09/12/2012 05:56 AM, Martin Mailand wrote:
Hi,
whilst testing the new rbd layering feature I found a problem with rbd
map. It seems rbd map doesn't support the new format.
Yeah, format 2 and layering support is in progress for kernel rbd,
but not ready yet. The userspace side is all ready
On 09/12/2012 02:25 AM, Sage Weil wrote:
The next stable release will have cephx authentication enabled by default.
We will probably do it in the next development release (v0.53) to work out
any upgrade kinks well before that. The process for setting up the
authentication keys on an existing
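The setup process presumably runs through ceph.conf; as a hedged sketch of how cephx was commonly enabled on clusters of that vintage (the option names changed across releases, so treat every line here as an assumption to check against the release notes rather than the actual upgrade procedure):

```ini
[global]
        ; pre-0.51-era spelling; later releases split this into
        ; auth cluster required / auth service required /
        ; auth client required = cephx
        auth supported = cephx
        keyring = /etc/ceph/$name.keyring
```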
On Wed, 12 Sep 2012, Jimmy Tang wrote:
On Tue, Jul 31, 2012 at 06:09:59PM -0700, Yehuda Sadeh wrote:
On Tue, Jul 31, 2012 at 3:17 PM, Joe Landman
land...@scalableinformatics.com wrote:
Hi folks
I was struggling and failing to get Ceph properly built/installed for
CentOS 6 (and
On Tue, Sep 11, 2012 at 7:22 PM, Jan Engelhardt jeng...@inai.de wrote:
per our short exchange in private, I have taken it upon myself to spruce
up the ceph build system definitions. First comes leveldb; more shall
follow soonish.
Thanks!
Patches #1–2 are mandatory for continued successful operation,
#3–5
On Wed, 12 Sep 2012, Jan Engelhardt wrote:
The library is used by ceph (main repo), and code sharing is a good
thing, is it not?
I'm not sure making this a .so has much value in this case. We can link
against an installed libleveldb (and the upstream debian package does
that). It's present
Hi,
This is completely off-list, but I'm asking because only ceph triggers
such a bug :).
With 0.51, the following happens: if I kill an osd, one or more neighbor
nodes may go into a hung state with cpu lockups, not related to
temperature or overall interrupt count or load average, and it happens
randomly over
On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these results are an investigation into potential
bottlenecks in RADOS.
I appreciate this investigation very much!
The test setup is wholly unrealistic, and these
numbers
Hi,
May I ask which part of the code within Ceph deals with the distribution
of workloads among the OSDs? I am interested in Ceph's source code
and want to understand that part of it.
Thanks,
Sheng
On Tue, Sep 11, 2012 at 11:51 AM, Tommi Virtanen t...@inktank.com wrote:
On Mon, Sep 10, 2012 at
On Wed, Sep 12, 2012 at 1:15 PM, sheng qiu herbert1984...@gmail.com wrote:
May I ask which part of the code within Ceph deals with the distribution
of workloads among the OSDs? I am interested in Ceph's source code
and want to understand that part of it.
You want to read the academic papers on
-- Forwarded message --
From: sheng qiu herbert1984...@gmail.com
Date: Wed, Sep 12, 2012 at 3:26 PM
Subject: Re: does ceph consider the device performance for distributing data?
To: Tommi Virtanen t...@inktank.com
Actually, I have read both of these papers. I understand the
On Wed, Sep 12, 2012 at 1:27 PM, sheng qiu herbert1984...@gmail.com wrote:
Actually, I have read both of these papers. I understand the theory,
but I am curious about how it is implemented in the code. I looked
into the Ceph source code, and it seems very complicated to me.
The CRUSH library
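Since the thread is about how the placement code accounts for differences between OSDs, a toy sketch of the underlying idea may help before diving into the CRUSH sources. This is Efraimidis-Spirakis-style weighted hashing in the spirit of CRUSH's straw bucket; it is not Ceph's actual implementation, and every name in it is made up for illustration:

```python
import hashlib

def straw_draw(obj, osd, weight):
    """Deterministic pseudorandom 'straw' for (obj, osd), scaled by weight.

    Weighted-sampling trick: u ** (1/weight) with u uniform in (0, 1]
    makes each OSD win with probability proportional to its weight.
    """
    h = int(hashlib.sha256(f"{obj}/{osd}".encode()).hexdigest()[:15], 16)
    u = (h + 1) / float(16 ** 15 + 1)   # uniform in (0, 1]
    return u ** (1.0 / weight)          # heavier OSDs draw longer straws

def choose_osd(obj, osds):
    """Place obj on the OSD with the longest straw.

    osds is a {name: weight} dict; the choice is a pure function of the
    object name and the weight map, so every client computes the same
    placement without consulting a central table.
    """
    return max(osds, key=lambda osd: straw_draw(obj, osd, osds[osd]))
```

With weights like {"osd0": 1.0, "osd1": 1.0, "osd2": 2.0}, osd2 receives roughly half the objects over a large sample, which is exactly how device capacity or performance enters the real placement decision: as a per-device weight in the CRUSH map.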
On Wed, Sep 12, 2012 at 10:33 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi,
This is completely off-list, but I'm asking because only ceph triggers
such a bug :).
With 0.51, the following happens: if I kill an osd, one or more neighbor
nodes may go into a hung state with cpu lockups, not related to
Hi,
On Tue, 11 Sep 2012, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
We need to set truncate_seq when redirecting the newop to CEPH_OSD_OP_WRITE;
otherwise the code that handles CEPH_OSD_OP_WRITE may quietly drop the data.
Signed-off-by: Yan, Zheng zheng.z@intel.com
Applying.
This
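The shape of the bug the patch fixes (a redirect that rewrites the opcode but forgets to carry the truncate metadata along) can be sketched with a heavily simplified model. The real change is in the kernel's C code and looks nothing like this; the class and field names below are hypothetical stand-ins:

```python
from dataclasses import dataclass, replace

OP_WRITEFULL, OP_WRITE = "writefull", "write"

@dataclass(frozen=True)
class OsdOp:
    opcode: str
    truncate_seq: int = 0    # the field the buggy redirect dropped
    truncate_size: int = 0

def redirect_to_write(orig: OsdOp) -> OsdOp:
    """Turn a writefull-style op into a plain WRITE, preserving
    truncate_seq and truncate_size; constructing a fresh op with only
    the opcode set (the bug) would zero them, and a stale truncate_seq
    of 0 lets the receiving OSD quietly discard the written data."""
    return replace(orig, opcode=OP_WRITE)
```

The point of the sketch is only that the redirected op must be derived from the original, not built from scratch, so the truncate fields survive.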
On Thu, Sep 13, 2012 at 1:09 AM, Tommi Virtanen t...@inktank.com wrote:
On Wed, Sep 12, 2012 at 10:33 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi,
This is completely off-list, but I'm asking because only ceph triggers
such a bug :).
With 0.51, the following happens: if I kill an osd, one or
On 13 September 2012 08:25, Mark Nelson mark.nel...@inktank.com wrote:
On 09/12/2012 03:08 PM, Dieter Kasper wrote:
On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these results are an investigation into potential
On 09/12/2012 06:24 PM, Joseph Glanville wrote:
On 13 September 2012 08:25, Mark Nelson mark.nel...@inktank.com wrote:
On 09/12/2012 03:08 PM, Dieter Kasper wrote:
On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these