On 09/12/2012 06:24 PM, Joseph Glanville wrote:
On 13 September 2012 08:25, Mark Nelson wrote:
On 09/12/2012 03:08 PM, Dieter Kasper wrote:
On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these results are an investigation into potential bottlenecks in RADOS.
On 13 September 2012 08:25, Mark Nelson wrote:
> On 09/12/2012 03:08 PM, Dieter Kasper wrote:
>>
>> On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
>>>
>>> On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these results are an investigation into potential
bottlenecks in RADOS.
On 09/12/2012 03:08 PM, Dieter Kasper wrote:
On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these results are an investigation into potential
bottlenecks in RADOS.
I appreciate this investigation very much!
The test setup
On Thu, Sep 13, 2012 at 1:09 AM, Tommi Virtanen wrote:
> On Wed, Sep 12, 2012 at 10:33 AM, Andrey Korolyov wrote:
>> Hi,
>> This is completely off-list, but I'm asking because only Ceph triggers
>> such a bug :).
>>
>> With 0.51, the following happens: if I kill an OSD, one or more neighbor
>> nodes
Applied. Nice catch!
Thanks-
sage
On Fri, 7 Sep 2012, Yan, Zheng wrote:
> From: "Yan, Zheng"
>
> The check 'p->second.last_tx > cutoff' should always be false
> since last_tx is periodically updated by OSD::heartbeat()
>
> Signed-off-by: Yan, Zheng
> ---
> src/osd/OSD.cc | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
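To illustrate the bug class the patch describes, a minimal C++ sketch with invented names (not the actual OSD.cc code): a peer's liveness has to be judged by the last heartbeat *received*, because the last-sent timestamp is refreshed locally by our own heartbeat loop, so a staleness test against it never behaves as intended.

  #include <chrono>

  using Clock = std::chrono::steady_clock;

  // Hypothetical per-peer heartbeat state (illustrative only).
  struct PeerInfo {
    Clock::time_point last_tx;  // when we last pinged this peer (set by us)
    Clock::time_point last_rx;  // when this peer last replied
  };

  // cutoff = now - grace. A peer is stale if it has not *replied* within
  // the grace window. Testing last_tx instead can never detect a dead
  // peer, since our own ping loop keeps refreshing it.
  bool peer_is_stale(const PeerInfo& p, Clock::time_point cutoff) {
    return p.last_rx < cutoff;
  }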
Hi,
On Tue, 11 Sep 2012, Yan, Zheng wrote:
> From: "Yan, Zheng"
>
> We need to set truncate_seq when redirecting the newop to CEPH_OSD_OP_WRITE,
> otherwise the code that handles CEPH_OSD_OP_WRITE may quietly drop the data.
>
> Signed-off-by: Yan, Zheng
Applying.
This is correct, but I don't think ther
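The failure mode is worth spelling out: when one op type is rewritten into another, every field the target handler consults must be carried over. A hypothetical C++ sketch (invented types, not the Ceph code):

  #include <cstdint>
  #include <vector>

  enum class OpCode { WRITEFULL, WRITE };

  // Invented op struct for illustration.
  struct Op {
    OpCode code;
    uint64_t offset = 0;
    uint64_t truncate_seq = 0;   // the field the patch is about
    uint64_t truncate_size = 0;
    std::vector<uint8_t> data;
  };

  // Redirect another op type to a plain WRITE. Copying the whole struct
  // keeps truncate_seq/truncate_size intact; building a fresh WRITE op
  // and forgetting them is exactly how the payload gets quietly dropped,
  // because the WRITE handler consults truncate_seq before applying data.
  Op redirect_to_write(const Op& in) {
    Op out = in;
    out.code = OpCode::WRITE;
    return out;
  }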
On Wed, Sep 12, 2012 at 10:33 AM, Andrey Korolyov wrote:
> Hi,
> This is completely off-list, but I`m asking because only ceph trigger
> such a bug :) .
>
> With 0.51, following happens: if I kill an osd, one or more neighbor
> nodes may go to hanged state with cpu lockups, not related to
> temper
On Wed, Sep 12, 2012 at 1:27 PM, sheng qiu wrote:
> Actually I have read both of these papers. I understand the theory
> but am just curious about how it is implemented in the code. I
> looked into the Ceph source code, and it seems very complicated to me.
The CRUSH library is in src/crush. It's
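For a feel of what that code computes, here is a toy C++ sketch of weighted hash-based selection in the spirit of CRUSH's straw buckets (the exact formula below is the later "straw2" variant; names invented, nothing taken from src/crush):

  #include <cmath>
  #include <cstddef>
  #include <functional>
  #include <string>
  #include <vector>

  // Toy device descriptor; weight is proportional to capacity.
  struct Device { std::string name; double weight; };

  // Deterministic per-(object, device) hash mapped into (0, 1].
  // A real system would use a stable hash (CRUSH uses rjenkins).
  double unit_hash(const std::string& obj, const std::string& dev) {
    std::size_t h = std::hash<std::string>{}(obj + "/" + dev);
    return (double(h % 1000000) + 1.0) / 1000000.0;
  }

  // Every device draws a weighted pseudo-random "straw"; the longest
  // straw wins. Heavier devices win proportionally more objects, and
  // adding or removing a device only remaps the objects it wins or
  // loses. devs must be non-empty.
  const Device& pick_device(const std::string& obj,
                            const std::vector<Device>& devs) {
    const Device* best = &devs.front();
    double best_draw = -1e300;
    for (const auto& d : devs) {
      double draw = std::log(unit_hash(obj, d.name)) / d.weight;
      if (draw > best_draw) { best_draw = draw; best = &d; }
    }
    return *best;
  }

CRUSH proper applies choices like this recursively down a hierarchy (root, rack, host, device) according to placement rules, which is why clients need no central lookup table.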
-- Forwarded message --
From: sheng qiu
Date: Wed, Sep 12, 2012 at 3:26 PM
Subject: Re: does ceph consider the device performance for distributing data?
To: Tommi Virtanen
Actually I have read both of these papers. I understand the theory
but am just curious about how it is implemented in the code.
On Wed, Sep 12, 2012 at 1:15 PM, sheng qiu wrote:
> May I ask which part of the code within Ceph deals with the distribution
> of workloads among the OSDs? I am interested in Ceph's source code
> and want to understand that part of the code.
You want to read the academic papers on CRUSH (or just the wh
Hi,
May I ask which part of the code within Ceph deals with the distribution
of workloads among the OSDs? I am interested in Ceph's source code
and want to understand that part of the code.
Thanks,
Sheng
On Tue, Sep 11, 2012 at 11:51 AM, Tommi Virtanen wrote:
> On Mon, Sep 10, 2012 at 8:54 PM, sheng
On Mon, Sep 10, 2012 at 10:39:58PM +0200, Mark Nelson wrote:
> On 09/10/2012 03:15 PM, Mike Ryan wrote:
> > *Disclaimer*: these results are an investigation into potential
> > bottlenecks in RADOS.
I appreciate this investigation very much!
> > The test setup is wholly unrealistic, and these
> >
Hi,
This is completely off-list, but I'm asking because only Ceph triggers
such a bug :).
With 0.51, the following happens: if I kill an OSD, one or more neighbor
nodes may go into a hung state with CPU lockups, not related to
temperature or overall interrupt count or la, and it happens randomly
over 16
On Wed, 12 Sep 2012, Jan Engelhardt wrote:
> The library is used by ceph (main repo), and code sharing is a good
> thing, is it not?
I'm not sure making this a .so has much value in this case. We can link
against an installed libleveldb (and the upstream debian package does
that). It's present h
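As a sketch of what linking against the system copy looks like (a hypothetical configure.ac fragment, not the actual ceph build glue):

  # Prefer an installed libleveldb over building a bundled copy.
  # leveldb_open comes from leveldb's C binding (leveldb/c.h).
  AC_CHECK_HEADER([leveldb/c.h], [],
      [AC_MSG_ERROR([leveldb headers not found])])
  AC_CHECK_LIB([leveldb], [leveldb_open],
      [LIBS="-lleveldb $LIBS"],
      [AC_MSG_ERROR([libleveldb not found])],
      [-lsnappy -lpthread])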
On Tue, Sep 11, 2012 at 7:22 PM, Jan Engelhardt wrote:
> per our short exchange in private, I am setting out to spruce up the
> ceph build system definitions. First comes leveldb, more shall
> follow soonish.
Thanks!
> Patches #1–2 are mandatory for continued successful operation,
> #3–5 cosmetic,
On Tue, Sep 11, 2012 at 7:22 PM, Jan Engelhardt wrote:
> The library is used by ceph (main repo), and code sharing is a good
> thing, is it not?
Sorry, what does this change do? That commit message isn't helpful.
On Wed, 12 Sep 2012, Jimmy Tang wrote:
> On Tue, Jul 31, 2012 at 06:09:59PM -0700, Yehuda Sadeh wrote:
> > On Tue, Jul 31, 2012 at 3:17 PM, Joe Landman
> > wrote:
> > > Hi folks
> > >
> > > I was struggling and failing to get Ceph properly built/installed for
> > > CentOS 6 (and 5) last week. I
On Tue, Jul 31, 2012 at 06:09:59PM -0700, Yehuda Sadeh wrote:
> On Tue, Jul 31, 2012 at 3:17 PM, Joe Landman
> wrote:
> > Hi folks
> >
> > I was struggling and failing to get Ceph properly built/installed for
> > CentOS 6 (and 5) last week. Is this simply not a recommended platform?
> > Please
On 09/12/2012 02:25 AM, Sage Weil wrote:
The next stable release will have cephx authentication enabled by default.
We will probably do it in the next development release (v0.53) to work out
any upgrade kinks well before that. The process for setting up the
authentication keys on an existing clu
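For anyone preparing ahead of time, the knob as spelled in the 0.5x-era docs is a one-liner in ceph.conf (keyring path illustrative, adjust to your cluster):

  [global]
          ; require cephx authentication between daemons and clients
          auth supported = cephx
          keyring = /etc/ceph/$name.keyring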
On 09/12/2012 05:56 AM, Martin Mailand wrote:
Hi,
whilst testing the new rbd layering feature I found a problem with rbd
map. It seems rbd map doesn't support the new format.
Yeah, format 2 and layering support is in progress for kernel rbd,
but not ready yet. The userspace side is all ready i
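Until then, images in the original format map fine; a minimal example (image name is a placeholder):

  rbd create --size 10 oldimg    # default format; no --new-format flag
  rbd map oldimg                 # kernel rbd handles this format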
Problem solved. It was all because of Nginx, and the request_uri in the
fcgi params passed to radosgw.
Now request_uri is OK, and the problem has disappeared.
Big thanks for the help, Yehuda.
Regards
On 12 Sep 2012, at 01:27, Yehuda Sadeh wrote:
> On Tue, Sep 11, 2012 at 1:41 PM, Sławomir Skowron wrote:
>> And m
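For anyone else tripped up by this, the fix generally amounts to making sure Nginx forwards the real request URI to the FastCGI backend; an illustrative location block (socket path and layout are placeholders, not the poster's actual config):

  location / {
      fastcgi_pass   unix:/var/run/ceph/radosgw.sock;
      fastcgi_param  REQUEST_URI     $request_uri;    # the param at issue
      fastcgi_param  QUERY_STRING    $query_string;
      fastcgi_param  REQUEST_METHOD  $request_method;
      fastcgi_param  CONTENT_TYPE    $content_type;
      fastcgi_param  CONTENT_LENGTH  $content_length;
  }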
On 09/12/2012 02:12 PM, hemant surale wrote:
> Hi Community,
> I have started Ceph v0.51 on a single node as per the instructions
> given at ceph.com/docs. "ceph -w" works fine, but I am observing a
> problem with gceph; it is unable to show live MDSs and OSDs.
> "Error: hunting for new mon . "
Hi Community,
I have started Ceph v0.51 on a single node as per the instructions
given at ceph.com/docs. "ceph -w" works fine, but I am observing a
problem with gceph; it is unable to show live MDSs and OSDs.
"Error: hunting for new mon . "
Please help me to figure it out.
Thanks & regards,
Hemant
Hi,
whilst testing the new rbd layering feature I found a problem with rbd
map. It seems rbd map doesn't support the new format.
-martin
ceph -v
ceph version 0.51-265-gc7d11cd
(commit:c7d11cd7b813a47167108c160358f70ec1aab7d6)
rbd create --size 10 --new-format new
rbd map new
add fail
Hi Tommi.
On 6 Sep 2012, at 21:31, Tommi Virtanen wrote:
> On Thu, Sep 6, 2012 at 11:51 AM, Jimmy Tang wrote:
>> Also, the "ceph osd setcrushmap..." command doesn't show up when a ceph
>> --help is run in the 0.51 release, however it is documented on the
>> wiki as far as I recall. It'd be real nice
Same file, same S3 credentials on two clusters.
Really strange.
Good:
2012-09-12 09:30:51.935383 7f0939ffb700 5 Searching permissions for
uid=anonymous mask=1
2012-09-12 09:30:51.935385 7f0939ffb700 5 Permissions for user not found
2012-09-12 09:30:51.935387 7f0939ffb700 5 Searching permission