rbd map fail when the crushmap algorithm changed to tree

2012-07-06 Thread Eric_YH_Chen
Hi all: Here is the original crushmap. I changed the algorithm of the host buckets to tree and set the map back into the ceph cluster. However, when I try to map one image to a rados block device (RBD), it hangs with no response until I press Ctrl-C (rbd map ... then hang). Is there anything wrong in the crushmap?
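For reference, the usual crushmap edit cycle looks like the sketch below; pool and image names are placeholders, since the original post does not show them:

    # Standard crushmap edit cycle (placeholder names throughout)
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: change "alg straw" to "alg tree" in the host buckets
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    rbd map mypool/myimage   # this is the step that hangs for the reporter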

Re: domino-style OSD crash

2012-07-06 Thread Yann Dupont
On 05/07/2012 23:32, Gregory Farnum wrote: [...] OK, so as all nodes were identical, I probably hit a btrfs bug (like an erroneous out-of-space error) at more or less the same time. And when 1 osd was out... Oh, I didn't finish the sentence. When 1 osd was out, missing data was copied on

Re: OSD doesn't start

2012-07-06 Thread Székelyi Szabolcs
On 2012. July 5. 16:12:42 Székelyi Szabolcs wrote: On 2012. July 4. 09:34:04 Gregory Farnum wrote: Hrm, it looks like the OSD data directory got a little busted somehow. How did you perform your upgrade? (That is, how did you kill your daemons, in what order, and when did you bring them

Re: speedup ceph / scaling / find the bottleneck

2012-07-06 Thread Stefan Priebe
On 06.07.2012 at 05:50, Alexandre DERUMIER aderum...@odiso.com wrote: Hi, Stefan is on vacation for the moment, I don't know if he can reply to you. Thanks! But I can reply for him for the kvm part (as we do the same tests together in parallel). - kvm is 1.1 - rbd 0.48 - drive option
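The drive option is cut off in the snippet; one plausible form of an rbd-backed drive line for kvm 1.1 (image name and cache mode are illustrative, not from the thread) is:

    # Illustrative only: an rbd-backed virtio disk on kvm 1.1
    # (pool/image names and cache mode are not given in the thread)
    qemu-system-x86_64 -m 2048 \
        -drive file=rbd:rbd/vm-disk,if=virtio,cache=writeback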

Re: [PATCH] Generate URL-safe base64 strings for keys.

2012-07-06 Thread Wido den Hollander
On 07/05/2012 04:31 PM, Sage Weil wrote: On Thu, 5 Jul 2012, Wido den Hollander wrote: On 04-07-12 18:18, Sage Weil wrote: On Wed, 4 Jul 2012, Wido den Hollander wrote: On Wed, 4 Jul 2012, Wido den Hollander wrote: By using this we prevent scenarios where cephx keys are not accepted in
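For illustration, URL-safe base64 simply replaces the '+' and '/' characters of the standard base64 alphabet with '-' and '_'; a quick shell demonstration of the idea (not the patch itself):

    # Standard base64 output can contain '+' and '/'
    key=$(head -c 16 /dev/urandom | base64)
    echo "$key"
    # The URL-safe alphabet swaps those for '-' and '_'
    echo "$key" | tr '+/' '-_'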

Re: [PATCH] librados: Bump the version to 0.48

2012-07-06 Thread Wido den Hollander
On 07/06/2012 12:33 AM, Gregory Farnum wrote: On Wed, Jul 4, 2012 at 9:33 AM, Sage Weil s...@inktank.com wrote: On Wed, 4 Jul 2012, Gregory Farnum wrote: Hmmm, we generally try to modify these versions when the API changes, not on every sprint. It looks to me like Sage added one function

oops in rbd module (con_work in libceph)

2012-07-06 Thread Yann Dupont
Hello. The bug happens in the rbd client, at least in kernel 3.4.4. I have a completely reproducible bug. Here is the oops: Jul 6 10:16:52 label5.u14.univ-nantes.prive kernel: [ 329.456285] EXT4-fs (rbd1): mounted filesystem with ordered data mode. Opts: (null) Jul 6 10:18:38
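The snippet only shows the mount line of the log, so the exact reproduction is a guess; the general shape would be:

    # Guessed reproduction, based on the EXT4-on-rbd1 line in the oops
    rbd map mypool/myimage        # placeholder pool/image names
    mkfs.ext4 /dev/rbd1
    mount /dev/rbd1 /mnt
    # sustained I/O on the mount then triggers the oops in con_work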

unpackaged files in rpmbuild of 0.48

2012-07-06 Thread Jimmy Tang
Hi All, I'm not sure if this is intentional or not, but an rpm build of 0.48 gives the following error: Installed (but unpackaged) file(s) found: /sbin/ceph-disk-activate /sbin/ceph-disk-prepare RPM build errors: Installed (but unpackaged) file(s) found:
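One common workaround while the spec file catches up (a sketch, not the project's official fix) is to tell rpmbuild not to abort on unpackaged files:

    # Workaround sketch: build without failing on unpackaged files
    rpmbuild -ba ceph.spec \
        --define '_unpackaged_files_terminate_build 0'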

Re: oops in rbd module (con_work in libceph)

2012-07-06 Thread Yann Dupont
On 06/07/2012 10:31, Yann Dupont wrote: Hello. The bug happens in the rbd client, at least in kernel 3.4.4. I have a completely reproducible bug. Just a note: 3.2.22 doesn't seem to exhibit the problem. I repeated the process 2 times without problems on this kernel. I'll launch realistic

Re: unpackaged files in rpmbuild of 0.48

2012-07-06 Thread Sage Weil
On Fri, 6 Jul 2012, Jimmy Tang wrote: Hi All, I'm not sure if this is intentional or not, but an rpm build of 0.48 gives the following error: Installed (but unpackaged) file(s) found: /sbin/ceph-disk-activate /sbin/ceph-disk-prepare RPM build errors: Installed (but

mds fails to start on SL6

2012-07-06 Thread Jimmy Tang
Hi All, I was giving ceph 0.48 a try on SL6x; the OSDs start up okay, but the mds fails to start. Below is a snippet of the error: 2012-07-06 16:38:17.838055 7f2d6828d700 -1 mds.-1.0 *** got signal Terminated *** 2012-07-06 16:38:17.838139 7f2d6828d700 1 mds.-1.0 suicide. wanted down:dne,

Can I try CEPH on top of the openstack Essex ?

2012-07-06 Thread Chen, Hb
Hi, Can I try CEPH on top of the openstack/Swift Object store (Essex release)? HB

Re: Can I try CEPH on top of the openstack Essex ?

2012-07-06 Thread Gregory Farnum
On Fri, Jul 6, 2012 at 8:29 AM, Chen, Hb hbc...@lanl.gov wrote: Hi, Can I try CEPH on top of the openstack/Swift Object store (Essex release) ? Nope! CephFS is heavily dependent on the features provided by the RADOS object store (and so is RBD, if that's what you're interested in). If you

Re: mds fails to start on SL6

2012-07-06 Thread Gregory Farnum
Do you have more in the log? It looks like it's being instructed to shut down before it's fully come up (thus the error in the Objecter, http://tracker.newdream.net/issues/2740, but that is not the root cause), but I can't see why. -Greg On Fri, Jul 6, 2012 at 8:42 AM, Jimmy Tang jt...@tchpc.tcd.ie
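To produce the fuller log Greg is asking for, one would typically raise the mds debug levels; a sketch using the standard debug options (the thread itself does not specify these knobs, and the restart command varies by distro):

    # Append higher mds debug levels to ceph.conf (standard options;
    # the exact service restart invocation varies by distro)
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [mds]
        debug mds = 20
        debug ms = 1
    EOF
    /etc/init.d/ceph restart mds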

Re: domino-style OSD crash

2012-07-06 Thread Gregory Farnum
On Fri, Jul 6, 2012 at 12:19 AM, Yann Dupont yann.dup...@univ-nantes.fr wrote: On 05/07/2012 23:32, Gregory Farnum wrote: [...] OK, so as all nodes were identical, I probably hit a btrfs bug (like an erroneous out-of-space error) at more or less the same time. And when 1 osd was out,

Re: speedup ceph / scaling / find the bottleneck

2012-07-06 Thread Stefan Priebe - Profihost AG
On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote: On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com wrote: Hi, Stefan is on vacation for the moment, I don't know if he can reply to you. But I can reply for him for the kvm part (as we do the same tests together

Re: speedup ceph / scaling / find the bottleneck

2012-07-06 Thread Gregory Farnum
On Fri, Jul 6, 2012 at 11:09 AM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote: On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com wrote: Hi, Stefan is on vacation for the moment, I don't know if

RE: mds fails to start on SL6

2012-07-06 Thread Tim Bell
Does SL6 have the kernel level required? Tim -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- ow...@vger.kernel.org] On Behalf Of Jimmy Tang Sent: 06 July 2012 17:43 To: ceph-devel@vger.kernel.org Subject: mds fails to start on SL6 Hi All, I was

Re: mds fails to start on SL6

2012-07-06 Thread Gregory Farnum
On Fri, Jul 6, 2012 at 12:07 PM, Tim Bell tim.b...@cern.ch wrote: Does SL6 have the kernel level required ? The MDS is a userspace daemon that demands absolutely nothing unusual from the kernel. :) -Greg Tim -Original Message- From: ceph-devel-ow...@vger.kernel.org

[ANNOUNCE] Linux 3.4 Ceph stable release branch

2012-07-06 Thread Alex Elder
There is a new branch available at the Ceph client git repository, which is located here: http://github.com/ceph/ceph-client.git The branch is named linux-3.4.4-ceph, and it is based on the latest Linux 3.4.y stable release. This Ceph stable branch contains ported bug fixes that have been
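To follow along, the branch can be fetched directly from the repository named in the announcement:

    git clone http://github.com/ceph/ceph-client.git
    cd ceph-client
    git checkout linux-3.4.4-ceph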

Re: oops in rbd module (con_work in libceph)

2012-07-06 Thread Alex Elder
On 07/06/2012 10:35 AM, Yann Dupont wrote: On 06/07/2012 10:31, Yann Dupont wrote: Hello. The bug happens in the rbd client, at least in kernel 3.4.4. I have a completely reproducible bug. Just a note: 3.2.22 doesn't seem to exhibit the problem. I repeated the process 2 times without

RE: mkcephfs failing on v0.48 argonaut

2012-07-06 Thread Paul Pettigrew
Hi again Sage This is very perplexing. Confirming this system is a stock Ubuntu 12.04 x64 with no custom kernel or anything else, fully up to date via apt-get dist-upgrade. root@dsanb1-coy:~# uname -r 3.2.0-26-generic I have added the suggestions you made to the script; we now have:
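One generic way to see where a shell script like mkcephfs stops (the exact invocation used in this thread is not shown, and the conf/keyring paths are assumptions) is to trace it:

    # Trace the script to find the failing command (paths assumed;
    # adjust the conf/keyring locations to match the setup)
    sh -x "$(which mkcephfs)" -a -c /etc/ceph/ceph.conf \
        -k /etc/ceph/keyring 2>&1 | tee mkcephfs.trace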

RE: mkcephfs failing on v0.48 argonaut

2012-07-06 Thread Paul Pettigrew
UPDATED code is now within the below (paste snafu, sorry; ignore the most recent post); my comments/findings are the same, however... Paul -Original Message- Hi again Sage This is very perplexing. Confirming this system is a stock Ubuntu 12.04 x64 with no custom kernel or anything else,

RE: mkcephfs failing on v0.48 argonaut

2012-07-06 Thread Sage Weil
On Sat, 7 Jul 2012, Paul Pettigrew wrote: Hi again Sage This is very perplexing. Confirming this system is a stock Ubuntu 12.04 x64, with no custom kernel or anything else, fully apt-get dist-upgrade'd up to date. root@dsanb1-coy:~# uname -r 3.2.0-26-generic I have added in the