rbd command to display free space in a cluster ?

2012-10-15 Thread Alexandre DERUMIER
Hi, I'm looking for a way to retrieve the free space from an rbd cluster with the rbd command. Any hint? (something like ceph -w status, but without needing to parse the result) Regards, Alexandre

Re: OSD::mkfs: couldn't mount FileStore: error -22

2012-10-15 Thread Adam Nielsen
current/ is a btrfs subvolume.. 'btrfs sub delete current' will remove it. Ah, that worked, thanks. Unfortunately mkcephfs still fails with the same error. The warning in the previous email suggests you're running a fairly old kernel.. there is probably something handled incorrectly during
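
A minimal sketch of the cleanup being discussed, assuming the OSD data directory is /srv/osd0 (path hypothetical) and a reasonably current btrfs-progs:

    # list subvolumes under the OSD data directory (path is an assumption)
    btrfs subvolume list /srv/osd0
    # delete the leftover 'current' subvolume that blocks re-creation
    btrfs subvolume delete /srv/osd0/current
    # then re-run mkcephfs against the cluster configuration
    mkcephfs -a -c /etc/ceph/ceph.conf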

[PATCH] rbd: zero return code in rbd_dev_image_id()

2012-10-15 Thread Alex Elder
There is a call in rbd_dev_image_id() to rbd_req_sync_exec() to get the image id for an image. Despite the get_id class method only returning 0 on success, I am getting back a positive value (I think the number of bytes returned with the call). That may or may not be how rbd_req_sync_exec() is

[PATCH] rbd: kill rbd_device-rbd_opts

2012-10-15 Thread Alex Elder
The rbd_device structure has an embedded rbd_options structure. Such a structure is needed to work with the generic ceph argument parsing code, but there's no need to keep it around once argument parsing is done. Use a local variable to hold the rbd options used in parsing in rbd_get_client(),

Re: rbd command to display free space in a cluster ?

2012-10-15 Thread Sage Weil
On Mon, 15 Oct 2012, Alexandre DERUMIER wrote: Hi, I'm looking for a way to retrieve the free space from an rbd cluster with the rbd command. Any hint? (something like ceph -w status, but without needing to parse the result) rados df is the closest. sage
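
For reference, a rough sketch of the two commands that come closest (no example output shown, since the numbers vary per cluster):

    # per-pool object and space usage plus a cluster-wide total, in KB
    rados df
    # the monitor status summary also reports used / avail / total space
    ceph -s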

Ceph benchmark high wait on journal device

2012-10-15 Thread Martin Mailand
Hi, inspired by the performance tests Mark did, I tried to put together my own. I have four OSD processes on one node; each process has an Intel 710 SSD for its journal and 4 SAS disks via an LSI 9266-8i in RAID 0. If I test the SSDs with fio they are quite fast and the w_wait time is quite low.
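
A sketch of the kind of fio run and latency check being described; the device name, runtime and job parameters are assumptions rather than Martin's exact setup, and the run is destructive to whatever is on the device:

    # small synchronous sequential writes, roughly what a ceph journal issues
    fio --name=journal-test --filename=/dev/sdb --rw=write --bs=4k \
        --ioengine=libaio --iodepth=1 --direct=1 --sync=1 \
        --runtime=60 --time_based
    # watch per-device write latency (the await/w_await columns) while it runs
    iostat -x 1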

Help...MDS Continuously Segfaulting

2012-10-15 Thread Nick Couchman
Well, both of my MDSs seem to be down right now, and they continually segfault (every time I try to start them) with the following: ceph-mdsmon-a:~ # ceph-mds -n mds.b -c /etc/ceph/ceph.conf -f starting mds.b at :/0 *** Caught signal (Segmentation fault) ** in thread 7fbe0d61d700 ceph version

Re: Help...MDS Continuously Segfaulting

2012-10-15 Thread Gregory Farnum
Something in the MDS log is bad or is poking at a bug in the code. Can you turn on MDS debugging and restart a daemon and put that log somewhere accessible? debug mds = 20 debug journaler = 20 debug ms = 1 -Greg On Mon, Oct 15, 2012 at 10:02 AM, Nick Couchman nick.couch...@seakr.com wrote: Well,
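
In ceph.conf terms that would look roughly like the following (section placement and daemon name are assumptions); restart the MDS afterwards so the new log levels take effect:

    # add under the [mds] (or [mds.b]) section of /etc/ceph/ceph.conf:
    #     debug mds = 20
    #     debug journaler = 20
    #     debug ms = 1
    # then restart the daemon, e.g. with the sysvinit script:
    /etc/init.d/ceph restart mds.b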

Re: Help...MDS Continuously Segfaulting

2012-10-15 Thread Nick Couchman
Anywhere in particular I should make it available? It's a little over a million lines of debug in the file - I can put it on a pastebin, if that works, or perhaps zip it up and throw it somewhere? -Nick On 2012/10/15 at 11:26, Gregory Farnum g...@inktank.com wrote: Something in the MDS log

Re: Help...MDS Continuously Segfaulting

2012-10-15 Thread Gregory Farnum
Yeah, zip it and post — somebody's going to have to download it and do fun things. :) -Greg On Mon, Oct 15, 2012 at 10:43 AM, Nick Couchman nick.couch...@seakr.com wrote: Anywhere in particular I should make it available? It's a little over a million lines of debug in the file - I can put it
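
For what it's worth, a one-liner along those lines (the log path is an assumption):

    # compress a copy of the MDS debug log without touching the original
    gzip -9c /var/log/ceph/ceph-mds.b.log > mds.b.log.gz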

New branch: Python packaging integrated into automake

2012-10-15 Thread Tommi Virtanen
Hi. While working on the external journal stuff, for a while I thought I needed more python code than I ended up needing. To support that code, I put in the skeleton of import ceph.foo support. While I ultimately didn't need it, I didn't want to throw away the results. If you later need to have

Re: Two questions about client writes update to Ceph

2012-10-15 Thread Samuel Just
Hi Alex, 1) When a replica goes down, the write won't complete until the replica is detected as down. At that point, the write can complete without the down replica. Shortly thereafter, if the down replica does not come back, a new replica will replace it bringing the replication count to what
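
As an aside, the replication count Sam mentions is a per-pool setting; a small sketch of checking and raising it, using 'rbd' purely as an example pool name:

    # show the current replica count for the pool
    ceph osd pool get rbd size
    # raise it to three replicas
    ceph osd pool set rbd size 3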

Re: Ceph benchmark high wait on journal device

2012-10-15 Thread Mark Nelson
Hi Martin, I haven't tested the 9266-8i specifically, but it may behave similarly to the 9265-8i. This is just a theory, but I get the impression that the controller itself introduces some latency getting data to disk, and that it may get worse as more data is pushed across the

Re: osd crash in ReplicatedPG::add_object_context_to_pg_stat(ReplicatedPG::ObjectContext*, pg_stat_t*)

2012-10-15 Thread Samuel Just
Do you have a coredump for the crash? Can you reproduce the crash with: debug filestore = 20 debug osd = 20 and post the logs? As far as the incomplete pg goes, can you post the output of ceph pg <pgid> query, where <pgid> is the pgid of the incomplete pg (e.g. 1.34)? Thanks -Sam On Thu, Oct
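
A sketch of gathering what Sam asks for; the OSD id is a placeholder, and the same debug values can instead go into ceph.conf followed by a daemon restart:

    # bump logging on the affected OSD at runtime
    ceph tell osd.0 injectargs '--debug-osd 20 --debug-filestore 20'
    # dump the state of the incomplete placement group (pgid from the thread)
    ceph pg 1.34 query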

Re: rbd command to display free space in a cluster ?

2012-10-15 Thread Dan Mick
Nothing like that exists at the moment; see http://tracker.newdream.net/issues/3283 for the other side of it. On 10/15/2012 12:52 AM, Alexandre DERUMIER wrote: Hi, I'm looking for a way to retrieve the free space from a rbd cluster with rbd command. Any hint ? (something like ceph -w

Re: Ceph benchmark high wait on journal device

2012-10-15 Thread Martin Mailand
Hi Mark, I think there is no difference between the 9266-8i and the 9265-8i, except for the cache vault and the angle of the SAS connectors. In the last test, which I posted, the SSDs were connected to the onboard SATA ports. Further tests showed that if I reduce the object size (the -b
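
The -b here is presumably the rados bench write-size flag; a sketch with an illustrative pool name and object size:

    # 60-second write benchmark with 4 KB objects and 16 concurrent ops
    rados -p rbd bench 60 write -b 4096 -t 16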

Re: Ceph benchmark high wait on journal device

2012-10-15 Thread Sage Weil
On Mon, 15 Oct 2012, Travis Rhoden wrote: Martin, btw. Is there a nice way to format the output of ceph --admin-daemon ceph-osd.0.asok perf_dump? I use: ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump | python -mjson.tool There is also
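
Building on that, two related admin-socket calls; the socket path and the counter name being grepped for are assumptions:

    # list everything this daemon's admin socket understands
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok help
    # pull just the filestore journal latency counters out of the perf dump
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump | \
        python -mjson.tool | grep -A 3 journal_latency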