Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Stefan Priebe
Ok, thanks, but wouldn't it make sense to set the default to the same value as rbd has? How is the value for rbd calculated? I've also seen that rbd has a different crushmap. What's the difference between crushmap 0 and 2? Stefan On 19.06.2012 at 00:41, Dan Mick dan.m...@inktank.com wrote: Yes,

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Stefan Priebe - Profihost AG
Sorry, I meant crush_ruleset. On 19.06.2012 08:06, Stefan Priebe wrote: Ok, thanks, but wouldn't it make sense to set the default to the same value as rbd has? How is the value for rbd calculated? I've also seen that rbd has a different crushmap. What's the difference between crushmap 0 and 2?

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Stefan Priebe - Profihost AG
On 19.06.2012 06:41, Alexandre DERUMIER wrote: Hi Stefan, the recommendation is 30-50 PGs per OSD, if I remember correctly. rbd, data and metadata have 2176 PGs with 12 OSDs. That is 181.3 per OSD?! Stefan

difference between qemu cache settings and ceph.conf rbd cache settings?

2012-06-19 Thread Stefan Priebe - Profihost AG
Hello list, I've now patched my qemu to include the qemu cache settings for rbd, so I can now change my drive caching mode. But there are also these settings for the global section in ceph.conf: rbd_cache = true, rbd_cache_size = 33554432, rbd_cache_max_age = 2.0. How do they

Re: difference between qemu cache settings and ceph.conf rbd cache settings?

2012-06-19 Thread Josh Durgin
On 06/18/2012 11:37 PM, Stefan Priebe - Profihost AG wrote: Hello list, I've now patched my qemu to include the qemu cache settings for rbd, so I can now change my drive caching mode. But there are also these settings for the global section in ceph.conf: rbd_cache = true, rbd_cache_size = 33554432

Re: difference between qemu cache settings and ceph.conf rbd cache settings?

2012-06-19 Thread Stefan Priebe - Profihost AG
On 19.06.2012 08:42, Josh Durgin wrote: In qemu, the cache settings map to: writeback: rbd_cache = true; writethrough: rbd_cache = true, rbd_cache_max_dirty = 0; none: rbd_cache = false. qemu's settings are overridden by any custom settings you have in a config file or on the qemu command
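As an illustration of the mapping quoted above, a minimal sketch of the corresponding qemu drive options; the pool/image name rbd/vm-disk and the exact -drive syntax are assumptions for illustration, not taken from the thread:

  -drive file=rbd:rbd/vm-disk,cache=writeback      # rbd_cache = true
  -drive file=rbd:rbd/vm-disk,cache=writethrough   # rbd_cache = true, rbd_cache_max_dirty = 0
  -drive file=rbd:rbd/vm-disk,cache=none           # rbd_cache = false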

Re: [PATCH] net/ceph/osd_client.c add sem to osdmap destroy

2012-06-19 Thread Guan Jun He
Hi, do you think this is needed? The osdmap update needs to hold this sem, as in the function void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg) and the function static int __map_request(), etc. Thanks a lot for your reply! Thanks, Guanjun On 6/15/2012 at 04:32 PM, in

[patch -next] libceph: fix NULL dereference in reset_connection()

2012-06-19 Thread Dan Carpenter
We dereference con->in_msg on the line after it was set to NULL. Signed-off-by: Dan Carpenter dan.carpen...@oracle.com diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c index 5e9f61d..6aa671c 100644 --- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -437,10 +437,10 @@ static void

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Mark Nelson
On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote: On 19.06.2012 06:41, Alexandre DERUMIER wrote: Hi Stefan, the recommendation is 30-50 PGs per OSD, if I remember correctly. rbd, data and metadata have 2176 PGs with 12 OSDs. That is 181.3 per OSD?! Stefan That's probably fine, it

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Stefan Priebe - Profihost AG
On 19.06.2012 15:01, Mark Nelson wrote: On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote: On 19.06.2012 06:41, Alexandre DERUMIER wrote: Hi Stefan, the recommendation is 30-50 PGs per OSD, if I remember correctly. rbd, data and metadata have 2176 PGs with 12 OSDs. That is 181.3 per

Re: [patch -next] libceph: fix NULL dereference in reset_connection()

2012-06-19 Thread Dan Carpenter
On Tue, Jun 19, 2012 at 08:27:19AM -0500, Alex Elder wrote: On 06/19/2012 05:33 AM, Dan Carpenter wrote: We dereference con->in_msg on the line after it was set to NULL. Signed-off-by: Dan Carpenter dan.carpen...@oracle.com Yikes. Actually I think I prefer a different fix, which is

[PATCH] libceph: fix NULL dereference in reset_connection()

2012-06-19 Thread Alex Elder
I have already incorporated the following in the Ceph master branch (which is used for the -next build). We will also send this to Linus soon. -Alex = We dereference con->in_msg on the line after it was set to NULL. Signed-off-by: Dan Carpenter

bad performance fio random write - rados bench random write to compare?

2012-06-19 Thread Alexandre DERUMIER
Hi, is it possible to do a random write benchmark with the rados bench command? I get very bad random write performance with a 4K block size inside qemu-kvm, 1000 IOPS max, on 3 nodes with 5 x 15k disks each. (Maybe it's related to my constant disk writes, as if data is not flushed sequentially to disk.)
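For reference, a hedged sketch of the two kinds of runs being compared here; the pool name, test file path, and exact option availability on this ceph/fio release are assumptions, not taken from the thread:

  rados -p rbd bench 60 write -b 4096 -t 16   # small (4 KB) object writes straight to RADOS, 16 concurrent ops
  fio --name=randwrite --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1 --size=1G --filename=/mnt/test/fio.dat   # 4K random writes inside the guest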

Re: [PATCH] net/ceph/osd_client.c add sem to osdmap destroy

2012-06-19 Thread Sage Weil
[Sorry for not responding earlier!] On Tue, 19 Jun 2012, Guan Jun He wrote: Hi, do you think this is needed? The osdmap update needs to hold this sem, as in the function void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg) and the function static int __map_request(),

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Sage Weil
On Tue, 19 Jun 2012, Stefan Priebe - Profihost AG wrote: On 19.06.2012 15:01, Mark Nelson wrote: On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote: On 19.06.2012 06:41, Alexandre DERUMIER wrote: Hi Stefan, the recommendation is 30-50 PGs per OSD, if I remember correctly. rbd,

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Stefan Priebe
On 19.06.2012 at 17:42, Sage Weil s...@inktank.com wrote: But this number of 2176 PGs was set while doing mkcephfs - how is it calculated? num_pgs = num_osds << osd_pg_bits, which is configurable via --osd-pg-bits N or ceph.conf (at mkcephfs time). The default is 6. What happens if
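Working the formula above through with the numbers from this thread (restoring the shift operator, which the archive appears to have stripped from Sage's reply):

  num_pgs = num_osds << osd_pg_bits
  12 OSDs << 6 bits  =  12 * 64  =  768 PGs per pool

This does not match the 2176 reported earlier, so those pools were presumably created with a larger OSD count or a higher pg-bits value at mkcephfs time.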

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Sage Weil
On Tue, 19 Jun 2012, Stefan Priebe wrote: On 19.06.2012 at 17:42, Sage Weil s...@inktank.com wrote: But this number of 2176 PGs was set while doing mkcephfs - how is it calculated? num_pgs = num_osds << osd_pg_bits, which is configurable via --osd-pg-bits N or ceph.conf (at

Re: Heavy speed difference between rbd and custom pool

2012-06-19 Thread Dan Mick
The number doesn't change currently (and can't currently be set manually). On Jun 19, 2012, at 9:24 AM, Stefan Priebe s.pri...@profihost.ag wrote: On 19.06.2012 at 17:42, Sage Weil s...@inktank.com wrote: But this number of 2176 PGs was set while doing mkcephfs - how is it calculated?

Error creating journal during mkcephfs

2012-06-19 Thread Travis Rhoden
I almost posted this to http://tracker.newdream.net/issues/2595, but didn't want to piggy-back on an issue marked resolved. When I run mkcephfs, I get: 2012-06-19 09:36:29.211737 7fc7021d7780 -1 journal FileJournal::_open: unable to open journal: open() failed: (22) Invalid argument 2012-06-19

Re: Error creating journal during mkcephfs

2012-06-19 Thread Sage Weil
On Tue, 19 Jun 2012, Travis Rhoden wrote: I almost posted this to http://tracker.newdream.net/issues/2595, but didn't want to piggy-back on an issue marked resolved. When I run mkcephfs, I get: 2012-06-19 09:36:29.211737 7fc7021d7780 -1 journal FileJournal::_open: unable to open journal:

Re: Error creating journal during mkcephfs

2012-06-19 Thread Travis Rhoden
Great! Thanks, that was it. I did see mention of that param in the mailing list, and thought that might be it. But I failed to find that option in the docs here: http://ceph.com/docs/master/config-ref/osd-config/ So I wasn't sure where to put it. =) Thanks again. On Tue, Jun 19, 2012 at
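The specific option Sage pointed to is cut off in the excerpts above; purely as an illustration of where OSD journal settings live, a file-backed journal stanza in ceph.conf might look like this (paths and values are hypothetical):

  [osd]
      osd journal = /data/$name/journal
      osd journal size = 1000        ; journal size in MB
      ; journal dio = false          ; disabling direct I/O can help on filesystems without O_DIRECT support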

Building a small Ceph development environment

2012-06-19 Thread Terrance Hutchinson
Hi all, I have an HP ProLiant ML350 G5 server that is currently sitting idle. Would it be possible to virtualize a Ceph cluster so I can mess around and begin contributing back to the community? Specs: ML350 G5, 2 x quad-core Xeon E5430, 16 GB RAM, 7 x 146 GB SAS disks, 1 x 120 GB Intel

Re: Building a small Ceph development environment

2012-06-19 Thread Gregory Farnum
You don't need to virtualize anything — I'd recommend running Ubuntu 12.04 on it (you don't need to, but a lot of things will be more performant), building from source, and then setting up the daemons so everybody gets a separate disk. Check out http://ceph.com/docs/master/source/ and the other
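A minimal sketch of what building from source and bringing up a throwaway development cluster can look like, assuming the autotools build and the vstart.sh helper in the source tree of this era (flag meanings are from memory; check vstart.sh -h):

  ./autogen.sh && ./configure && make
  cd src
  ./vstart.sh -n -d        # -n: create a new cluster, -d: enable debug logging
  ./ceph -c ceph.conf health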

kernel crash from RBD in Ubuntu 12.04

2012-06-19 Thread Travis Rhoden
Hey folks, Ran into this today. Not sure what I did wrong. =) I had an RBD successfully mounted and was done with it. Proceeded to do the following: root@spcnode2:~# ls /sys/bus/rbd/devices/ 0 root@spcnode2:~# echo 0 > /sys/bus/rbd/remove root@spcnode2:~# ls /sys/bus/rbd/devices/ --- At

Re: kernel crash from RBD in Ubuntu 12.04

2012-06-19 Thread Alex Elder
On 06/19/2012 01:32 PM, Travis Rhoden wrote: Hey folks, Ran into this today. Not sure what I did wrong. =) It appears you are running Linux 3.2.0. This has symptoms that could be explained by a bug that has been fixed in newer Ceph code. Specifically, I think this is the fix that, without

Re: kernel crash from RBD in Ubuntu 12.04

2012-06-19 Thread Travis Rhoden
Awesome. Thanks Alex. I'll eagerly await 0.48 once it has finished QA. - Travis On Tue, Jun 19, 2012 at 2:45 PM, Alex Elder el...@dreamhost.com wrote: On 06/19/2012 01:32 PM, Travis Rhoden wrote: Hey folks, Ran into this today.  Not sure what I did wrong.  =) It appears you are running

Re: Release names survey

2012-06-19 Thread Yehuda Sadeh
Official results! 1. Cephalopods / Marine Animals 2. Oceans / Seas 3. Oceanic Trenches 4. Pirates 5. Ocean Currents It looks like the first Ceph stable release will be code-named 'Argonaut'. Yehuda On Tue, Jun 12, 2012 at 4:50 PM, Yehuda Sadeh yeh...@inktank.com wrote: When will there be 1.0?

Re: Building a small Ceph development environment

2012-06-19 Thread Dan Mick
Yeah, it's certainly doable to run all the daemons on one server; they don't even really need a separate disk, but that's generally a nice partitioning. You can usually get help on irc://irc.oftc.net/#ceph, too. On 06/19/2012 10:54 AM, Gregory Farnum wrote: You don't need to virtualize

Re: kernel crash from RBD in Ubuntu 12.04

2012-06-19 Thread Dan Mick
Actually, it appears this fix is in the kernel (repo 'ceph-client'), so I don't think 0.48 will contain it (I could be wrong). You may need to grab that repo and build the kernel (or wait until that sha1 gets into your distro's kernel release). On 06/19/2012 11:50 AM, Travis Rhoden wrote:

Re: Building a small Ceph development environment

2012-06-19 Thread Gregory Farnum
Everything works out, but because RHEL6 lacks syncfs support your performance will be less predictable. If you have more than one OSD on a box without syncfs() support you'll certainly want to run btrfs if you can. (Ceph daemons are very concerned with their data integrity — for the obvious