Re: SimpleMessenger dispatching: cause of performance problems?

2012-09-04 Thread Andreas Bluemle
Hi, I have run tests with v0.50 and see the same symptom. However, I also have the impression that the problem is related to the frequency at which requests arrive at the OSD. I have run tests with the rbd kernel client using 126 KByte and 512 KByte sizes for write requests. In the 2nd case, the

Re: OSD crash

2012-09-04 Thread Andrey Korolyov
Hi, Almost always one or more osds die when doing overlapped recovery - e.g. adding a new crushmap and removing some newly added osds from the cluster some minutes later during remap, or injecting two slightly different crushmaps after a short time (surely preserving at least one of the replicas online). Seems that
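For context, the crushmap inject cycle being described here is normally driven with crushtool and ceph osd setcrushmap; a minimal sketch of that workflow, with illustrative file names:

$ ceph osd getcrushmap -o map.bin       # fetch the current compiled crushmap
$ crushtool -d map.bin -o map.txt       # decompile it for editing
$ crushtool -c map.txt -o map-new.bin   # recompile after making changes
$ ceph osd setcrushmap -i map-new.bin   # inject the new map into the cluster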

Re: Inject configuration change into cluster

2012-09-04 Thread Wido den Hollander
On 09/04/2012 07:04 AM, Skowron Sławomir wrote: Is there any way now to inject a new configuration change without restarting daemons? Yes, you can use the injectargs command. $ ceph osd tell 0 injectargs '--debug-osd 20' What do you want to change? Not everything can be changed while the
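For reference, injectargs takes the same option syntax as ceph.conf and applies it to the running daemon addressed by the tell command; a minimal sketch (the second value is illustrative):

$ ceph osd tell 0 injectargs '--debug-osd 20'    # raise debug logging on osd.0
$ ceph osd tell 0 injectargs '--debug-osd 0/5'   # later, turn it back down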

RE: Inject configuration change into cluster

2012-09-04 Thread Skowron Sławomir
Ok, thanks. Number of workers used for recovery, or number of disk threads. -Original Message- From: Wido den Hollander [mailto:w...@widodh.nl] Sent: Tuesday, September 04, 2012 10:18 AM To: Skowron Sławomir Cc: ceph-devel@vger.kernel.org Subject: Re: Inject configuration change into

Re: Inject configuration change into cluster

2012-09-04 Thread Wido den Hollander
On 09/04/2012 10:30 AM, Skowron Sławomir wrote: Ok, thanks. Number of workers used for recovery, or number of disk threads. I think those can be changed while the OSD is running. You could always give it a try. Wido -Original Message- From: Wido den Hollander

RE: Inject configuration change into cluster

2012-09-04 Thread Skowron Sławomir
Yes, I will try :) thanks. -Original Message- From: Wido den Hollander [mailto:w...@widodh.nl] Sent: Tuesday, September 04, 2012 11:36 AM To: Skowron Sławomir Cc: ceph-devel@vger.kernel.org Subject: Re: Inject configuration change into cluster On 09/04/2012 10:30 AM, Skowron Sławomir

Re: mon memory issue

2012-09-04 Thread Sławomir Skowron
Valgrind returns nothing. valgrind --tool=massif --log-file=ceph_mon_valgrind ceph-mon -i 0 log.txt ==30491== Massif, a heap profiler ==30491== Copyright (C) 2003-2011, and GNU GPL'd, by Nicholas Nethercote ==30491== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info ==30491==

Re: OSD crash

2012-09-04 Thread Sage Weil
On Tue, 4 Sep 2012, Andrey Korolyov wrote: Hi, Almost always one or more osds die when doing overlapped recovery - e.g. adding a new crushmap and removing some newly added osds from the cluster some minutes later during remap, or injecting two slightly different crushmaps after a short time (surely

Re: Inject configuration change into cluster

2012-09-04 Thread Sage Weil
On Tue, 4 Sep 2012, Wido den Hollander wrote: On 09/04/2012 10:30 AM, Skowron Sławomir wrote: Ok, thanks. Number of workers used for recovery, or number of disk threads. I think those can be changed while the OSD is running. You could always give it a try. The thread pool sizes can't
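As a side note to this thread, options that are simple values can typically be injected and then checked over the daemon's admin socket; a hedged sketch, assuming the osd_recovery_max_active option name and the default admin socket path:

$ ceph osd tell 0 injectargs '--osd-recovery-max-active 1'
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep recovery   # verify the value the daemon is actually using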

Re: Integration work

2012-09-04 Thread Tommi Virtanen
On Fri, Aug 31, 2012 at 11:02 PM, Ryan Nicholson ryan.nichol...@kcrg.com wrote: Secondly: Through some trials, I've found that if one loses all of one's Monitors in a way that they also lose their disks, one basically loses the cluster. I would like to recommend a lower priority shift in

Re: Very unbalanced storage

2012-09-04 Thread Tommi Virtanen
On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson andre...@aktzero.com wrote: Looking at old archives, I found this thread which shows that to mount a pool as cephfs, it needs to be added to mds: http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685 I started a `rados cppool
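For readers following the thread: the copy-and-rename attempt described here revolves around rados cppool; roughly, with illustrative pool names, it looks like the commands below (whether the result is actually usable by cephfs is exactly what the rest of the thread is about):

$ rados mkpool data-new                  # destination pool has to exist first
$ rados cppool data data-new             # copy every object across
$ ceph osd pool rename data data-old     # swap names so the copy takes over the original name
$ ceph osd pool rename data-new data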

Re: radosgw keeps dying

2012-09-04 Thread Tommi Virtanen
On Sun, Sep 2, 2012 at 6:36 AM, Nick Couchman nick.couch...@seakr.com wrote: One additional piece of info...I did find the -d flag (documented in the radosgw-admin man page, but not in the radosgw man page) that keeps the daemon in the foreground and prints messages to stderr. When I use
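For this kind of crash, the -d flag mentioned above can be combined with higher log levels passed on the command line; a hedged sketch, where the instance name and debug levels are illustrative:

$ radosgw -d -n client.radosgw.gateway --debug-rgw 20 --debug-ms 1   # run in the foreground and log to stderr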

Re: Very unbalanced storage

2012-09-04 Thread Andrew Thompson
On 9/4/2012 11:59 AM, Tommi Virtanen wrote: On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson andre...@aktzero.com wrote: Looking at old archives, I found this thread which shows that to mount a pool as cephfs, it needs to be added to mds:

Re: Very unbalanced storage

2012-09-04 Thread Tommi Virtanen
On Tue, Sep 4, 2012 at 9:19 AM, Andrew Thompson andre...@aktzero.com wrote: Yes, it was my `data` pool I was trying to grow. After renaming and removing the original data pool, I can `ls` my folders/files, but not access them. Yup, you're seeing ceph-mds being able to access the metadata pool,
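The step the linked thread refers to is registering the extra pool with the mds so cephfs may place file data in it; a rough sketch, where the add_data_pool command name and the use of the numeric pool id are assumptions about the tooling of this era:

$ ceph osd dump | grep '^pool'   # look up the numeric id of the new pool
$ ceph mds add_data_pool 3       # tell the mds it may use that pool for file data (3 is illustrative)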

[PATCH] rbd: add new snapshots at the tail

2012-09-04 Thread Alex Elder
This fixes a bug that went in with this commit: commit f6e0c99092cca7be00fca4080cfc7081739ca544 Author: Alex Elder el...@inktank.com Date: Thu Aug 2 11:29:46 2012 -0500 rbd: simplify __rbd_init_snaps_header() The problem is that a new rbd snapshot needs to go either after an

[PATCH] rbd: rename block_name -> object_prefix

2012-09-04 Thread Alex Elder
In the on-disk image header structure there is a field block_name which represents what we now call the object prefix for an rbd image. Rename this field to object_prefix to be consistent with modern usage. This appears to be the only remaining vestige of the use of block in symbols that represent

Re: [PATCH] rbd: rename block_name -> object_prefix

2012-09-04 Thread Josh Durgin
Reviewed-by: Josh Durgin josh.dur...@inktank.com On 09/04/2012 11:08 AM, Alex Elder wrote: In the on-disk image header structure there is a field block_name which represents what we now call the object prefix for an rbd image. Rename this field to object_prefix to be consistent with modern usage.

Re: [PATCH] rbd: rename block_name -> object_prefix

2012-09-04 Thread Dan Mick
Reviewed-by: Dan Mick dan.m...@inktank.com On 09/04/2012 11:08 AM, Alex Elder wrote: In the on-disk image header structure there is a field block_name which represents what we now call the object prefix for an rbd image. Rename this field to object_prefix to be consistent with modern usage. This

Re: [PATCH] rbd: add new snapshots at the tail

2012-09-04 Thread Josh Durgin
Reviewed-by: Josh Durgin josh.dur...@inktank.com On 09/04/2012 11:08 AM, Alex Elder wrote: This fixes a bug that went in with this commit: commit f6e0c99092cca7be00fca4080cfc7081739ca544 Author: Alex Elder el...@inktank.com Date: Thu Aug 2 11:29:46 2012 -0500 rbd:

Re: [PATCH] docs: Add CloudStack documentation

2012-09-04 Thread Sage Weil
Finally applied this one. Great work, Wido! sage On Wed, 8 Aug 2012, Wido den Hollander wrote: The basic documentation about how you can use RBD with CloudStack Signed-off-by: Wido den Hollander w...@widodh.nl --- doc/rbd/rbd-cloudstack.rst | 49

Re: mon memory issue

2012-09-04 Thread Sage Weil
On Tue, 4 Sep 2012, Sławomir Skowron wrote: Valgrind returns nothing. valgrind --tool=massif --log-file=ceph_mon_valgrind ceph-mon -i 0 log.txt The fork is probably confusing it. I usually pass -f to ceph-mon (or ceph-osd etc) to keep it in the foreground. Can you give that a go? e.g.,
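Concretely, the suggestion is to keep the monitor in the foreground so massif profiles the long-running process itself; a sketch along those lines:

$ valgrind --tool=massif --log-file=ceph_mon_valgrind ceph-mon -f -i 0   # -f keeps ceph-mon from daemonizing
$ ms_print massif.out.<pid>                                              # read the heap profile after the run ends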

rbd 0.48 storage support for kvm proxmox distribution available

2012-09-04 Thread Alexandre DERUMIER
Hi List, We have added rbd 0.48 support to the proxmox 2.1 kvm distribution http://www.proxmox.com/products/proxmox-ve Proxmox setup: edit the /etc/pve/storage.cfg and add the configuration (gui creation is not available yet) rbd: mycephcluster monhost
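For anyone trying this out, an rbd stanza in /etc/pve/storage.cfg along the announced lines might look roughly like the following; the field names and values are illustrative guesses rather than a copy of the poster's configuration, so check the Proxmox documentation for the exact keys your version expects:

rbd: mycephcluster
    monhost 192.168.0.1:6789
    pool rbd
    username admin
    content images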