From: Wei Yongjun
Using list_move_tail() instead of list_del() + list_add_tail().
Signed-off-by: Wei Yongjun
---
net/ceph/pagelist.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/net/ceph/pagelist.c b/net/ceph/pagelist.c
index 665cd23..92866be 100644
--- a/net/ceph/p
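The diff is cut off here; the change it describes is the standard list.h
refactor sketched below (field names are illustrative, recalled from
pagelist.c, not the exact hunk):

	/* before: delete from one list position, then append at the tail */
	list_del(&page->lru);
	list_add_tail(&page->lru, &pl->head);

	/* after: list_move_tail() does both steps in one call */
	list_move_tail(&page->lru, &pl->head);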
Hi List,
We have added rbd 0.48 support to the Proxmox 2.1 KVM distribution:
http://www.proxmox.com/products/proxmox-ve
Proxmox setup:
Edit /etc/pve/storage.cfg and add the configuration (GUI creation is not
available yet):
rbd: mycephcluster
monhost 192.168.0.1:6789;192.168.0.2:678
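The entry above is truncated; a fuller sketch of such a storage.cfg entry,
with placeholder values (key names as in the Proxmox storage documentation;
verify against your version):

rbd: mycephcluster
        monhost 192.168.0.1:6789;192.168.0.2:6789
        pool rbd
        username admin
        content images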
On Tue, 4 Sep 2012, Sławomir Skowron wrote:
> Valgrind returns nothing.
>
> valgrind --tool=massif --log-file=ceph_mon_valgrind ceph-mon -i 0 > log.txt
The fork is probably confusing it. I usually pass -f to ceph-mon (or
ceph-osd, etc.) to keep it in the foreground. Can you give that a go?
E.g.:
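Presumably the earlier command with -f appended, i.e. something like:

valgrind --tool=massif --log-file=ceph_mon_valgrind ceph-mon -i 0 -f > log.txt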
Finally applied this one. Great work, Wido!
sage
On Wed, 8 Aug 2012, Wido den Hollander wrote:
> The basic documentation about how you can use RBD with CloudStack
>
> Signed-off-by: Wido den Hollander
> ---
> doc/rbd/rbd-cloudstack.rst | 49
>
Reviewed-by: Josh Durgin
On 09/04/2012 11:08 AM, Alex Elder wrote:
This fixes a bug that went in with this commit:
commit f6e0c99092cca7be00fca4080cfc7081739ca544
Author: Alex Elder
Date: Thu Aug 2 11:29:46 2012 -0500
rbd: simplify __rbd_init_snaps_header()
The problem
Reviewed-by: Dan Mick
On 09/04/2012 11:08 AM, Alex Elder wrote:
In the on-disk image header structure there is a field "block_name"
which represents what we now call the "object prefix" for an rbd
image. Rename this field "object_prefix" to be consistent with
modern usage.
This appears to be
Reviewed-by: Josh Durgin
On 09/04/2012 11:08 AM, Alex Elder wrote:
In the on-disk image header structure there is a field "block_name"
which represents what we now call the "object prefix" for an rbd
image. Rename this field "object_prefix" to be consistent with
modern usage.
This appears to
In the on-disk image header structure there is a field "block_name"
which represents what we now call the "object prefix" for an rbd
image. Rename this field "object_prefix" to be consistent with
modern usage.
This appears to be the only remaining vestige of the use of "block"
in symbols that rep
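Concretely, the rename presumably amounts to the one-line struct change
below in the on-disk header definition (a sketch from memory; the macro
name is included for illustration only):

-	char block_name[RBD_MAX_BLOCK_NAME_SIZE];
+	char object_prefix[RBD_MAX_BLOCK_NAME_SIZE];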
This fixes a bug that went in with this commit:
commit f6e0c99092cca7be00fca4080cfc7081739ca544
Author: Alex Elder
Date: Thu Aug 2 11:29:46 2012 -0500
rbd: simplify __rbd_init_snaps_header()
The problem is that a new rbd snapshot needs to go either after an
existing snapshot en
On Tue, Sep 4, 2012 at 9:19 AM, Andrew Thompson wrote:
> Yes, it was my `data` pool I was trying to grow. After renaming and removing
> the original data pool, I can `ls` my folders/files, but not access them.
Yup, you're seeing ceph-mds being able to access the "metadata" pool,
but all the direc
On Tue, 4 Sep 2012, Andrew Thompson wrote:
> On 9/4/2012 11:59 AM, Tommi Virtanen wrote:
> > On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson wrote:
> > > Looking at old archives, I found this thread which shows that to mount a
> > > pool as cephfs, it needs to be added to mds:
> > >
> > > h
On 9/4/2012 11:59 AM, Tommi Virtanen wrote:
On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson wrote:
Looking at old archives, I found this thread which shows that to mount a
pool as cephfs, it needs to be added to mds:
http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
I start
On Sun, Sep 2, 2012 at 6:36 AM, Nick Couchman wrote:
> One additional piece of info... I did find the "-d" flag (documented in the
> radosgw-admin man page, but not in the radosgw man page) that keeps the
> daemon in the foreground and prints messages to stderr. When I use this flag
> I get the
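That is, roughly (the exact invocation depends on the local setup; -c is
the usual ceph configuration-file flag):

radosgw -d -c /etc/ceph/ceph.conf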
On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson wrote:
> Looking at old archives, I found this thread which shows that to mount a
> pool as cephfs, it needs to be added to mds:
>
> http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
>
> I started a `rados cppool data tempstore` a
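The full procedure being attempted was presumably a sequence along these
lines (command names as of that era; a sketch, not a tested recipe):

rados cppool data tempstore
ceph osd pool delete data
ceph osd pool rename tempstore data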
On Fri, Aug 31, 2012 at 11:02 PM, Ryan Nicholson wrote:
> Secondly: through some trials, I've found that if one loses all of one's
> monitors in a way that they also lose their disks, one basically loses the
> cluster. I would like to recommend a lower-priority shift in design that
> allows for
On Fri, Aug 31, 2012 at 1:36 PM, Sage Weil wrote:
> Okay, it's trivial to change 'pool' to 'root' in the default generated
> crush map, and update all the docs accordingly. The problem is that some
> stuff built on top of ceph has 'pool=default' in there, including our chef
> cookbooks and those
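For context, the rename affects the top-level bucket of the generated crush
map; roughly (bucket body abbreviated):

# before
pool default {
        ...
}

# after
root default {
        ...
}

with rules still doing 'step take default' in both cases.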
On Tue, 4 Sep 2012, Wido den Hollander wrote:
> On 09/04/2012 10:30 AM, Skowron Sławomir wrote:
> > Ok, thanks.
> >
> > Number of workers used for recovery, or number of disk threads.
> >
>
> I think those can be changed while the OSD is running. You could always give
> it a try.
The thread pool
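The reply is cut off, but the mechanism would presumably be the same
injectargs command shown later in this thread, applied to the recovery and
disk-thread options (option names assumed from that release):

ceph osd tell 0 injectargs '--osd-recovery-max-active 5'
ceph osd tell 0 injectargs '--osd-disk-threads 2'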
On Tue, 4 Sep 2012, Andrey Korolyov wrote:
> Hi,
>
> Almost always one or more osd dies when doing overlapped recovery -
> e.g. add new crushmap and remove some newly added osds from cluster
> some minutes later during remap or inject two slightly different
> crushmaps after a short time (surely pr
Valgrind returns nothing.
valgrind --tool=massif --log-file=ceph_mon_valgrind ceph-mon -i 0 > log.txt
==30491== Massif, a heap profiler
==30491== Copyright (C) 2003-2011, and GNU GPL'd, by Nicholas Nethercote
==30491== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==30491== Co
Yes, I will try :) Thanks.
-----Original Message-----
From: Wido den Hollander [mailto:w...@widodh.nl]
Sent: Tuesday, September 04, 2012 11:36 AM
To: Skowron Sławomir
Cc: ceph-devel@vger.kernel.org
Subject: Re: Inject configuration change into cluster
On 09/04/2012 10:30 AM, Skowron Sławomir wrot
On 09/04/2012 10:30 AM, Skowron Sławomir wrote:
Ok, thanks.
Number of workers used for recovery, or number of disk threads.
I think those can be changed while the OSD is running. You could always
give it a try.
Wido
-----Original Message-----
From: Wido den Hollander [mailto:w...@widodh.nl
Ok, thanks.
Number of workers used for recovery, or number of disk threads.
-----Original Message-----
From: Wido den Hollander [mailto:w...@widodh.nl]
Sent: Tuesday, September 04, 2012 10:18 AM
To: Skowron Sławomir
Cc: ceph-devel@vger.kernel.org
Subject: Re: Inject configuration change into clust
On 09/04/2012 07:04 AM, Skowron Sławomir wrote:
Is there any way now to inject a configuration change without restarting
the daemons?
Yes, you can use the injectargs command.
$ ceph osd tell 0 injectargs '--debug-osd 20'
What do you want to change? Not everything can be changed while the
Hi,
Almost always one or more osd dies when doing overlapped recovery -
e.g. add new crushmap and remove some newly added osds from cluster
some minutes later during remap or inject two slightly different
crushmaps after a short time (surely preserving at least one of
replicas online). Seems that o