Hi guys,
Today, looking at my graphs, I noticed that one out of my 4 ceph nodes uses a
lot of memory. It keeps growing and growing.
See the graph attached to this mail.
I run 0.48.2 on Ubuntu 12.04.
The other nodes also grow, but more slowly than the first one.
I'm not quite sure about the information that
From: Yan, Zheng zheng.z@intel.com
Server::_rename_prepare() adds null dest dentry to the EMetaBlob if
the rename operation is overwriting remote linkage. This is incorrect
because null dentry are processed after primary and remote dentries
during journal replay. The erroneous null dentry
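To illustrate the ordering problem being described, here is a toy model (plain Python dicts standing in for the real MDS directory and EMetaBlob structures, which are far richer): during replay, primary and remote dentries are applied first and null dentries last, so an erroneous null dentry journaled for a rename destination wipes out the new remote linkage that the same blob just established.

```python
# Toy model of MDS journal replay ordering -- NOT the real data
# structures. A directory is a dict of name -> (kind, inode number);
# an EMetaBlob is a dict of dentry lists, replayed in this order:
# primary, then remote, then null.

def replay(dirtree, blob):
    for name, ino in blob.get("primary", []):
        dirtree[name] = ("primary", ino)
    for name, ino in blob.get("remote", []):
        dirtree[name] = ("remote", ino)
    # Null dentries are processed LAST: each one removes the linkage.
    for name in blob.get("null", []):
        dirtree.pop(name, None)

# Rename "a" -> "b", where "b" previously held a remote link. The blob
# correctly journals the new remote linkage for "b" and a null dentry
# for "a" -- but (the bug) also a null dentry for "b".
tree = {"a": ("remote", 100), "b": ("remote", 200)}
blob = {"remote": [("b", 100)], "null": ["a", "b"]}
replay(tree, blob)
print(tree)  # {} -- the null dentry clobbered the new linkage for "b"
```

After a correct replay, "b" should still exist and point at inode 100; because the null dentry for "b" runs last, the entry vanishes entirely.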
On 12/14/2012 07:43 PM, Sage Weil wrote:
We should drop this one, I think. See upstream commit
4c199a93a2d36b277a9fd209a0f2793f8460a215. When we added the similar call
on the request tree it caused some noise in linux-next and then got
removed.
Well, we need to initialize it. In
On 12/14/2012 11:17 PM, Sage Weil wrote:
Most of the code uses int64_t/__s64 for the pool id, although in a few
cases we screwed up and limited it to 32 bits. In reality, that's way
overkill anyway; we could have left it at 32 bits to begin with.
The differing types representing the same
Hi Jens,
Seems that the RPM packager likes to keep the latest and greatest
versions in http://ceph.com/rpm-testing/ but this path isn't defined
in the ceph yum repository.
Dino
On Sun, Dec 16, 2012 at 5:11 PM, Jens Kristian Søgaard
j...@mermaidconsulting.dk wrote:
Hi Sage,
v0.52 is also
On 12/13/2012 11:46 PM, norbi wrote:
hm...
now the MDS and MON don't start...
the problem seems to be the configuration differences in ceph.conf
http://ceph.com/docs/master/rados/configuration/ceph-conf/#the-ceph-conf-file
or is it a failure in the documentation?
It's a typo in the docs, which
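For reference, a minimal ceph.conf in the documented ini-style format (host names and addresses below are made up for illustration):

```ini
; minimal illustrative ceph.conf -- hosts and addresses are examples only
[global]
    auth supported = cephx

[mon.a]
    host = mon-host-1
    mon addr = 192.168.0.10:6789

[osd.0]
    host = osd-host-1
```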
On 12/16/2012 11:43 AM, Eric Renfro wrote:
Hello.
I just recently started using Ceph FS, and on the recommendation of its
developers in the IRC channel, I decided to start off with 0.55,
or rather whatever's closest to that in the latest git checkout from
git's master on 12/12/2012.
So
On Mon, 17 Dec 2012, Alex Elder wrote:
On 12/14/2012 07:43 PM, Sage Weil wrote:
We should drop this one, I think. See upstream commit
4c199a93a2d36b277a9fd209a0f2793f8460a215. When we added the similar call
on the request tree it caused some noise in linux-next and then got
removed.
On Mon, 17 Dec 2012, Alex Elder wrote:
On 12/14/2012 11:17 PM, Sage Weil wrote:
Most of the code uses int64_t/__s64 for the pool id, although in a few
cases we screwed up and limited it to 32 bits. In reality, that's way
overkill anyway; we could have left it at 32 bits to begin with.
On 12/17/2012 10:45 AM, Sage Weil wrote:
On Mon, 17 Dec 2012, Alex Elder wrote:
On 12/14/2012 07:43 PM, Sage Weil wrote:
We should drop this one, I think. See upstream commit
4c199a93a2d36b277a9fd209a0f2793f8460a215. When we added the similar call
on the request tree it caused some noise
On 12/17/2012 10:49 AM, Sage Weil wrote:
On Mon, 17 Dec 2012, Alex Elder wrote:
On 12/14/2012 11:17 PM, Sage Weil wrote:
Most of the code uses int64_t/__s64 for the pool id, although in a few
cases we screwed up and limited it to 32 bits. In reality, that's way
overkill anyway; we could
Would be nice indeed, but IIRC, nilfs does not support xattrs, yet...
On Fri, Dec 14, 2012 at 1:47 AM, Sage Weil s...@inktank.com wrote:
nilfs2 has a 'continuous snapshotting' architecture that the ceph-osd can
take advantage of for making fully-consistent checkpoints of state for
recovering
On 12/13/2012 07:37 AM, Stratos Psomadakis wrote:
Signed-off-by: Stratos Psomadakis pso...@grnet.gr
---
Hi Josh,
This patch adds the '--json' flag to enable dumping the showmapped output in
json format (as you suggested). I'm not sure if any other rbd subcommands could
make use of this flag (so
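As an aside, the shape of output such a flag typically produces can be sketched as follows (the field names here are guesses for illustration, not the actual rbd showmapped schema):

```python
import json

# Hypothetical showmapped-style records; keys are illustrative only,
# not the real output format of 'rbd showmapped --json'.
mapped = [
    {"id": "0", "pool": "rbd", "image": "test", "snap": "-",
     "device": "/dev/rbd0"},
]
print(json.dumps(mapped, indent=2))
```

The point of a JSON mode is exactly this round-trippability: scripts can `json.loads()` the output instead of scraping columns.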
On 12/14/2012 12:57 AM, Stratos Psomadakis wrote:
On 12/13/2012 07:17 PM, Yehuda Sadeh wrote:
On Thu, Dec 13, 2012 at 7:37 AM, Stratos Psomadakis pso...@grnet.gr wrote:
Signed-off-by: Stratos Psomadakis pso...@grnet.gr
---
Hi Josh,
This patch adds the '--json' flag to enable dumping the
On Mon, Dec 17, 2012 at 9:32 AM, Josh Durgin josh.dur...@inktank.com wrote:
On 12/14/2012 12:57 AM, Stratos Psomadakis wrote:
On 12/13/2012 07:17 PM, Yehuda Sadeh wrote:
On Thu, Dec 13, 2012 at 7:37 AM, Stratos Psomadakis pso...@grnet.gr
wrote:
Signed-off-by: Stratos Psomadakis
For those interested; I've updated my branch at github. I've
made the old config/new config switchable using autoconf.
I think that lowers the time needed to get it up and running.
I've added an option to do_autogen to use the new config;
./do_autogen.sh -c
Should Generally Build (tm), but could be
On Thu, Dec 13, 2012 at 4:47 PM, Sage Weil s...@inktank.com wrote:
nilfs2 has a 'continuous snapshotting' architecture that the ceph-osd can
take advantage of for making fully-consistent checkpoints of state for
recovering from a crash. If these checkpoints are efficient enough, in
fact, you
The first one of these is an update based on a previous post.
The other two are new, but basically address the same issue
in two other spots in the osd client code.
-Alex
[PATCH 1/3] libceph: init osd->o_node in create_osd()
[PATCH 2/3] libceph: init
The red-black node in the ceph osd structure is not initialized
in create_osd(). Because this node can be the subject of a
RB_EMPTY_NODE() call later on, we should ensure the node is
initialized properly for that. Add a call to RB_CLEAR_NODE() to
initialize it.
Signed-off-by: Alex Elder
The red-black node in the ceph osd event structure is not
initialized in ceph_osdc_create_event(). Because this node can
be the subject of a RB_EMPTY_NODE() call later on, we should ensure
the node is initialized properly for that.
Signed-off-by: Alex Elder el...@inktank.com
---
The red-black node in the ceph osd request structure is initialized
in ceph_osdc_alloc_request() using rb_init_node(). We do need to
initialize this, because in __unregister_request() we call
RB_EMPTY_NODE(), which expects the node it's checking to have
been initialized. But rb_init_node() is
On 12/17/2012 09:35 AM, Yehuda Sadeh wrote:
On Mon, Dec 17, 2012 at 9:32 AM, Josh Durgin josh.dur...@inktank.com wrote:
On 12/14/2012 12:57 AM, Stratos Psomadakis wrote:
On 12/13/2012 07:17 PM, Yehuda Sadeh wrote:
On Thu, Dec 13, 2012 at 7:37 AM, Stratos Psomadakis pso...@grnet.gr
wrote:
On 12/17/2012 03:28 PM, Alex Elder wrote:
On 12/17/2012 11:09 AM, Alex Elder wrote:
On 12/17/2012 10:49 AM, Sage Weil wrote:
On Mon, 17 Dec 2012, Alex Elder wrote:
On 12/14/2012 11:17 PM, Sage Weil wrote:
Most of the code uses int64_t/__s64 for the pool id, although in a few
cases we
Hi,
No, I don't see anything abnormal in the network stats. I don't see
anything in the logs... :(
The weird thing is that one node out of 4 seems to take way more memory
than the others...
--
Regards,
Sébastien Han.
On Mon, Dec 17, 2012 at 11:31 PM, Sébastien Han han.sebast...@gmail.com wrote:
Hi,
Format 2 images (and attendant layering support) are not yet
supported by the kernel rbd client, according to:
http://ceph.com/docs/master/rbd/rbd-snapshot/#layering
When might this support be available?
Cheers,
Chris
On 12/14/2012 03:41 PM, Jim Schutt wrote:
Hi,
I'm looking at commit e3ed28eb2 in the next branch,
and I have a question.
Shouldn't the limit be pg_num 65536, because
PGs are numbered 0 thru pg_num-1?
If not, what am I missing?
FWIW, up through yesterday I've been using the next branch and
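The off-by-one Jim is asking about is easy to state concretely: if PGs are numbered 0 through pg_num-1, then the largest valid PG id is one less than pg_num, so a limit on the id and a limit on pg_num differ by one.

```python
# PGs are numbered 0 .. pg_num-1, so the highest valid PG id is
# pg_num - 1, not pg_num itself.
pg_num = 65536
valid_pg_ids = range(pg_num)   # 0 .. 65535
print(max(valid_pg_ids))       # 65535, i.e. pg_num - 1
```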