On 01.08.2013 23:23, Samuel Just wrote: Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd.osdid.asok config show
Sure.
{ "name": "osd.0",
"cluster": "ceph",
"none": "0\/5",
"lockdep": "0\/0",
"context": "0\/0",
"crush": "0\/0",
"mds": "0\/0",
"mds_balancer": "0\/0",
"mds_locker": "0\/0",
Hi Noah
Thank you for your comments.
My company's policy states that all software needs to go through a
security assessment before it is allowed in production. As all our tools
are focused on Java, a native implementation would be far easier to
handle than a Java binding of librados (which
On Thu, Aug 1, 2013 at 11:19 PM, Yan, Zheng uker...@gmail.com wrote:
On Thu, Aug 1, 2013 at 7:51 PM, Sha Zhengju handai@gmail.com wrote:
From: Sha Zhengju handai@taobao.com
In the following patches we will begin to add memcg dirty page accounting
around __set_page_dirty_{buffers,nobuffers} in vfs
The function ceph_calc_ceph_pg may fail, so add a check for its return value.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/ioctl.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/ceph/ioctl.c b/fs/ceph/ioctl.c
index e0b4ef3..8c463dd 100644
---
On Fri, Aug 2, 2013 at 2:27 AM, Sage Weil s...@inktank.com wrote:
On Thu, 1 Aug 2013, Yan, Zheng wrote:
On Thu, Aug 1, 2013 at 7:51 PM, Sha Zhengju handai@gmail.com wrote:
From: Sha Zhengju handai@taobao.com
Following we will begin to add memcg dirty page accounting around
I want to mount ceph fs (using fuse) but /etc/fstab treats it as a local
filesystem and so tries to mount it before ceph is started, or indeed before
the network is even up.
Also, ceph tries to start before the network is up and fails because it can't
bind to an address. I think this is
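One common way to handle the ordering problem (a sketch, assuming the
mount.fuse.ceph helper from the ceph packages is installed; the id and
mount point below are placeholders) is to mark the fstab entry as a
network filesystem so mounting is deferred until networking is up:

    # /etc/fstab
    id=admin  /mnt/ceph  fuse.ceph  defaults,_netdev  0 0

Alternatively, adding noauto to the options and mounting late from
rc.local or a post-network hook avoids the early mount attempt entirely.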
Hi
I have asked this question in ceph-users, but did not get any
response, so I'll try my luck again, but with ceph-devel =)
Is there any way to copy part of one object into another one if they
reside in different pgs?
There is rados_clone_range, but it requires both objects to be inside one
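When the two objects cannot be placed in the same pg, the generic
client-side fallback is to read the range from the source object and write
it into the destination. A minimal sketch with the librados C API (error
handling abbreviated; pool, oids and offsets are whatever your application
uses):

    #include <rados/librados.h>
    #include <errno.h>
    #include <stdlib.h>

    /* Copy len bytes from src_oid@src_off to dst_oid@dst_off through the
     * client; works no matter which pgs the two objects map to. */
    static int copy_range(rados_ioctx_t io,
                          const char *src_oid, uint64_t src_off,
                          const char *dst_oid, uint64_t dst_off,
                          size_t len)
    {
        char *buf = malloc(len);
        int r;

        if (!buf)
            return -ENOMEM;
        r = rados_read(io, src_oid, buf, len, src_off);  /* bytes read or <0 */
        if (r >= 0)
            r = rados_write(io, dst_oid, buf, (size_t)r, dst_off);
        free(buf);
        return r;
    }

As far as I understand, rados_clone_range is executed by the pg's primary
OSD, which is why both objects have to map to the same pg; if you control
placement up front, giving both objects the same locator key
(rados_ioctx_locator_set_key) is the other way to satisfy that constraint.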
On Fri, Aug 2, 2013 at 5:04 PM, Sha Zhengju handai@gmail.com wrote:
On Thu, Aug 1, 2013 at 11:19 PM, Yan, Zheng uker...@gmail.com wrote:
On Thu, Aug 1, 2013 at 7:51 PM, Sha Zhengju handai@gmail.com wrote:
From: Sha Zhengju handai@taobao.com
Following we will begin to add
Hi Oleg,
On Fri, 2 Aug 2013, Oleg Krasnianskiy wrote:
Hi
I have asked this question in ceph-users, but did not get any
response, so I'll try my luck again, but with ceph-devel =)
Sorry about that!
Is there any way to copy part of one object into another one if they
reside in different
Hi Sam,
- coll_t needs to include a chunk_id_t.
https://github.com/athanatos/ceph/blob/2234bdf7fc30738363160d598ae8b4d6f75e1dd1/doc/dev/osd_internals/erasure_coding.rst#distinguished-acting-set-positions
Would that be for a sanity check? Since the rank of the chunk (chunk_id_t)
matches the
There are two short sessions for MDS blueprints:
mds: dumpability
mds: reduce memory usage
These are both reasonably self-contained projects that are easy for people
to get involved in but have relatively high payoff in terms of MDS
performance and debuggability. If anyone is interested
The reason for the chunk_id_t in the coll_t is to handle a tricky edge case,
where the acting set changes like this:
[0,1,2]
[3,1,2]
..time passes..
[3,0,2]
This should be exceedingly rare, but a single osd might end up with
copies of two different chunks of the same pg (here, osd 0 first held
chunk 0 and later rejoins at the position of chunk 1).
When an osd joins an acting set with a preexisting copy of the
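To illustrate the idea with a hypothetical sketch (these are not the
actual Ceph types, just the shape of the fix): if the collection id
carries the chunk rank alongside the pg id, the two copies land in
distinct collections and cannot collide:

    #include <stdio.h>

    /* Hypothetical collection id: pg id plus the rank of the chunk held. */
    struct chunk_coll {
        const char *pgid;   /* e.g. "1.2a" */
        int chunk;          /* chunk_id_t: rank within the acting set */
    };

    static void coll_name(const struct chunk_coll *c, char *buf, size_t len)
    {
        /* e.g. "1.2a_s0_head" and "1.2a_s1_head" stay distinct on one osd */
        snprintf(buf, len, "%s_s%d_head", c->pgid, c->chunk);
    }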
You might try turning osd_max_backfills down to 2 or 1.
-Sam
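For reference, the value can also be changed at runtime without restarting
the osds; the exact syntax varies a little between releases, so treat this
as a sketch:

    ceph tell osd.0 injectargs '--osd-max-backfills 1'   # repeat per osd

or set it persistently in ceph.conf under [osd]:

    [osd]
    osd max backfills = 1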
On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe s.pri...@profihost.ag wrote:
On 01.08.2013 23:23, Samuel Just wrote: Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd.osdid.asok config show
Sure.
{ "name": "osd.0",
Created #5844.
On Thu, Aug 1, 2013 at 10:38 PM, Samuel Just sam.j...@inktank.com wrote:
Is there a bug open for this? I suspect we don't sufficiently
throttle the snapshot removal work.
-Sam
On Thu, Aug 1, 2013 at 7:50 AM, Andrey Korolyov and...@xdel.ru wrote:
Second this. Also for
There is a session at CDS scheduled to discuss ceph-deploy (4:40pm PDT on
Monday). We'll be going over what we currently have in backlog for
improvements, but if you have any opinions about what else ceph-deploy
should or should not do or areas where it is problematic, please reply to
this
I already tried both values; this makes no difference. The drives are not
the bottleneck.
On 02.08.2013 19:35, Samuel Just wrote:
You might try turning osd_max_backfills down to 2 or 1.
-Sam
On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe s.pri...@profihost.ag wrote:
On 01.08.2013 23:23, Samuel Just wrote
Also, you have osd_recovery_op_priority at 50. That is close to the
priority of client IO. You want it below 10 (defaults to 10), perhaps
at 1. You can also adjust down osd_recovery_max_active.
-Sam
On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe s.pri...@profihost.ag wrote:
I already tried
Hi,
osd recovery max active = 1
osd max backfills = 1
osd recovery op priority = 5
still no difference...
Stefan
On 02.08.2013 20:21, Samuel Just wrote:
Also, you have osd_recovery_op_priority at 50. That is close to the
priority of client IO. You want it below 10
Hi,
First I would like to state that, with all its limitations, I have
managed to build multiple
clusters with ceph-deploy, and without it I would have been totally
lost. Things
that I feel would improve it include:
A debug mode where it lists everything it is doing. This will be
helpful
Applied, thanks!
On Fri, 2 Aug 2013, majianpeng wrote:
The function ceph_calc_ceph_pg may fail, so add a check for its return value.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/ioctl.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/ceph/ioctl.c
BTW, I was going to add this to the testing branch but it doesn't apply to
the current tree. Can you rebase on top of ceph-client.git #testing?
Thanks!
sage
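One way to do that rebase (the remote name and URL here are assumptions,
adjust to wherever you track ceph-client from):

    git remote add ceph-client https://github.com/ceph/ceph-client.git
    git fetch ceph-client
    git rebase ceph-client/testing
    git format-patch ceph-client/testing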
On Fri, 2 Aug 2013, majianpeng wrote:
cephfs . show_layout
layout.data_pool: 0
layout.object_size: 4194304
On Fri, 2 Aug 2013, Sha Zhengju wrote:
On Fri, Aug 2, 2013 at 2:27 AM, Sage Weil s...@inktank.com wrote:
On Thu, 1 Aug 2013, Yan, Zheng wrote:
On Thu, Aug 1, 2013 at 7:51 PM, Sha Zhengju handai@gmail.com wrote:
From: Sha Zhengju handai@taobao.com
Following we will begin to
I'm running ceph 0.61.7-1~bpo70+1 and I think there is a bug in /etc/init.d/ceph.
The heartbeat RA expects that the init.d script will return 3 for "not
running", but if there is no agent (e.g. mds) defined for that host it will
return 0 instead, so pacemaker thinks the agent is running on a node