On 22.08.2013 05:34, Samuel Just wrote:
It's not really possible at this time to control that limit because
changing the primary is actually fairly expensive and doing it
unnecessarily would probably make the situation much worse
I'm sorry, but remapping or backfilling is far less expensive.
Have you tried setting osd_recovery_clone_overlap to false? That
seemed to help with Stefan's issue.
-Sam
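For anyone wanting to try this, a sketch of where the setting would go. The option name is taken verbatim from the suggestion above; treat the fragment as an illustration, not a guaranteed fix:

```ini
# ceph.conf fragment, assuming the option name quoted above
[osd]
    osd recovery clone overlap = false
```

Alternatively, an option like this can usually be pushed into running OSDs via injectargs (cuttlefish-era syntax, sketch only): ceph osd tell \* injectargs '--osd_recovery_clone_overlap false'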
On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson mike.daw...@cloudapt.com wrote:
Sam/Josh,
We upgraded from 0.61.7 to 0.67.1 during a maintenance window this morning,
hoping it would improve
Subject: Re: still recovery issues with cuttlefish
On 21.08.2013 17:32, Samuel Just wrote:
Have you tried setting osd_recovery_clone_overlap to false? That
seemed to help with Stefan's issue.
This might sound a bit harsh, but maybe that's due to my limited English skills ;-)
I still think that Ceph's recovery system is broken by design. If an OSD
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery. A
request only waits on recovery if the particular object being read or
written must be recovered. Your issue was that recovering the
particular object being
Hi Sam,
On 21.08.2013 21:13, Samuel Just wrote:
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery.
Sure, but remember: with a random 4K VM workload, a lot of objects go
out of date pretty quickly.
A request
It's not really possible at this time to control that limit because
changing the primary is actually fairly expensive and doing it
unnecessarily would probably make the situation much worse (it's
mostly necessary for backfilling, which is expensive anyway). It
seems like forwarding IO on an
the same problem still occurs. I will need to check when I have time to
gather logs again.
On 14.08.2013 01:11, Samuel Just wrote:
I'm not sure, but your logs did show that you had 16 recovery ops in
flight, so it's worth a try. If it doesn't help, you should collect
the same set of logs I'll
I just backported a couple of patches from next to fix a bug where we
weren't respecting the osd_recovery_max_active config in some cases
(1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e). You can either try the
current cuttlefish branch or wait for a 61.8 release.
-Sam
On Mon, Aug 12, 2013 at 10:34
I'm not sure, but your logs did show that you had 16 recovery ops in
flight, so it's worth a try. If it doesn't help, you should collect
the same set of logs I'll look again. Also, there are a few other
patches between 61.7 and current cuttlefish which may help.
-Sam
On Tue, Aug 13, 2013 at
I got swamped today. I should be able to look tomorrow. Sorry!
-Sam
On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Did you take a look?
Stefan
On 11.08.2013 at 05:50, Samuel Just sam.j...@inktank.com wrote:
Great! I'll take a look on Monday.
I think Stefan's problem is probably distinct from Mike's.
Stefan: Can you reproduce the problem with
debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20
on a few osds (including the restarted osd), and upload those osd logs
along with the ceph.log from before killing the osd
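For reference, the debug levels Sam asks for, expressed as a ceph.conf fragment (apply on the affected OSDs and restart them, or inject the values at runtime):

```ini
# ceph.conf fragment with the debug levels requested above
[osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 1
    debug optracker = 20
```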
Stefan,
I see the same behavior and I theorize it is linked to an issue detailed
in another thread [0]. Do your VM guests ever hang while your cluster is
HEALTH_OK like described in that other thread?
[0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982
A few observations:
Hi Mike,
On 08.08.2013 16:05, Mike Dawson wrote:
Stefan,
I see the same behavior and I theorize it is linked to an issue detailed
in another thread [0]. Do your VM guests ever hang while your cluster is
HEALTH_OK like described in that other thread?
[0]
On 01.08.2013 23:23, Samuel Just wrote: Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd.osdid.asok config show
Sure.
{ "name": "osd.0",
  "cluster": "ceph",
  "none": "0/5",
  "lockdep": "0/0",
  "context": "0/0",
  "crush": "0/0",
  "mds": "0/0",
  "mds_balancer": "0/0",
  "mds_locker": "0/0",
You might try turning osd_max_backfills to 2 or 1.
-Sam
On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe s.pri...@profihost.ag wrote:
On 01.08.2013 23:23, Samuel Just wrote: Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd.osdid.asok config show
Sure.
{ "name": "osd.0",
Created #5844.
On Thu, Aug 1, 2013 at 10:38 PM, Samuel Just sam.j...@inktank.com wrote:
Is there a bug open for this? I suspect we don't sufficiently
throttle the snapshot removal work.
-Sam
On Thu, Aug 1, 2013 at 7:50 AM, Andrey Korolyov and...@xdel.ru wrote:
Second this. Also for
I already tried both values; it makes no difference. The drives are not
the bottleneck.
On 02.08.2013 19:35, Samuel Just wrote:
You might try turning osd_max_backfills to 2 or 1.
-Sam
On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe s.pri...@profihost.ag wrote:
On 01.08.2013 23:23, Samuel Just wrote:
Also, you have osd_recovery_op_priority at 50. That is close to the
priority of client IO. You want it below 10 (defaults to 10), perhaps
at 1. You can also adjust down osd_recovery_max_active.
-Sam
On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe s.pri...@profihost.ag wrote:
I already tried
Hi,
osd recovery max active = 1
osd max backfills = 1
osd recovery op priority = 5
still no difference...
Stefan
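Collected in one place, the throttling knobs discussed in this thread, as a ceph.conf fragment. It uses osd_recovery_op_priority at 1 rather than 5, per Sam's suggestion above; this is a sketch of the knobs, not a guaranteed fix:

```ini
# ceph.conf fragment combining the recovery-throttling knobs discussed above
[osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1
```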
On 02.08.2013 20:21, Samuel Just wrote:
Also, you have osd_recovery_op_priority at 50. That is close to the
priority of client IO. You want it below 10
Second this. Also, regarding the long-lasting snapshot problem and related
performance issues, I can say that cuttlefish improved things greatly,
but creation/deletion of a large snapshot (hundreds of gigabytes of
committed data) can still bring the cluster down for minutes, despite
usage of every possible
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam
On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
i still have recovery issues with cuttlefish. After the OSD comes
On 01.08.2013 20:34, Samuel Just wrote:
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam
Sure, which log levels?
On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
For now, just the main ceph.log.
-Sam
On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe s.pri...@profihost.ag wrote:
On 01.08.2013 20:34, Samuel Just wrote:
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam
It doesn't have log levels; it should be in /var/log/ceph/ceph.log.
-Sam
On Thu, Aug 1, 2013 at 11:36 AM, Samuel Just sam.j...@inktank.com wrote:
I am also seeing recovery issues with 0.61.7. Here's the process:
- ceph osd set noout
- Reboot one of the nodes hosting OSDs
- VMs mounted from RBD volumes work properly
- I see the OSD's boot messages as they re-join the cluster
- Start seeing active+recovery_wait, peering, and
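Mike's steps above, sketched as the corresponding commands (the hostname is a placeholder; the noout flag prevents the down OSDs from being marked out and triggering data migration while the node reboots):

```
# flag the cluster so the down OSDs are not marked out
ceph osd set noout

# reboot the OSD node (placeholder hostname)
ssh osd-node-1 reboot

# watch the OSDs rejoin and the PGs peer and recover
ceph -w

# once everything is active+clean again, clear the flag
ceph osd unset noout
```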
Mike, we already have the async patch running. Yes, it helps, but it only
helps; it does not solve the problem. It just hides the issue ...
On 01.08.2013 20:54, Mike Dawson wrote:
I am also seeing recovery issues with 0.61.7. Here's the process:
- ceph osd set noout
- Reboot one of the nodes hosting OSDs
Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd.osdid.asok config show
-Sam
On Thu, Aug 1, 2013 at 12:07 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Mike we already have the async patch running. Yes it helps but only helps it
does not solve. It just hides the issue ...