Keep an eye on the new thread about OSD (and probably other) settings not
being picked up outside of the [global] section. You may be running
into something similar.
Regards
Mark
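The workaround for the issue Mark mentions is to place the affected option under [global] rather than [osd]. A minimal illustrative ceph.conf fragment, assuming the setting being tuned is an OSD threading option (the specific option name here is only an example, not confirmed from the thread):

```ini
; illustrative ceph.conf fragment: if an option set under [osd] appears not
; to take effect, try placing it under [global] as a workaround
[global]
osd op threads = 4        ; example option; substitute the setting you are tuning

[osd]
; osd op threads = 4      ; reportedly not picked up here in the affected versions
```

Verify the running value on an OSD afterwards before trusting any benchmark numbers.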
On 17/10/14 11:52, lakshmi k s wrote:
Thank you, Mark. Strangely, the Icehouse install that I have didn't seem to
have one.
Maybe we will try to update the repository and see if this resolves the issue?
Thanks for the help, guys.
James
-----Original Message-----
From: Ian Colle [mailto:ico...@redhat.com]
Sent: 16 October 2014 18:14
To: Loic Dachary
Cc: Support - Avantek; ceph-users
Subject: Re: [ceph-users] Error deploying
samuel samu60@... writes:
Hi all, This issue is also affecting us (CentOS 6.5-based Icehouse) and,
as far as I could read, it
comes from the fact that the path /var/lib/nova/instances (or whatever
configuration path you have in nova.conf) is not shared. Nova does not
see this shared path and
I assume you added more clients and checked that it didn't scale past
that?
Yes, correct.
You might look through the list archives; there are a number of
discussions about how and how far you can scale SSD-backed cluster
performance.
I have looked at those discussions before, particularly the one
Hi,
With 0.86, the following options, together with disabling debugging, can
bring an obvious improvement.
osd enable op tracker = false
I think this one has been optimized by Somnath
https://github.com/ceph/ceph/commit/184773d67aed7470d167c954e786ea57ab0ce74b
----- Mail original -----
De: Mark Wu
Decreasing the rbd debug level from 5 to 0 disables almost all of the
debugging and logging. It doesn't help.
ceph tell osd.* injectargs '--debug_rbd 0\/0'
ceph tell osd.* injectargs '--debug_objectcacher 0\/0'
ceph tell osd.* injectargs '--debug_rbd_replay 0\/0'
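The op-tracker and debug settings discussed above can also be made persistent in ceph.conf instead of injected at runtime; a sketch of the corresponding fragment (section placement follows the usual conventions: rbd debug levels are client-side, the op tracker is an OSD option):

```ini
[global]
debug rbd = 0/0              ; silence rbd debugging/logging
debug objectcacher = 0/0
debug rbd_replay = 0/0

[osd]
osd enable op tracker = false   ; disable per-op tracking overhead (0.86+)
```

Note that injectargs only changes the running daemons; a restart reverts to whatever ceph.conf says, so keeping the two in sync avoids confusing benchmark results.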
2014-10-17 8:45 GMT+08:00
The client doesn't hit any bottleneck. I also tried running multiple
clients on different hosts. There's no change.
2014-10-17 14:36 GMT+08:00 Alexandre DERUMIER aderum...@odiso.com:
Hi,
Thanks for the detailed information, but I am already using fio with the rbd
engine. Almost 4 volumes can reach
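For reference, a minimal fio job file for the rbd engine might look like the following; the pool, image, and client names are placeholders, and the image must already exist:

```ini
; hypothetical 4k random-write job against an existing RBD image
[global]
ioengine=rbd
clientname=admin      ; cephx user, i.e. client.admin
pool=rbd              ; placeholder pool name
rbdname=fio-test      ; placeholder image name
invalidate=0
direct=1
bs=4k
rw=randwrite
iodepth=32
runtime=60
time_based

[rbd-randwrite-4k]
```

Running several such jobs against different images approximates the multi-volume case being discussed.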
At least historically, high CPU usage, and likely context switching and
lock contention, have been the limiting factors during high-IOPS workloads
on the test hardware at Inktank (and now RH). I ran benchmarks with a
parametric sweep of Ceph parameters a while back on SSDs to see if
changing any
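A parametric sweep of that sort can be scripted; a rough sketch, assuming a disposable test cluster and a job file named rbd-randwrite.fio (the option names exist in Giant-era releases, but the values swept here are illustrative, not a recommendation):

```shell
#!/bin/sh
# Sweep two OSD tunables and record fio results for each combination.
# Only run this against a test cluster.
for shards in 1 2 4 8; do
  for threads in 1 2 4; do
    ceph tell osd.\* injectargs \
      "--osd_op_num_shards $shards --osd_op_num_threads_per_shard $threads"
    fio rbd-randwrite.fio --output-format=json \
      --output "sweep-shards${shards}-threads${threads}.json"
  done
done
```

The JSON outputs can then be compared offline to spot which combination moved IOPS.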
Sure Mark, I saw that thread last night. It will be interesting to see the
resolution.
Thanks,
Lakshmi.
On Friday, October 17, 2014 12:21 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Keep an eye on the new thread about OSD (and probably other) settings not
being picked up outside of
I haven't used the libvirt pools much. To me, they are fairly
confusing, as support for them seems to vary based on what you are doing.
On 10/16/2014 2:45 PM, Dan Geist wrote:
Thanks, Brian. That helps a lot. I suspect that wasn't needed if the MON hosts
were defined within ceph.conf, but