Refer to my previous post for data you can gather that will help
narrow this down.
On Mon, Feb 20, 2017 at 6:36 PM, Jay Linux wrote:
Hello John,
Created a tracker for this issue; see:
http://tracker.ceph.com/issues/18994
Thanks
On Fri, Feb 17, 2017 at 6:15 PM, John Spray wrote:
On Fri, Feb 17, 2017 at 6:27 AM, Muthusamy Muthiah wrote:
> On one of our platforms, mgr uses 3 CPU cores. Is there a ticket available
> for this issue?
Not that I'm aware of; you could go ahead and open one.
Cheers,
John
On one of our platforms, mgr uses 3 CPU cores. Is there a ticket available for
this issue?
Thanks,
Muthu
On 14 February 2017 at 03:13, Brad Hubbard wrote:
Could one of the reporters open a tracker for this issue and attach
the requested debugging data?
On Mon, Feb 13, 2017 at 11:18 PM, Donny Davis wrote:
I am having the same issue. When I looked at my idle cluster this morning,
one of the nodes had 400% cpu utilization, and ceph-mgr was 300% of that.
I have 3 AIO nodes, and only one of them seemed to be affected.
On Sat, Jan 14, 2017 at 12:18 AM, Brad Hubbard wrote:
Want to install debuginfo packages and use something like this to try
and find out where it is spending most of its time?
https://poormansprofiler.org/
Note that you may need to do multiple runs to get a "feel" for where
it is spending most of its time. Also note that likely only one or two
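For readers who don't want to follow the link: the poor man's profiler boils down to repeatedly dumping all-thread backtraces with gdb and counting the most common stacks. A minimal sketch (the script name, PID argument, sample count, and sleep interval are all placeholders, and debuginfo packages must be installed for readable symbols):

```shell
#!/bin/sh
# pmp.sh (hypothetical name): sample all-thread backtraces of a process
# with gdb, then count the most frequent stacks.
# Usage: pmp.sh <pid> [samples]
pid=$1
samples=${2:-10}
for i in $(seq 1 "$samples"); do
    gdb -batch -ex 'set pagination 0' -ex 'thread apply all bt' -p "$pid" 2>/dev/null
    sleep 1
done |
awk '/^Thread/ { if (s != "") print s; s = "" }      # new thread: flush stack
     /^#/      { s = (s == "" ? $4 : s "," $4) }     # frame line: append function name
     END       { if (s != "") print s }' |
sort | uniq -c | sort -rn | head -20
```

The stacks that dominate the sorted output are where the daemon spends most of its time; as noted above, run it several times to get a stable picture.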
FYI, I'm seeing this as well on the latest Kraken 11.1.1 RPMs on CentOS 7
w/ elrepo kernel 4.8.10. ceph-mgr is currently tearing through CPU and has
allocated ~11GB of RAM after a single day of usage. Only the active manager
is performing this way. The growth is linear and reproducible.
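One low-effort way to confirm that kind of linear growth is to log the daemon's resident set size over time. A sketch (the sampling interval and log path are arbitrary choices, not from the thread):

```shell
#!/bin/sh
# Log timestamp, RSS (KiB) and %CPU of ceph-mgr once a minute; linear
# memory growth then shows up as a steadily increasing second column.
while sleep 60; do
    ps -o rss=,pcpu= -p "$(pidof ceph-mgr)" |
        awk -v ts="$(date +%s)" '{ printf "%s %d KiB %.1f%%\n", ts, $1, $2 }'
done >> /tmp/ceph-mgr-usage.log
```

Plotting or eyeballing the log over a day should make the "linear and reproducible" growth easy to demonstrate when attaching data to a tracker.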
On Tue, Jan 10, 2017 at 12:59 PM, Samuel Just wrote:
Yep. You can see there is one here: https://github.com/ceph/ceph/releases
Specifically:
Mm, maybe the tag didn't get pushed. Alfredo, is there supposed to be
a v11.1.1 tag?
-Sam
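Whether a tag actually made it to the public remote can be checked without cloning, using git plumbing (the repo URL and tag name are taken from the thread):

```shell
# List the v11.1.1 tag on the public ceph remote.
# Empty output would mean the tag was never pushed.
git ls-remote --tags https://github.com/ceph/ceph.git refs/tags/v11.1.1
```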
On Tue, Jan 10, 2017 at 9:57 AM, Stillwell, Bryan J wrote:
That's strange, I installed that version using packages from here:
http://download.ceph.com/debian-kraken/pool/main/c/ceph/
Bryan
On 1/10/17, 10:51 AM, "Samuel Just" wrote:
Can you push that branch somewhere? I don't have a v11.1.1 or that sha1.
-Sam
On Tue, Jan 10, 2017 at 9:41 AM, Stillwell, Bryan J wrote:
This is from:
ceph version 11.1.1 (87597971b371d7f497d7eabad3545d72d18dd755)
On 1/10/17, 10:23 AM, "Samuel Just" wrote:
What ceph sha1 is that? Does it include
6c3d015c6854a12cda40673848813d968ff6afae which fixed the messenger
spin?
-Sam
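Whether a release contains a given commit can be answered mechanically; a sketch assuming a local clone of the ceph repository (the sha and tag are the ones from the thread):

```shell
# In a clone of https://github.com/ceph/ceph.git:
# exit status 0 means the fix is an ancestor of (i.e. included in) v11.1.1.
git merge-base --is-ancestor 6c3d015c6854a12cda40673848813d968ff6afae v11.1.1 \
    && echo "messenger fix is in v11.1.1" \
    || echo "messenger fix is NOT in v11.1.1"

# Alternatively, list every tag that contains the commit:
git tag --contains 6c3d015c6854a12cda40673848813d968ff6afae
```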
On Tue, Jan 10, 2017 at 9:00 AM, Stillwell, Bryan J wrote:
On 1/10/17, 5:35 AM, "John Spray" wrote:
On Mon, Jan 9, 2017 at 11:46 PM, Stillwell, Bryan J wrote:
> Last week I decided to play around with Kraken (11.1.1-1xenial) on a
> single node, two OSD cluster, and after a while I noticed that the new
> ceph-mgr daemon is frequently using a lot of the CPU:
>
> 17519