It just came to my attention that Intel has advised Red Hat never to
lock the processor in C0, as it may affect the life expectancy of server
components such as fans and the CPUs themselves.
FYI, YMMV.
On Fri, May 19, 2017 at 5:53 PM, xiaoguang fan wrote:
> I have done a test about
Sounds good, but we could also have a config option to set it before dropping
root?
On 4 May 2017 20:28, "Brad Hubbard" wrote:
On Thu, May 4, 2017 at 10:58 AM, Haomai Wang wrote:
> refer to https://github.com/ceph/ceph/pull/5013
How about we issue a warning about possible performance implications
if we detect this is not set to 1 *or* 0 at startup?
>
> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard
refer to https://github.com/ceph/ceph/pull/5013
On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard wrote:
> +ceph-devel to get input on whether we want/need to check the value of
> /dev/cpu_dma_latency (platform dependent) at startup and issue a
> warning, or whether documenting
+ceph-devel to get input on whether we want/need to check the value of
/dev/cpu_dma_latency (platform dependent) at startup and issue a
warning, or whether documenting this would suffice?
Any doc contribution would be welcomed.
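A startup check along the lines Brad proposes could be sketched roughly as
follows (an illustrative Python sketch, not Ceph code; the function names and
warning text are invented for this example):

```python
import struct

def parse_cpu_dma_latency(raw: bytes) -> int:
    """Decode the kernel's reply from /dev/cpu_dma_latency.

    Reading the device returns the currently enforced PM QoS CPU/DMA
    latency target in microseconds, as a host-endian signed 32-bit int.
    """
    (latency_us,) = struct.unpack("i", raw[:4])
    return latency_us

def warn_if_deep_cstates(path: str = "/dev/cpu_dma_latency") -> None:
    """Startup check: warn when the latency target allows deep C-states."""
    try:
        with open(path, "rb") as f:
            latency_us = parse_cpu_dma_latency(f.read(4))
    except OSError:
        return  # platform without the PM QoS device; nothing to check
    if latency_us > 1:
        print(f"warning: cpu_dma_latency is {latency_us} us; "
              "deep C-states may hurt request latency")
```

The check is read-only, so it needs no special privileges and imposes no
constraint of its own.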
On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
On 3 May 2017 at 19:07, Dan van der Ster wrote:
> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
> your 30% boost was when going from throughput-performance to
> dma_latency=0, right? I'm trying to understand what is the incremental
> improvement from 1
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Blair Bethwaite
> Sent: 03 May 2017 09:53
> To: Dan van der Ster <d...@vanderster.com>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Intel po
On Wed, May 3, 2017 at 10:52 AM, Blair Bethwaite wrote:
> On 3 May 2017 at 18:38, Dan van der Ster wrote:
>> Seems to work for me, or?
>
> Yeah now that I read the code more I see it is opening and
> manipulating /dev/cpu_dma_latency in response to
On 3 May 2017 at 18:38, Dan van der Ster wrote:
> Seems to work for me, or?
Yeah now that I read the code more I see it is opening and
manipulating /dev/cpu_dma_latency in response to that option, so the
TODO comment seems to be outdated. I verified tuned
latency-performance
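For background, what tuned's plugin_cpu.py does with force_latency can be
sketched roughly like this (a simplified illustration of the kernel PM QoS
interface, not tuned's actual code; the function name is invented):

```python
import os
import struct

def request_cpu_dma_latency(max_latency_us: int) -> int:
    """Ask the PM QoS layer to cap CPU/DMA wakeup latency.

    The kernel honours the request only while the file descriptor stays
    open, so the caller must hold the returned fd for the lifetime of
    the constraint; closing it releases the request. Opening the device
    requires privileges, which is also why it can make sense to open it
    before dropping root.
    """
    fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
    # The device expects the target as a raw host-endian 32-bit integer.
    os.write(fd, struct.pack("i", max_latency_us))
    return fd

# e.g. fd = request_cpu_dma_latency(1)  # 1 us target; keep fd open
```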
On Wed, May 3, 2017 at 10:32 AM, Blair Bethwaite wrote:
> On 3 May 2017 at 18:15, Dan van der Ster wrote:
>> It looks like el7's tuned natively supports the pmqos interface in
>> plugins/plugin_cpu.py.
>
> Ahha, you are right, but I'm sure I tested
On 3 May 2017 at 18:15, Dan van der Ster wrote:
> It looks like el7's tuned natively supports the pmqos interface in
> plugins/plugin_cpu.py.
Ahha, you are right, but I'm sure I tested tuned and it did not help.
Thanks for pointing out this script, I had not noticed it
Hi Dan,
On 3 May 2017 at 17:43, Dan van der Ster wrote:
> We use cpu_dma_latency=1, because it was in the latency-performance profile.
> And indeed by setting cpu_dma_latency=0 on one of our OSD servers,
> powertop now shows the package as 100% in turbo mode.
I tried both 0
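For reference, the cpu_dma_latency=1 Dan mentions comes from tuned's
force_latency knob; a minimal custom profile pinning it might look something
like this (the profile name/path is hypothetical; the value mirrors what the
el7 latency-performance profile shipped with, so check your own tuned
version):

```ini
# /etc/tuned/osd-latency/tuned.conf  (hypothetical profile name)
[main]
include=latency-performance

[cpu]
# tuned's cpu plugin opens /dev/cpu_dma_latency, writes this value
# (microseconds) and holds the fd open while the profile is active
force_latency=1
```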
On Wed, May 3, 2017 at 9:13 AM, Blair Bethwaite wrote:
> We did the latter using the pmqos_static.py, which was previously part of
> the RHEL6 tuned latency-performance profile, but seems to have been dropped
> in RHEL7 (don't yet know why),
It looks like el7's tuned
One of the things I've noticed in the latest (3+ years) batch of CPUs
is that they increasingly ignore the CPU frequency scaling drivers and
do what they want. More than that, interfaces like /proc/cpuinfo are
completely incorrect.
I keep checking the real frequencies using applications like i7z, and it
Hi all,
We recently noticed that, despite having BIOS power profiles set to
performance on our RHEL7 Dell R720 Ceph OSD nodes, CPU frequencies
never seemed to get into the top of the range, and the cores in fact spent a
lot of time in low C-states despite that BIOS option supposedly disabling