It just came to my attention that Intel has advised Red Hat never to
lock in C0 as it may affect the life expectancy of server components
such as fans and the CPUs themselves.
FYI, YMMV.
On Fri, May 19, 2017 at 5:53 PM, xiaoguang fan wrote:
> I have done a test about closing C-states, but performance…
Sounds good, but could also have a config option to set it before dropping
root?
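For reference, the kernel only honours a PM QoS request while the file descriptor stays open, so the natural order is: open the device while still root, write the target latency, then drop privileges. A rough sketch in plain Python (not Ceph code; the uid/gid names in the comment are made up):

```python
import struct

def pack_latency_us(us):
    # /dev/cpu_dma_latency takes a native 32-bit integer, in microseconds
    return struct.pack("i", us)

def hold_cpu_dma_latency(us):
    """Write the requested latency and return the open file object.

    The kernel drops the QoS request as soon as this fd is closed, so
    the daemon must keep the object alive for its whole lifetime.
    """
    f = open("/dev/cpu_dma_latency", "wb", buffering=0)
    f.write(pack_latency_us(us))
    return f

# usage sketch (must run as root; keep the fd for the daemon lifetime):
#   qos_fd = hold_cpu_dma_latency(0)           # while still root
#   os.setgid(CEPH_GID); os.setuid(CEPH_UID)   # then drop privileges
```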
On 4 May 2017 20:28, "Brad Hubbard" wrote:
On Thu, May 4, 2017 at 10:58 AM, Haomai Wang wrote:
> refer to https://github.com/ceph/ceph/pull/5013
How about we issue a warning about possible performance implications
if we detect this is not set to 1 *or* 0 at startup?
>
> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard wrote:
>> +ceph-devel
refer to https://github.com/ceph/ceph/pull/5013
On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard wrote:
> +ceph-devel to get input on whether we want/need to check the value of
> /dev/cpu_dma_latency (platform dependent) at startup and issue a
> warning, or whether documenting this would suffice?
+ceph-devel to get input on whether we want/need to check the value of
/dev/cpu_dma_latency (platform dependant) at startup and issue a
warning, or whether documenting this would suffice?
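A sketch of what such a startup check might look like, in plain Python rather than OSD code, and purely illustrative:

```python
import struct

def read_cpu_dma_latency(path="/dev/cpu_dma_latency"):
    # reading the device returns the current global bound as a
    # native 32-bit integer, in microseconds
    with open(path, "rb") as f:
        (value,) = struct.unpack("i", f.read(4))
    return value

def cpu_dma_latency_warning(value):
    """Warn unless the bound is 0 or 1 us, i.e. deep C-states are blocked."""
    if value in (0, 1):
        return None
    return ("/dev/cpu_dma_latency is %d us: deep C-states are reachable "
            "and may cost OSD latency" % value)
```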
Any doc contribution would be welcomed.
On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite wrote:
On 3 May 2017 at 19:07, Dan van der Ster wrote:
> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
> your 30% boost was when going from throughput-performance to
> dma_latency=0, right? I'm trying to understand what is the incremental
> improvement from 1 to 0.
Probably minimal…
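Whether 1 buys anything over 0 comes down to the exit latencies the CPU advertises for its idle states: the idle governor only enters states whose exit latency fits under the QoS bound. A toy illustration in plain Python (the latency numbers below are made up; real ones come from /sys/devices/system/cpu/cpu0/cpuidle/state*/latency and vary per CPU model):

```python
def allowed_cstates(qos_us, exit_latency_us):
    """C-states the idle governor may still use under a PM QoS bound.

    exit_latency_us maps state name -> exit latency in microseconds, as
    read from /sys/devices/system/cpu/cpu0/cpuidle/stateN/latency.
    """
    return {s for s, lat in exit_latency_us.items() if lat <= qos_us}

# made-up but plausible exit latencies for illustration only
example = {"POLL": 0, "C1": 2, "C1E": 10, "C6": 133}
```

With these example numbers a bound of 1 us behaves exactly like 0 (C1's 2 us exit latency already exceeds it), which would explain seeing no incremental difference; on a CPU advertising C1 at 1 us the two settings would diverge.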
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Blair Bethwaite
> Sent: 03 May 2017 09:53
> To: Dan van der Ster
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Intel power tuning - 30% throughput performance…
On 3 May 2017 at 18:38, Dan van der Ster wrote:
> Seems to work for me, or?
Yeah now that I read the code more I see it is opening and
manipulating /dev/cpu_dma_latency in response to that option, so the
TODO comment seems to be outdated. I verified tuned
latency-performance _is_ doing this properly…
On 3 May 2017 at 18:15, Dan van der Ster wrote:
> It looks like el7's tuned natively supports the pmqos interface in
> plugins/plugin_cpu.py.
Ahha, you are right, but I'm sure I tested tuned and it did not help.
Thanks for pointing out this script, I had not noticed it before and I
can see now why…
Hi Dan,
On 3 May 2017 at 17:43, Dan van der Ster wrote:
> We use cpu_dma_latency=1, because it was in the latency-performance profile.
> And indeed by setting cpu_dma_latency=0 on one of our OSD servers,
> powertop now shows the package as 100% in turbo mode.
I tried both 0 and 1 and didn't notice…
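For anyone wanting to test 0 without editing the shipped latency-performance profile, tuned supports child profiles, and its cpu plugin exposes this setting as force_latency. Something like the following should do it (the profile name and path are just examples):

```ini
# /etc/tuned/ceph-latency/tuned.conf -- hypothetical custom profile
[main]
include=latency-performance

[cpu]
force_latency=0
```

Then activate it with `tuned-adm profile ceph-latency`.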
On Wed, May 3, 2017 at 9:13 AM, Blair Bethwaite wrote:
> We did the latter using the pmqos_static.py, which was previously part of
> the RHEL6 tuned latency-performance profile, but seems to have been dropped
> in RHEL7 (don't yet know why)…
It looks like el7's tuned natively supports the pmqos interface in
plugins/plugin_cpu.py.
On 3 May 2017 at 17:24, Wido den Hollander wrote:
> Is this a HDD or SSD cluster? I assume the latter? Since usually HDDs are
> 100% busy during heavy recovery.
HDD with SSD journals. Our experience at this scale, ~900 OSDs over 33
hosts, is that it takes a fair percentage of PGs to be involved…
One of the things I've noticed in the latest (3+ years) batch of CPUs
is that they increasingly ignore the CPU frequency-scaling drivers and
do what they want. More than that, interfaces like /proc/cpuinfo are
completely incorrect.
I keep checking the real frequencies using applications like
"i7z", and it shows…
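As a quick way to see that discrepancy without extra tools, the per-core "cpu MHz" fields can be pulled straight out of /proc/cpuinfo and eyeballed against what i7z reports from the MSRs; a rough sketch:

```python
def cpu_mhz(cpuinfo_text):
    """Per-core 'cpu MHz' values from /proc/cpuinfo contents.

    As noted above, on recent CPUs these fields can lag or misreport
    the real frequency; MSR-based tools such as i7z are more
    trustworthy for checking actual turbo behaviour.
    """
    mhz = []
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("cpu mhz"):
            mhz.append(float(line.split(":", 1)[1]))
    return mhz

# usage: cpu_mhz(open("/proc/cpuinfo").read())
```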
Hi Blair,
We use cpu_dma_latency=1, because it was in the latency-performance profile.
And indeed by setting cpu_dma_latency=0 on one of our OSD servers,
powertop now shows the package as 100% in turbo mode.
So I suppose we'll pay for this performance boost in energy.
But more importantly, can th…
> On 3 May 2017 at 9:13, Blair Bethwaite wrote:
>
>
> Hi all,
>
> We recently noticed that despite having BIOS power profiles set to
> performance on our RHEL7 Dell R720 Ceph OSD nodes, that CPU frequencies
> never seemed to be getting into the top of the range, and in fact spent a
> lot of time in low C-states despite that BIOS option supposedly
> disabling C-states.
Hi all,
We recently noticed that despite having BIOS power profiles set to
performance on our RHEL7 Dell R720 Ceph OSD nodes, CPU frequencies
never seemed to be getting into the top of the range, and in fact spent a
lot of time in low C-states despite that BIOS option supposedly disabling
C-states.
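One way to quantify how much time the cores actually spend in those states is to read the cumulative cpuidle residency counters from sysfs. A rough sketch in plain Python (the helper and sample numbers are illustrative, not from our nodes):

```python
import glob

def cstate_residency(samples):
    """Fraction of total recorded idle time spent in each C-state.

    samples maps state name -> cumulative microseconds, as read from
    /sys/devices/system/cpu/cpuN/cpuidle/state*/time and .../name.
    """
    total = sum(samples.values())
    if total == 0:
        return {name: 0.0 for name in samples}
    return {name: t / total for name, t in samples.items()}

def read_cpu0_cstates():
    # gather the sysfs counters for cpu0 (run on the OSD node itself)
    out = {}
    for d in glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*"):
        with open(d + "/name") as f:
            name = f.read().strip()
        with open(d + "/time") as f:
            out[name] = int(f.read())
    return out
```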