> > - if (cpuidle_not_available(drv, dev)) {
> > + if (cpuidle_not_available(drv, dev) || this_is_a_fast_idle) {
> > default_idle_call();
> > goto exit_idle;
> > }
>
> No, that's wrong. We want to fix the normal C state selection process to
> pick the right C state.
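Peter's point about fixing the selection itself, rather than bypassing it with a fast-idle flag, can be illustrated with a toy model. The state table and thresholds below are hypothetical (not the real menu governor or real hardware numbers), but they show the shape of "normal C state selection": pick the deepest state whose target residency fits the predicted idle period and whose exit latency fits the latency constraint.

```c
#include <stddef.h>

/* Toy model of "normal C state selection": deepest state whose
 * target residency fits the predicted idle time and whose exit
 * latency fits the latency constraint. The state table and numbers
 * are hypothetical, not real hardware values or the menu governor. */
struct cstate {
    const char *name;
    unsigned int target_residency_us;
    unsigned int exit_latency_us;
};

static const struct cstate states[] = {
    { "C1",    2,  2 },
    { "C1E",  20, 10 },
    { "C3",  100, 33 },
    { "C6",  400, 85 },
};

/* Returns the index of the chosen state (0 = shallowest). */
static int select_cstate(unsigned int predicted_idle_us,
                         unsigned int latency_req_us)
{
    size_t i;
    int pick = 0;

    for (i = 0; i < sizeof(states) / sizeof(states[0]); i++) {
        if (states[i].target_residency_us > predicted_idle_us ||
            states[i].exit_latency_us > latency_req_us)
            break;
        pick = (int)i;
    }
    return pick;
}
```

A `this_is_a_fast_idle` short-circuit ahead of this loop caps the pick at the shallowest state regardless of the prediction, which is exactly the "cut off a whole bunch of available C states" objection.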
On 7/14/2017 8:38 AM, Peter Zijlstra wrote:
No, that's wrong. We want to fix the normal C state selection process to
pick the right C state.
The fast-idle criteria could cut off a whole bunch of available C
states. We need to understand why our current C state pick is wrong and
amend the
On Fri, Jul 14, 2017 at 11:56:33AM +0800, Li, Aubrey wrote:
> On 2017/7/14 2:28, Peter Zijlstra wrote:
> > On Thu, Jul 13, 2017 at 11:13:28PM +0800, Li, Aubrey wrote:
> >> On 2017/7/13 22:53, Peter Zijlstra wrote:
> >
> >>> Fixing C-state selection by creating an alternative idle path sounds so
>
On Fri, Jul 14, 2017 at 11:47:32AM +0800, Li, Aubrey wrote:
> On 2017/7/13 23:20, Paul E. McKenney wrote:
> > On Thu, Jul 13, 2017 at 04:53:11PM +0200, Peter Zijlstra wrote:
> >> On Thu, Jul 13, 2017 at 10:48:55PM +0800, Li, Aubrey wrote:
> >>
> >>> - totally from arch_cpu_idle_enter entry to
On 2017/7/14 2:28, Peter Zijlstra wrote:
> On Thu, Jul 13, 2017 at 11:13:28PM +0800, Li, Aubrey wrote:
>> On 2017/7/13 22:53, Peter Zijlstra wrote:
>
>>> Fixing C-state selection by creating an alternative idle path sounds so
>>> very wrong.
>>
>> This only happens on the arch which has multiple
On 2017/7/13 23:20, Paul E. McKenney wrote:
> On Thu, Jul 13, 2017 at 04:53:11PM +0200, Peter Zijlstra wrote:
>> On Thu, Jul 13, 2017 at 10:48:55PM +0800, Li, Aubrey wrote:
>>
>>> - totally from arch_cpu_idle_enter entry to arch_cpu_idle_exit return costs
>>> 9122ns - 15318ns.
>>> In this
On Thu, Jul 13, 2017 at 11:13:28PM +0800, Li, Aubrey wrote:
> On 2017/7/13 22:53, Peter Zijlstra wrote:
> > Fixing C-state selection by creating an alternative idle path sounds so
> > very wrong.
>
> This only happens on the arch which has multiple hardware idle cstates, like
> Intel's
On Thu, Jul 13, 2017 at 04:53:11PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 13, 2017 at 10:48:55PM +0800, Li, Aubrey wrote:
>
> > - totally from arch_cpu_idle_enter entry to arch_cpu_idle_exit return costs
> > 9122ns - 15318ns.
> > In this period(arch idle), rcu_idle_enter costs 1985ns -
On 2017/7/13 22:53, Peter Zijlstra wrote:
> On Thu, Jul 13, 2017 at 10:48:55PM +0800, Li, Aubrey wrote:
>
>> - totally from arch_cpu_idle_enter entry to arch_cpu_idle_exit return costs
>> 9122ns - 15318ns.
>> In this period(arch idle), rcu_idle_enter costs 1985ns - 2262ns,
>>
On Thu, Jul 13, 2017 at 10:48:55PM +0800, Li, Aubrey wrote:
> - totally from arch_cpu_idle_enter entry to arch_cpu_idle_exit return costs
> 9122ns - 15318ns.
> In this period(arch idle), rcu_idle_enter costs 1985ns - 2262ns,
> rcu_idle_exit
> costs 1813ns - 3507ns
>
> Besides RCU,
On 2017/7/13 16:36, Peter Zijlstra wrote:
> On Wed, Jul 12, 2017 at 02:32:40PM -0700, Andi Kleen wrote:
>
>>> It uses the normal idle path, it just makes the NOHZ enter fail.
>>
>> Which is only a small part of the problem.
>
> Given the data so far provided it was by far the biggest problem. If
On Wed, Jul 12, 2017 at 02:32:40PM -0700, Andi Kleen wrote:
> On Wed, Jul 12, 2017 at 10:34:10AM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 12, 2017 at 12:15:08PM +0800, Li, Aubrey wrote:
> > > Okay, the difference is that Mike's patch uses a very simple algorithm to
> > > make the decision.
>
On Wed, Jul 12, 2017 at 11:46:17AM -0700, Paul E. McKenney wrote:
> So please let me know if rcu_needs_cpu() or rcu_prepare_for_idle() are
> prominent contributors to to-idle latency.
Right, some actual data would be good.
On Wed, Jul 12, 2017 at 10:34:10AM +0200, Peter Zijlstra wrote:
> On Wed, Jul 12, 2017 at 12:15:08PM +0800, Li, Aubrey wrote:
> > Okay, the difference is that Mike's patch uses a very simple algorithm to
> > make the decision.
>
> No, the difference is that we don't end up with duplication of a
On Wed, Jul 12, 2017 at 11:53:06AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 12, 2017 at 07:46:42PM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 12, 2017 at 08:56:51AM -0700, Paul E. McKenney wrote:
> > > Very good, I have queued the patch below. I left out the removal of
> > > the export as I
On Wed, Jul 12, 2017 at 07:46:42PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 12, 2017 at 08:56:51AM -0700, Paul E. McKenney wrote:
> > Very good, I have queued the patch below. I left out the removal of
> > the export as I need to work out why the export was there. If it turns
> > out not to be
On Wed, Jul 12, 2017 at 07:57:32PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 12, 2017 at 07:17:56PM +0200, Peter Zijlstra wrote:
> > Could be I'm just not remembering how all that works.. But I was
> > wondering if we can do the expensive bits if we've decided to actually
> > go NOHZ and avoid
On Wed, Jul 12, 2017 at 07:17:56PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 12, 2017 at 08:54:58AM -0700, Paul E. McKenney wrote:
> > On Wed, Jul 12, 2017 at 02:22:49PM +0200, Peter Zijlstra wrote:
> > > On Tue, Jul 11, 2017 at 11:09:31AM -0700, Paul E. McKenney wrote:
> > > > On Tue, Jul 11,
On Tue, Jul 11, 2017 at 06:09:27PM +0200, Frederic Weisbecker wrote:
> So I'd rather put that on can_stop_idle_tick().
That function needs a fix.. That's not in fact an identity (although it
turns out it is for the 4 default HZ values).
diff --git a/kernel/time/tick-sched.c
On Wed, Jul 12, 2017 at 07:17:56PM +0200, Peter Zijlstra wrote:
> Could be I'm just not remembering how all that works.. But I was
> wondering if we can do the expensive bits if we've decided to actually
> go NOHZ and avoid doing it on every idle entry.
>
> IIRC the RCU fast NOHZ bits try and
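As far as it can be reconstructed from this truncated excerpt, the suggestion is to pay the expensive RCU idle preparation only once the tick-stop decision has been made, rather than on every idle entry. A minimal sketch of that structure, with placeholder names and an arbitrary threshold (not the real kernel flow):

```c
#include <stdbool.h>

/* Sketch: do the expensive RCU idle prep only when we actually stop
 * the tick, instead of on every idle entry. All names and the
 * threshold are placeholders, not the real kernel code paths. */

static int rcu_prep_calls; /* instrumentation for the sketch */

static bool decide_stop_tick(long long predicted_idle_ns)
{
    return predicted_idle_ns > 50000; /* arbitrary break-even guess */
}

static void expensive_rcu_prep(void)
{
    rcu_prep_calls++; /* stands in for rcu_prepare_for_idle() work */
}

static void idle_entry(long long predicted_idle_ns)
{
    if (decide_stop_tick(predicted_idle_ns)) {
        expensive_rcu_prep(); /* paid only on the NOHZ path */
        /* ...stop tick, reprogram timer... */
    }
    /* ...enter the selected idle state either way... */
}
```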
On Wed, Jul 12, 2017 at 08:56:51AM -0700, Paul E. McKenney wrote:
> Very good, I have queued the patch below. I left out the removal of
> the export as I need to work out why the export was there. If it turns
> out not to be needed, I will remove the related ones as well.
'git grep
On Wed, Jul 12, 2017 at 08:54:58AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 12, 2017 at 02:22:49PM +0200, Peter Zijlstra wrote:
> > On Tue, Jul 11, 2017 at 11:09:31AM -0700, Paul E. McKenney wrote:
> > > On Tue, Jul 11, 2017 at 06:34:22PM +0200, Peter Zijlstra wrote:
> > > > Also,
On Wed, Jul 12, 2017 at 01:54:51PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 11, 2017 at 11:09:31AM -0700, Paul E. McKenney wrote:
> > On Tue, Jul 11, 2017 at 06:34:22PM +0200, Peter Zijlstra wrote:
>
> > > But I think we can at the very least do this; it only gets called from
> > >
On Wed, Jul 12, 2017 at 02:22:49PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 11, 2017 at 11:09:31AM -0700, Paul E. McKenney wrote:
> > On Tue, Jul 11, 2017 at 06:34:22PM +0200, Peter Zijlstra wrote:
> > > Also, RCU_FAST_NO_HZ will make a fairly large difference here.. Paul
> > > what's the state
On Tue, Jul 11, 2017 at 11:09:31AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 11, 2017 at 06:34:22PM +0200, Peter Zijlstra wrote:
> > Also, RCU_FAST_NO_HZ will make a fairly large difference here.. Paul
> > what's the state of that thing, do we actually want that or not?
>
> If you are battery
On Wed, Jul 12, 2017 at 12:15:08PM +0800, Li, Aubrey wrote:
> While my proposal is trying to leverage the prediction functionality
> of the existing idle menu governor, which works very well for a long
> time.
Oh, so you've missed the emails where people say it's shit? ;-)
Look for the emails of
On Tue, Jul 11, 2017 at 11:09:31AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 11, 2017 at 06:34:22PM +0200, Peter Zijlstra wrote:
> > But I think we can at the very least do this; it only gets called from
> > kernel/sched/idle.c and both callsites have IRQs explicitly disabled by
> > that
On Wed, Jul 12, 2017 at 12:15:08PM +0800, Li, Aubrey wrote:
> Okay, the difference is that Mike's patch uses a very simple algorithm to
> make the decision.
No, the difference is that we don't end up with duplication of a metric
ton of code.
It uses the normal idle path, it just makes the NOHZ
On 2017/7/12 0:34, Peter Zijlstra wrote:
> On Tue, Jul 11, 2017 at 06:09:27PM +0200, Frederic Weisbecker wrote:
>
- tick_nohz_idle_enter costs 7058ns - 10726ns
- tick_nohz_idle_exit costs 8372ns - 20850ns
>>>
>>> Right, those are horrible expensive, but skipping them isn't 'hard', the
On 2017/7/12 0:09, Frederic Weisbecker wrote:
> On Tue, Jul 11, 2017 at 11:41:57AM +0200, Peter Zijlstra wrote:
>>
>>> - totally from arch_cpu_idle_enter entry to arch_cpu_idle_exit return costs
>>> 9122ns - 15318ns.
>>> --In this period, rcu_idle_enter costs 1985ns - 2262ns, rcu_idle_exit
On 2017/7/12 1:58, Christoph Lameter wrote:
> On Tue, 11 Jul 2017, Frederic Weisbecker wrote:
>
>>> --- a/kernel/time/tick-sched.c
>>> +++ b/kernel/time/tick-sched.c
>>> @@ -787,6 +787,7 @@ static ktime_t tick_nohz_stop_sched_tick(struct
>>> tick_sched *ts,
>>> if (!ts->tick_stopped) {
>>>
On Tue, Jul 11, 2017 at 06:34:22PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 11, 2017 at 06:09:27PM +0200, Frederic Weisbecker wrote:
>
> > > > - tick_nohz_idle_enter costs 7058ns - 10726ns
> > > > - tick_nohz_idle_exit costs 8372ns - 20850ns
> > >
> > > Right, those are horrible expensive, but
On Tue, 11 Jul 2017, Frederic Weisbecker wrote:
> > --- a/kernel/time/tick-sched.c
> > +++ b/kernel/time/tick-sched.c
> > @@ -787,6 +787,7 @@ static ktime_t tick_nohz_stop_sched_tick(struct
> > tick_sched *ts,
> > if (!ts->tick_stopped) {
> > calc_load_nohz_start();
> >
On Tue, Jul 11, 2017 at 06:09:27PM +0200, Frederic Weisbecker wrote:
> > > - tick_nohz_idle_enter costs 7058ns - 10726ns
> > > - tick_nohz_idle_exit costs 8372ns - 20850ns
> >
> > Right, those are horrible expensive, but skipping them isn't 'hard', the
> > only tricky bit is finding a condition
On Tue, Jul 11, 2017 at 11:41:57AM +0200, Peter Zijlstra wrote:
> On Tue, Jul 11, 2017 at 12:40:06PM +0800, Li, Aubrey wrote:
> > > On Mon, Jul 10, 2017 at 06:42:06PM +0200, Peter Zijlstra wrote:
>
> > >> Data to indicate what hurts how much would be a very good addition to
> > >> the Changelogs.
On Tue, Jul 11, 2017 at 12:40:06PM +0800, Li, Aubrey wrote:
> > On Mon, Jul 10, 2017 at 06:42:06PM +0200, Peter Zijlstra wrote:
> >> Data to indicate what hurts how much would be a very good addition to
> >> the Changelogs. Clearly you have some, you really should have shared.
> In the idle
On Mon, Jul 10, 2017 at 10:27:05AM -0700, Andi Kleen wrote:
> On Mon, Jul 10, 2017 at 06:42:06PM +0200, Peter Zijlstra wrote:
> > I have, and last time I did the actual poking at the LAPIC (to make NOHZ
> > happen) was by far the slowest thing happening.
>
> That must have been a long time ago
On 2017/7/11 1:27, Andi Kleen wrote:
> On Mon, Jul 10, 2017 at 06:42:06PM +0200, Peter Zijlstra wrote:
>> On Mon, Jul 10, 2017 at 07:46:09AM -0700, Andi Kleen wrote:
So how much of the gain is simply due to skipping NOHZ? Mike used to
carry a patch that would throttle NOHZ. And that is a
On Mon, Jul 10, 2017 at 06:42:06PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 10, 2017 at 07:46:09AM -0700, Andi Kleen wrote:
> > > So how much of the gain is simply due to skipping NOHZ? Mike used to
> > > carry a patch that would throttle NOHZ. And that is a _far_ smaller and
> > > simpler patch
On Mon, Jul 10, 2017 at 07:46:09AM -0700, Andi Kleen wrote:
> > So how much of the gain is simply due to skipping NOHZ? Mike used to
> > carry a patch that would throttle NOHZ. And that is a _far_ smaller and
> > simpler patch to do.
>
> Have you ever looked at a ftrace or PT trace of the idle
On Mon, Jul 10, 2017 at 10:46:47AM +0200, Peter Zijlstra wrote:
> On Mon, Jul 10, 2017 at 09:38:30AM +0800, Aubrey Li wrote:
> > We measured 3%~5% improvement in disk IO workload, and 8%~20% improvement in
> > network workload.
>
> Argh, what a mess :/
The mess is really the current idle entry
On 2017/7/10 16:46, Peter Zijlstra wrote:
> On Mon, Jul 10, 2017 at 09:38:30AM +0800, Aubrey Li wrote:
>> We measured 3%~5% improvement in disk IO workload, and 8%~20% improvement in
>> network workload.
>
> Argh, what a mess :/
>
> So how much of the gain is simply due to skipping NOHZ?
2017-07-10 16:46 GMT+08:00 Peter Zijlstra :
> On Mon, Jul 10, 2017 at 09:38:30AM +0800, Aubrey Li wrote:
>> We measured 3%~5% improvement in disk IO workload, and 8%~20% improvement in
>> network workload.
>
> Argh, what a mess :/
Agreed, this patchset is a variant of
On Mon, Jul 10, 2017 at 09:38:30AM +0800, Aubrey Li wrote:
> We measured 3%~5% improvement in disk IO workload, and 8%~20% improvement in
> network workload.
Argh, what a mess :/
So how much of the gain is simply due to skipping NOHZ? Mike used to
carry a patch that would throttle NOHZ. And that
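The "throttle NOHZ" idea referred to here can be sketched as a single gate: skip stopping the tick when the predicted idle period is too short to amortize the tick_nohz_idle_enter/exit round trip. The cost constants below echo the rough measurements quoted elsewhere in this thread but are otherwise hypothetical, as is the 2x margin.

```c
#include <stdbool.h>

/* Hypothetical round-trip costs, in the ballpark of the
 * tick_nohz_idle_enter (~7-11us) and tick_nohz_idle_exit (~8-21us)
 * figures quoted in this thread. */
#define NOHZ_ENTER_NS 10000LL
#define NOHZ_EXIT_NS  15000LL

/* Stop the tick only if the sleep clearly exceeds the NOHZ
 * round-trip cost; otherwise stay on the ticking path. */
static bool worth_stopping_tick(long long predicted_idle_ns)
{
    return predicted_idle_ns > 2 * (NOHZ_ENTER_NS + NOHZ_EXIT_NS);
}
```

This is a far smaller change than a separate fast-idle path, which is presumably why it keeps coming up as the comparison point.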