On Thu, May 9, 2013 at 1:22 AM, Alex Shi wrote:
> On 05/08/2013 07:34 PM, Peter Zijlstra wrote:
>>> > If we wanted to be more exacting about it we could just give them a
>>> > sched_slice() worth; this would have a few obvious "nice" properties
>>> > (pun intended).
>> Oh I see I misunderstood
>
> Here is the patch according to Paul's opinions.
>> Just referring to __update_task_entity_contrib in sched.h looks ugly;
>> comments are appreciated!
On 05/08/2013 07:34 PM, Peter Zijlstra wrote:
>> > If we wanted to be more exacting about it we could just give them a
>> > sched_slice() worth; this would have a few obvious "nice" properties
>> > (pun intended).
> Oh I see I misunderstood again :/ Its not about the effective load but weight
> of
On 05/09/2013 05:24 PM, Paul Turner wrote:
+#ifdef CONFIG_SMP
+void set_task_runnable_avg(struct task_struct *p)
+{
+	u64 slice;
+
+	slice = sched_slice(task_cfs_rq(p), &p->se);
+	p->se.avg.runnable_avg_sum = slice;
+	p->se.avg.runnable_avg_period = slice;
On 05/09/2013 05:30 PM, Paul Turner wrote:
>> > With sched_slice, we need to set the runnable avg sum/period after the
>> > new task is assigned to a specific CPU.
>> > So, setting them in __sched_fork is meaningless, and then
> This is still a reasonable choice.
>
> Assuming the system is well balanced, sched_slice() on
On Tue, May 07, 2013 at 11:24:52AM +0800, Alex Shi wrote:
> It will give a newly forked task 1 ms of extra running time. That will
> bring incorrect info if the new forked task goes to sleep for a while.
> But this info should benefit some benchmarks like aim7 and
> pthread_cond_broadcast. So I am convinced. :)
>
On Tue, May 07, 2013 at 04:20:55AM -0700, Paul Turner wrote:
Yes, 1024 was only intended as a starting point. We could also
arbitrarily pick something larger, the key is that we pick
_something_.
If we wanted to be more exacting about it we could just give them a
sched_slice() worth; this would have a few obvious "nice" properties
(pun intended).
On 05/07/2013 05:57 PM, Morten Rasmussen wrote:
> Agree. The tracked load of new tasks is often very unstable during first
> couple of ms on loaded systems. I'm not sure if 1024 is the right
> initial value. It may need to be larger.
Maybe Peter can give a better value according to the experience
On 05/07/2013 11:06 AM, Paul Turner wrote:
>> > Thanks Paul!
>> > It seems to work with this change if the new __sched_fork is moved
>> > after the p->sched_reset_on_fork setting.
>> >
>> > But why do we initialize the avg sum to 1024? A new task may go to
>> > sleep; the initial 1024 gives an unreasonable initial
On 05/06/2013 06:22 PM, Paul Turner wrote:
This is missing a scale_load() right? Further: Why not put this in
__sched_fork?
scale_load is not working now. Anyway I can add this.
I believe someone tracked down a plausible cause for this:
A governor was examining the values and making a
On 05/06/2013 06:17 PM, Paul Turner wrote:
Rather than exposing the representation of load_avg_contrib to
__sched_fork it might also be better to call:
__update_task_entity_contrib(&p->se)
After the initialization above; this would also avoid potential bugs
like the missing scale_load()
On 05/06/2013 09:45 AM, Alex Shi wrote:
We need to initialize se.avg.{decay_count, load_avg_contrib} for a
newly forked task.
Otherwise random values in those variables cause a mess when the new
task is enqueued:
enqueue_task_fair
  enqueue_entity
    enqueue_entity_load_avg
On 05/07/2013 11:24 AM, Alex Shi wrote:
The reason to store a small initial "observation" here is so that when
we reach our next period edge our load converges (presumably
down) towards its true target more smoothly; as well as providing a
task additional protection from being considered