On Wed, Mar 09, 2016 at 11:00:42AM -0800, Andrew Hunter wrote:
> At Google, we essentially reverted 88ec22d and the subsequent tweaks
> to it, keeping vruntime absolute always, instead using
> task_move_group_fair to change the basis of relative min_vruntime
> between cpus. We found this made it a lot easier to reason about and
> work with cross-cpu computations.
On Wed, Mar 09, 2016 at 11:00:42AM -0800, Andrew Hunter wrote:
> At Google, we essentially reverted 88ec22d and the subsequent tweaks
> to it, keeping vruntime absolute always, instead using
> task_move_group_fair to change the basis of relative min_vruntime
Hello, Andrew
I am curious about how
At Google, we essentially reverted 88ec22d and the subsequent tweaks
to it, keeping vruntime absolute always, instead using
task_move_group_fair to change the basis of relative min_vruntime
between cpus. We found this made it a lot easier to reason about and
work with cross-cpu computations. I
Hi Peter,
On Wed, Mar 9, 2016 at 5:34 PM, Peter Zijlstra wrote:
> On Wed, Mar 09, 2016 at 02:52:57PM +0530, Pavan Kondeti wrote:
>
>> When a CFS task is enqueued during migration (load balance or change in
>> affinity), its vruntime is normalized before updating the current and
>> cfs_rq->min_vruntime.
On Wed, Mar 09, 2016 at 02:52:57PM +0530, Pavan Kondeti wrote:
> When a CFS task is enqueued during migration (load balance or change in
> affinity), its vruntime is normalized before updating the current and
> cfs_rq->min_vruntime.
static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
Hi
When a CFS task is enqueued during migration (load balance or change in
affinity), its vruntime is normalized before updating the current and
cfs_rq->min_vruntime. If the current entity is a low priority task or
belongs to a cgroup that has lower cpu.shares and it is the only entity
queued,