Maybe we can skip the local group, since it's a bottom-up search, so we
already know there's no idle cpu in the lower domain from the prior iteration.
I made this change, but the results seem worse on my machines; I guess
starting the idle-cpu search bottom-up is a bad idea.
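For illustration only, here is a tiny userspace sketch of the idea being discussed (the topology bitmasks and helpers are hypothetical, not the kernel code): walk the domain levels from the smallest up and, at each level, skip the CPUs already covered by the previous level, since the prior iteration found no idle cpu there.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/* Hypothetical domain level: the set of CPUs it spans, as a bitmask. */
struct domain { unsigned int span; };

static bool cpu_idle_mask[NR_CPUS];

/* Search domains bottom-up; skip CPUs already covered by a lower level. */
static int find_idle_cpu(const struct domain *levels, int nr_levels, int target)
{
	unsigned int covered = 1u << target;	/* target itself already checked */

	for (int l = 0; l < nr_levels; l++) {
		unsigned int todo = levels[l].span & ~covered;

		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			if ((todo & (1u << cpu)) && cpu_idle_mask[cpu])
				return cpu;

		covered |= levels[l].span;
	}
	return target;	/* no idle CPU anywhere, fall back to the target */
}

int main(void)
{
	/* SMT pair {0,1}, core group {0-3}, whole package {0-7}. */
	struct domain levels[] = { { 0x03 }, { 0x0f }, { 0xff } };

	cpu_idle_mask[5] = true;
	printf("picked cpu %d\n", find_idle_cpu(levels, 3, 0));
	return 0;
}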
The following is the full version with
On 01/24/2013 01:15 PM, Viresh Kumar wrote:
On 24 January 2013 09:00, Alex Shi alex@intel.com wrote:
This patchset can be used, but it causes a 5~7% drop in the burst-waking
benchmark aim9 on my 2-socket machine. The reason is that the runnable load
of newly woken tasks is too light in their early stage, which causes imbalance.
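As a rough, userspace-only illustration of why a freshly woken task looks so light (this assumes only the per-entity load-tracking decay described later in this thread, where a contribution halves every 32ms; it is not the kernel's fixed-point code):

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* Per-millisecond decay factor chosen so that a contribution
	 * halves every 32ms. */
	const double y = pow(0.5, 1.0 / 32.0);

	/* Fraction of the maximum trackable load that a task reaches
	 * after running continuously for t ms: the normalized geometric
	 * series 1 - y^t. */
	for (int t = 1; t <= 128; t *= 2)
		printf("after %3d ms running: %5.1f%% of max load\n",
		       t, 100.0 * (1.0 - pow(y, t)));
	return 0;
}

So a task that has only run for a few milliseconds since waking contributes only a small fraction of its eventual load, which is why a burst of wakeups can look almost weightless to the balancer.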
On 01/09/2013 11:14 AM, Preeti U Murthy wrote:
Here comes the point of making both load balancing and wakeup
balancing (select_idle_sibling) cooperative. How about we always schedule
the woken-up task on the prev_cpu? This seems more sensible, considering
that load balancing accounts for blocked load as
The blocked load of a cluster will be high if the blocked tasks have
run recently. The contribution of a blocked task is divided by 2
every 32ms, so a high blocked load is made up of recently
running tasks, and long-sleeping tasks will not influence the load
balancing.
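To make that concrete, a small standalone sketch of the decay just described (plain floating point, not the kernel's fixed-point arithmetic): the remembered load of a blocked task is halved for every 32ms it has slept, so a long sleeper contributes essentially nothing to the cluster's blocked load.

#include <math.h>
#include <stdio.h>

/* Remaining blocked contribution after sleeping 'ms' milliseconds,
 * given that the contribution is divided by 2 every 32ms. */
static double decayed(double load, int ms)
{
	return load * pow(0.5, ms / 32.0);
}

int main(void)
{
	/* Two blocked tasks of equal weight: one just blocked, one that
	 * has been sleeping for half a second. */
	double recent = decayed(1024.0, 8);
	double old    = decayed(1024.0, 500);

	printf("recently blocked task: %8.2f\n", recent);
	printf("long sleeping task:    %8.2f\n", old);
	printf("summed blocked load:   %8.2f (dominated by the recent task)\n",
	       recent + old);
	return 0;
}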
the patchset with runnable_load_avg + blocked_load_avg in
weighted_cpu_load().
Are the above two what you are comparing? And in the above two versions,
have you included your "[PATCH] sched: use instant load weight in burst
regular load balance"?
No, this patch is not included.
On 01/20/2013 09:22 PM, Alex Shi wrote
On 01/17/2013 01:17 PM, Namhyung Kim wrote:
On Wed, 16 Jan 2013 22:08:21 +0800, Alex Shi wrote:
On 01/08/2013 04:41 PM, Preeti U Murthy wrote:
Hi Mike,
Thank you very much for such a clear and comprehensive explanation.
So when I put together the problem and the proposed solution pieces
apply this. Just to show the logic.
===
From 145ff27744c8ac04eda056739fe5aa907a00877e Mon Sep 17 00:00:00 2001
From: Alex Shi alex@intel.com
Date: Fri, 11 Jan 2013 16:49:03 +0800
Subject: [PATCH 3/7] sched: select_idle_sibling optimization
The current logic in this function will insist
On Tue, Dec 18, 2012 at 5:53 PM, Vincent Guittot
vincent.guit...@linaro.org wrote:
On 17 December 2012 16:24, Alex Shi alex@intel.com wrote:
The scheme below tries to summarize the idea:
Socket      |  socket 0  |  socket 1   |  socket 2   |  socket 3   |
LCPU        | 0 | 1-15 | 16 | 17-31 | 32 | 33-47 | 48 | 49-63 |
buddy conf0 | 0 | 0    | 1  | 16    | 2  | 32    | 3  | 48    |
buddy conf1 | 0 | 0    | 0  | 16    | 16 | 32    | 32
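For what it's worth, the conf0 row above can be reproduced with a toy helper for this particular 4-socket, 16-LCPUs-per-socket layout (purely illustrative; the real buddy is chosen during sched_domain construction, and the function below is not from the patch):

#include <stdio.h>

#define SOCKETS		4
#define CPUS_PER_SOCKET	16

/* conf0 from the table above: a non-leading CPU packs onto the first
 * CPU of its own socket, and each socket's first CPU packs onto one of
 * the first CPUs of socket 0. */
static int buddy_conf0(int cpu)
{
	int socket = cpu / CPUS_PER_SOCKET;
	int local  = cpu % CPUS_PER_SOCKET;

	if (local != 0)
		return socket * CPUS_PER_SOCKET;	/* first CPU of own socket */
	return socket;					/* CPU 0..3, all on socket 0 */
}

int main(void)
{
	int samples[] = { 0, 1, 15, 16, 17, 32, 33, 48, 49 };

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("buddy(%2d) = %2d\n", samples[i], buddy_conf0(samples[i]));
	return 0;
}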
to be removed.
From 96bee9a03b2048f2686fbd7de0e2aee458dbd917 Mon Sep 17 00:00:00 2001
From: Alex Shi alex@intel.com
Date: Mon, 17 Dec 2012 09:42:57 +0800
Subject: [PATCH 01/18] sched: remove SD_PERFER_SIBLING flag
The flag was introduced in commit b5d978e0c7e79a. Its purpose seems to be
trying
On 12/14/2012 05:33 PM, Vincent Guittot wrote:
On 14 December 2012 02:46, Alex Shi alex@intel.com wrote:
On 12/13/2012 11:48 PM, Vincent Guittot wrote:
On 13 December 2012 15:53, Vincent Guittot vincent.guit...@linaro.org
wrote:
On 13 December 2012 15:25, Alex Shi alex@intel.com
On 12/13/2012 06:11 PM, Vincent Guittot wrote:
On 13 December 2012 03:17, Alex Shi alex@intel.com wrote:
On 12/12/2012 09:31 PM, Vincent Guittot wrote:
During the creation of sched_domain, we define a pack buddy CPU for each CPU
when one is available. We want to pack at all levels where
On 12/13/2012 11:48 PM, Vincent Guittot wrote:
On 13 December 2012 15:53, Vincent Guittot vincent.guit...@linaro.org wrote:
On 13 December 2012 15:25, Alex Shi alex@intel.com wrote:
On 12/13/2012 06:11 PM, Vincent Guittot wrote:
On 13 December 2012 03:17, Alex Shi alex@intel.com wrote
On 12/14/2012 12:45 PM, Mike Galbraith wrote:
Do you have further ideas for the buddy cpu in such an example?
Which kind of sched_domain configuration do you have for such a system,
and how many sched_domain levels do you have?
It is a general x86 domain configuration, with 4 levels,
On 12/14/2012 03:45 PM, Mike Galbraith wrote:
On Fri, 2012-12-14 at 14:36 +0800, Alex Shi wrote:
On 12/14/2012 12:45 PM, Mike Galbraith wrote:
Do you have further ideas for the buddy cpu in such an example?
Which kind of sched_domain configuration do you have for such a system,
and how many
On 12/12/2012 09:31 PM, Vincent Guittot wrote:
During the creation of sched_domain, we define a pack buddy CPU for each CPU
when one is available. We want to pack at all levels where a group of CPUs can
be power gated independently from others.
On a system that can't power gate a group of CPUs
On 12/12/2012 09:31 PM, Vincent Guittot wrote:
This new flag, SD_SHARE_POWERDOMAIN, is used to reflect whether groups of CPUs in
a sched_domain level can reach a different power state or not. If clusters can
be power gated independently, for example, the flag should be cleared at the CPU
level.
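Just to illustrate the intended semantics (the flag name is from the quoted patch, but the struct and check below are simplified stand-ins, not kernel code): packing toward a buddy is only worthwhile at levels whose groups do not share a power domain, i.e. where the flag is cleared.

#include <stdbool.h>
#include <stdio.h>

#define SD_SHARE_POWERDOMAIN	0x0100	/* illustrative bit value only */

/* Simplified stand-in for one sched_domain topology level. */
struct level {
	const char  *name;
	unsigned int flags;
};

/* Packing helps only where sibling groups can be power gated
 * independently, i.e. the flag is cleared. */
static bool level_worth_packing(const struct level *l)
{
	return !(l->flags & SD_SHARE_POWERDOMAIN);
}

int main(void)
{
	struct level levels[] = {
		{ "SMT", SD_SHARE_POWERDOMAIN },  /* threads share the core's power domain */
		{ "MC",  0 },                     /* cores can be gated independently */
		{ "CPU", 0 },                     /* clusters can be gated independently */
	};

	for (unsigned int i = 0; i < sizeof(levels) / sizeof(levels[0]); i++)
		printf("%-3s level: %s\n", levels[i].name,
		       level_worth_packing(&levels[i]) ? "pack here" : "skip packing");
	return 0;
}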
On 12/13/2012 10:17 AM, Alex Shi wrote:
On 12/12/2012 09:31 PM, Vincent Guittot wrote:
During the creation of sched_domain, we define a pack buddy CPU for each CPU
when one is available. We want to pack at all levels where a group of CPUs can
be power gated independently from others