ORIG_RAX: CS: 0010 SS: 0018
Signed-off-by: Mike Galbraith
Cc: sta...@vger.kernel.org
kernel/wor
On Tue, 2013-03-19 at 23:16 -0400, Chen Gong wrote:
> On Tue, Mar 19, 2013 at 06:44:08PM -0400, Dave Jones wrote:
> > Date: Tue, 19 Mar 2013 18:44:08 -0400
> > From: Dave Jones
> > To: Linux Kernel
> > Cc: x...@kernel.org
> > Subject: cpu offline causes backtrace from cmci_rediscover
> > U
On Thu, 2013-02-07 at 19:14 +, Christoph Lameter wrote:
> On Thu, 7 Feb 2013, Frederic Weisbecker wrote:
>
> > Not with hrtick.
>
> hrtick? Did we not already try that a couple of years back and it turned
> out that the overhead of constantly reprogramming a timer via the PCI bus
> was causi
On Tue, 2013-02-12 at 01:31 +0400, Kirill Tkhai wrote:
> A situation is possible where rq->rt is throttled or
> it has no child entities and there are RT tasks ready
> for execution in the rq which are the only tasks
> of TASK_RUNNING state. In this case pick_next_task
> takes idle tasks and idle
On Tue, 2013-02-12 at 09:12 +0100, Stanislav Meduna wrote:
> On 12.02.2013 08:06, Mike Galbraith wrote:
>
> >> In this case pick_next_task takes idle tasks and idle wastes cpu
> >> time.
>
> > That's not a waste of CPU time, that's utilization enf
On Tue, 2013-04-02 at 15:23 +0800, Michael Wang wrote:
> On 04/02/2013 11:23 AM, Alex Shi wrote:
> [snip]
> >
> > [patch v3 1/8] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
> > [patch v3 2/8] sched: set initial value of runnable avg for new
> > [patch v3 3/8] sched: only count runnable av
On Tue, 2013-03-26 at 16:00 -0400, Rik van Riel wrote:
> On Tue, 26 Mar 2013 14:07:14 -0400
> Sasha Levin wrote:
>
> > > Not necessarily, we do release everything at the end of the function:
> > > out_unlock_free:
> > > sem_unlock(sma, locknum);
> >
> > Ow, there's a rcu_read_unlock() in
On Fri, 2013-04-05 at 09:21 -0400, Rik van Riel wrote:
> On 04/05/2013 12:38 AM, Mike Galbraith wrote:
> > On Tue, 2013-03-26 at 16:00 -0400, Rik van Riel wrote:
>
> >> The ipc semaphore code has a nasty RCU locking tangle, with both
> >> find_alloc_undo and semti
On Tue, 2013-01-29 at 09:45 +0800, Alex Shi wrote:
> On 01/28/2013 11:47 PM, Mike Galbraith wrote:
> > monteverdi:/abuild/mike/:[0]# echo 1 > /sys/devices/system/cpu/cpufreq/boost
> > monteverdi:/abuild/mike/:[0]# massive_intr 10 60
> > 014635 00058160
> > 014633
On Thu, 2013-01-24 at 14:01 +0800, Michael Wang wrote:
> I've enabled WAKE flag on my box like you did, but still can't see
> regression, and I've just tested on a power server with 64 cpu, also
> failed to reproduce the issue (not compared with virgin yet, but can't
> see collapse).
I'm not surp
On Thu, 2013-01-24 at 15:15 +0800, Michael Wang wrote:
> On 01/24/2013 02:51 PM, Mike Galbraith wrote:
> > On Thu, 2013-01-24 at 14:01 +0800, Michael Wang wrote:
> >
> >> I've enabled WAKE flag on my box like you did, but still can't see
> >> regress
On Thu, 2013-01-24 at 16:14 +0800, Michael Wang wrote:
> Now it's time to work on v3 I think, let's see what we could get this time.
Maybe v3 can try to not waste so much ram on affine map?
Even better would be if it could just go away, along with that relic of
the bad old days, wake_affine(), and we
On Thu, 2013-01-24 at 17:26 +0800, Michael Wang wrote:
> On 01/24/2013 05:07 PM, Mike Galbraith wrote:
> > On Thu, 2013-01-24 at 16:14 +0800, Michael Wang wrote:
> >
> >> Now it's time to work on v3 I think, let's see what we could get this time.
> >
&
On Fri, 2013-01-25 at 14:05 -0500, Rik van Riel wrote:
> The performance issue observed with AIM7 is still a mystery.
Hm. AIM7 mystery _may_ be the same crud I see on a 4 node 40 core box.
Stock scheduler knobs are too preempt happy, produce unstable results.
I twiddle them as below to stabilize
On Sat, 2013-01-26 at 13:05 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Fri, 2013-01-25 at 14:05 -0500, Rik van Riel wrote:
> >
> > > The performance issue observed with AIM7 is still a mystery.
> >
> > Hm. AIM7 mystery _may_ be the s
alancing
> >>> doesn't do well for many tasks burst waking. After talking with Mike
> >>> Galbraith, we agreed to just use runnable avg in power friendly
> >>> scheduling and keep the current instant load in performance scheduling for
> >>> low l
If the previous CPU is cache affine and idle, select it.
Signed-off-by: Mike Galbraith
---
kernel/sched/fair.c | 21 +++--
1 file changed, 7 insertions(+), 14 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3270,25 +3270,18 @@ find_idlest_cpu(struct
On Sun, 2013-01-27 at 21:25 +0800, Alex Shi wrote:
> On 01/27/2013 06:35 PM, Borislav Petkov wrote:
> > On Sun, Jan 27, 2013 at 05:36:25AM +0100, Mike Galbraith wrote:
> >> With aim7 compute on 4 node 40 core box, I see stable throughput
> >> improvement at tasks = nr
On Sun, 2013-01-27 at 16:51 +0100, Mike Galbraith wrote:
> On Sun, 2013-01-27 at 21:25 +0800, Alex Shi wrote:
> > On 01/27/2013 06:35 PM, Borislav Petkov wrote:
> > > On Sun, Jan 27, 2013 at 05:36:25AM +0100, Mike Galbraith wrote:
> > >> With aim7 compute on 4
On Mon, 2013-01-28 at 13:51 +0800, Alex Shi wrote:
> On 01/28/2013 01:17 PM, Mike Galbraith wrote:
> > On Sun, 2013-01-27 at 16:51 +0100, Mike Galbraith wrote:
> >> On Sun, 2013-01-27 at 21:25 +0800, Alex Shi wrote:
> >>> On 01/27/2013 06:35 PM, Borislav Petko
On Mon, 2013-01-28 at 07:15 +0100, Mike Galbraith wrote:
> On Mon, 2013-01-28 at 13:51 +0800, Alex Shi wrote:
> > On 01/28/2013 01:17 PM, Mike Galbraith wrote:
> > > On Sun, 2013-01-27 at 16:51 +0100, Mike Galbraith wrote:
> > >> On Sun, 2013-01-27 at 21:25 +0800,
On Mon, 2013-01-28 at 13:19 +0800, Alex Shi wrote:
> On 01/27/2013 06:40 PM, Borislav Petkov wrote:
> > On Sun, Jan 27, 2013 at 10:41:40AM +0800, Alex Shi wrote:
> >> Just rerun some benchmarks: kbuild, specjbb2005, oltp, tbench, aim9,
> >> hackbench, fileio-cfq of sysbench, dbench, aiostress, mul
On Mon, 2013-01-28 at 07:42 +0100, Mike Galbraith wrote:
> Back to original 1ms sleep, 8ms work, turning NUMA box into a single
> node 10 core box with numactl.
(aim7 in one 10 core node.. so spread, no delta.)
Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Sui
On Mon, 2013-01-28 at 15:17 +0800, Alex Shi wrote:
> On 01/28/2013 02:49 PM, Mike Galbraith wrote:
> > On Mon, 2013-01-28 at 13:19 +0800, Alex Shi wrote:
> >> On 01/27/2013 06:40 PM, Borislav Petkov wrote:
> >>> On Sun, Jan 27, 2013 at 10:41:40AM +0800, Alex Shi
On Mon, 2013-01-28 at 10:55 +0100, Borislav Petkov wrote:
> On Mon, Jan 28, 2013 at 06:17:46AM +0100, Mike Galbraith wrote:
> > Zzzt. Wish I could turn turbo thingy off.
>
> Try setting /sys/devices/system/cpu/cpufreq/boost to 0.
How convenient (test) works too.
So much for tur
On Mon, 2013-01-28 at 11:53 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > If the previous CPU is cache affine and idle, select it.
>
> No objections in principle - but would be nice to have a
> changelog with numbers, % of improvement included and so?
W
On Mon, 2013-01-28 at 12:21 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Mon, 2013-01-28 at 11:53 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith wrote:
> > >
> > > > If the previous CPU is cache affine and idle, select it.
> >
On Mon, 2013-01-28 at 12:29 +0100, Borislav Petkov wrote:
> On Mon, Jan 28, 2013 at 11:44:44AM +0100, Mike Galbraith wrote:
> > On Mon, 2013-01-28 at 10:55 +0100, Borislav Petkov wrote:
> > > On Mon, Jan 28, 2013 at 06:17:46AM +0100, Mike Galbraith wrote:
> > > >
On Mon, 2013-01-28 at 12:32 +0100, Mike Galbraith wrote:
> On Mon, 2013-01-28 at 12:29 +0100, Borislav Petkov wrote:
> > On Mon, Jan 28, 2013 at 11:44:44AM +0100, Mike Galbraith wrote:
> > > On Mon, 2013-01-28 at 10:55 +0100, Borislav Petkov wrote:
> > > > On M
On Mon, 2013-01-28 at 06:17 +0100, Mike Galbraith wrote:
Ok damnit.
> monteverdi:/abuild/mike/:[0]# echo powersaving >
> /sys/devices/system/cpu/sched_policy/current_sched_policy
> monteverdi:/abuild/mike/:[0]# massive_intr 10 60
> 043321 00058616
> 043313 00058616
> 0433
On Mon, 2013-01-28 at 16:22 +0100, Borislav Petkov wrote:
> On Mon, Jan 28, 2013 at 12:40:46PM +0100, Mike Galbraith wrote:
> > > No no, that's not restricted to one node. It's just overloaded because
> > > I turned balancing off at the NODE domain level.
>
On Mon, 2013-02-25 at 10:23 +0800, Alex Shi wrote:
> One problem is how to decide the criteria of a burst. If we set
> 5 wakeups/ms as the burst threshold, we will lose 4 wakeups/ms.
> Another problem is the burst detection cost; we need to track a period of
> history info of the wakeups, better on
On Sun, 2013-02-24 at 22:59 +0100, Mario Giammarco wrote:
> I have searched on the internet and I see that it is a problem common to
> several AMD platforms.
> Even in Microsoft Windows several people have the same problem and
> they solved it disabling cool&quiet on bios. It seems that some amd
> cpus o
On Mon, 2013-02-25 at 17:53 +0800, Alex Shi wrote:
> On 02/25/2013 11:23 AM, Mike Galbraith wrote:
> > On Mon, 2013-02-25 at 10:23 +0800, Alex Shi wrote:
> >
> >> One of problem is the how to decide the criteria of the burst? If we set
> >> 5 waking up/ms is b
On Thu, 2013-02-28 at 14:38 +0800, Michael Wang wrote:
> + /*
> + * current is the only task on rq and it is
> + * going to sleep, current cpu will be a nice
> + * candidate for p to
On Thu, 2013-02-28 at 15:40 +0800, Michael Wang wrote:
> Hi, Mike
>
> Thanks for your reply.
>
> On 02/28/2013 03:18 PM, Mike Galbraith wrote:
> > On Thu, 2013-02-28 at 14:38 +0800, Michael Wang wrote:
> >
> >> + /*
> >>
On Thu, 2013-02-28 at 15:42 +0800, Michael Wang wrote:
> I mean could we say that more ops/sec means more work has been done?
Sure. But it's fairly meaningless, it's all scheduler. Real tasks do
more than schedule.
-Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kern
On Thu, 2013-02-28 at 16:14 +0800, Michael Wang wrote:
> On 02/28/2013 04:04 PM, Mike Galbraith wrote:
> > It would be nice if it _were_ a promise, but it is not, it's a hint.
>
> Bad to know :(
>
> Should we fix it or this is by designed? The comments after WF_SYNC
On Thu, 2013-02-28 at 16:49 +0800, Michael Wang wrote:
> On 02/28/2013 04:24 PM, Mike Galbraith wrote:
> > On Thu, 2013-02-28 at 16:14 +0800, Michael Wang wrote:
> >> On 02/28/2013 04:04 PM, Mike Galbraith wrote:
> >
> >>> It would be nice if it _were_ a
On Thu, 2013-02-28 at 18:25 +0900, Namhyung Kim wrote:
> Not sure if it should require bidirectional relationship. Looks like
> just for benchmarks. Isn't there a one-way relationship that could get
> a benefit from this? I don't know ;-)
?? Meaningful relationships are bare minimum bidirecti
On Mon, 2013-02-18 at 23:06 -0500, Steven Rostedt wrote:
> On Tue, 2013-02-19 at 09:56 +0800, Li Zefan wrote:
>
> > Oh ignore me. Just saw the patchset for 3.4-rt.
>
> Note, I already have part of the 3.4-feature (softirq backport) tested
> and ready. What I'm waiting on is trying to figure out
On Wed, 2013-02-20 at 13:55 +0800, Alex Shi wrote:
> Joonsoo Kim suggests not packing exec tasks, since the old task utils are
> possibly unusable.
(I'm stumbling around in rtmutex PI land, all dazed and confused, so
forgive me if my peripheral following of this thread is off target;)
Hm, possibl
On Wed, 2013-02-20 at 16:11 +0800, Alex Shi wrote:
> On 02/20/2013 03:40 PM, Mike Galbraith wrote:
> > On Wed, 2013-02-20 at 13:55 +0800, Alex Shi wrote:
> >
> >> Joonsoo Kim suggests not packing exec task, since the old task utils is
> >> possibly unuseable.
On Wed, 2013-02-20 at 14:32 +0100, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 11:49 +0100, Ingo Molnar wrote:
>
> > The changes look clean and reasoable,
>
> I don't necessarily agree, note that O(n^2) storage requirement that
> Michael failed to highlight ;-)
(yeah, I mentioned that needs
On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
> On 02/20/2013 06:49 PM, Ingo Molnar wrote:
> [snip]
> >
> > The changes look clean and reasoable, any ideas exactly *why* it
> > speeds up?
> >
> > I.e. are there one or two key changes in the before/after logic
> > and scheduling patter
On Thu, 2013-02-21 at 15:00 +0800, Michael Wang wrote:
> On 02/21/2013 02:11 PM, Mike Galbraith wrote:
> > On Thu, 2013-02-21 at 12:51 +0800, Michael Wang wrote:
> >> On 02/20/2013 06:49 PM, Ingo Molnar wrote:
> >> [snip]
> [snip]
> >>
> >
On Thu, 2013-02-21 at 17:08 +0800, Michael Wang wrote:
> But does this patch set really cause a regression on your Q6600? It may
> have sacrificed some things, but I still think it will benefit far more,
> especially on huge systems.
We spread on FORK/EXEC, and will no longer pull communicating tasks
On Fri, 2013-02-22 at 10:36 +0800, Michael Wang wrote:
> On 02/21/2013 05:43 PM, Mike Galbraith wrote:
> > On Thu, 2013-02-21 at 17:08 +0800, Michael Wang wrote:
> >
> >> But is this patch set really cause regression on your Q6600? It may
> >> sacrificed some
On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> According to the testing results, I cannot agree that this purpose of
> wake_affine() benefits us, but I'm sure that wake_affine() is a terrible
> performance killer when the system is busy.
(hm, result is singular.. pgbench in 1:N mode only?)
--
On Fri, 2013-02-22 at 13:26 +0800, Michael Wang wrote:
> Just to confirm that I'm not on the wrong track, does the 1:N mode here mean
> 1 task forked N threads, and the children always talk with the father?
Yes, one server, many clients.
-Mike
On Fri, 2013-02-22 at 14:06 +0800, Michael Wang wrote:
> On 02/22/2013 01:08 PM, Mike Galbraith wrote:
> > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> >
> >> According to the testing result, I could not agree this purpose of
> >> wake_affin
On Fri, 2013-02-22 at 14:42 +0800, Michael Wang wrote:
> So this is trying to take care the condition when curr_cpu(local) and
> prev_cpu(remote) are on different nodes, which in the old world,
> wake_affine() won't be invoked, correct?
It'll be called any time this_cpu and prev_cpu aren't one an
On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> > But that's really a benefit that is hard to estimate, especially when
> > the workload is heavy; the cost of wake_affine() is very high when
> > calculating each se one by one, is that wor
On Fri, 2013-02-22 at 10:54 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
> > > On Fri, 2013-02-22 at 10:37 +0800, Michael Wang wrote:
> > > > But that's really some benefit hardly to be
On Fri, 2013-02-22 at 13:11 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Fri, 2013-02-22 at 10:54 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith wrote:
> > >
> > > > On Fri, 2013-02-22 at 09:36 +0100, Peter Zijlstra wrote:
&
On Fri, 2013-02-22 at 14:06 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > > > No, that's too high, you loose too much of the pretty
> > > > face. [...]
> > >
> > > Then a logical proportion of it - such as half of it?
> >
On Fri, 2013-02-22 at 15:30 +0100, Mike Galbraith wrote:
> On Fri, 2013-02-22 at 14:06 +0100, Ingo Molnar wrote:
> > I think it might be better to measure the scheduling rate all
> > the time, and save the _shortest_ cross-cpu-wakeup and
> > same-cpu-wakeup latencie
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
> On tickless system, one CPU runs load balance for all idle CPUs.
> The cpu_load of this CPU is updated before starting the load balance
> of each other idle CPUs. We should instead update the cpu_load of the
> balance_cpu.
>
> Signed-of
On Thu, 2012-09-13 at 10:19 +0200, Peter Zijlstra wrote:
> On Thu, 2012-09-13 at 08:49 +0200, Mike Galbraith wrote:
> > On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
> > > On tickless system, one CPU runs load balance for all idle CPUs.
> > > The cpu_
On Thu, 2012-09-13 at 13:58 -0700, Tejun Heo wrote:
> 7. Misc issues
>
* Extract synchronize_rcu() from user interface? Exporting grace
periods to userspace isn't wonderful for dynamic launchers.
-Mike
On Sun, 2008-02-03 at 14:23 -0400, Kevin Winchester wrote:
> And git blame lead me to the following commit:
>
> commit 266b9f8727976769e2ed2dad77ac9295f37e321e
> Author: Thomas Gleixner <[EMAIL PROTECTED]>
> Date: Wed Jan 30 13:34:06 2008 +0100
>
> x86: fix ioremap RAM check
>
> Sig
On Tue, 2008-01-22 at 06:47 +0100, Mike Galbraith wrote:
> On Tue, 2008-01-22 at 16:25 +1100, Nick Piggin wrote:
> > On Tuesday 22 January 2008 16:03, Mike Galbraith wrote:
>
> > > I've hit same twice recently (not pan, and not repeatable).
> >
> > Nasty.
On Tue, 2012-10-02 at 07:51 +0100, Mel Gorman wrote:
> I'm going through old test results to see could I find any leftover
> performance regressions that have not yet been fixed (most have at this point
> or at least changed in such a way to make a plain revert impossible). One
> major regression
On Tue, 2012-10-02 at 09:45 +0100, Mel Gorman wrote:
> On Tue, Oct 02, 2012 at 09:49:36AM +0200, Mike Galbraith wrote:
> > Hm, 518cd623 fixed up the troubles I saw. How exactly are you running
> > this?
> >
>
> You saw problems with TCP_RR where as this is UDP_STR
On Tue, 2012-10-02 at 14:14 +0100, Mel Gorman wrote:
> On Tue, Oct 02, 2012 at 11:31:22AM +0200, Mike Galbraith wrote:
> > On Tue, 2012-10-02 at 09:45 +0100, Mel Gorman wrote:
> > > On Tue, Oct 02, 2012 at 09:49:36AM +0200, Mike Galbraith wrote:
> >
> > > >
On Tue, 2012-10-02 at 14:14 +0100, Mel Gorman wrote:
> On Tue, Oct 02, 2012 at 11:31:22AM +0200, Mike Galbraith wrote:
> > On Tue, 2012-10-02 at 09:45 +0100, Mel Gorman wrote:
> > > On Tue, Oct 02, 2012 at 09:49:36AM +0200, Mike Galbraith wrote:
> >
> > > >
On Wed, 2012-10-03 at 08:50 +0200, Mike Galbraith wrote:
> On Tue, 2012-10-02 at 14:14 +0100, Mel Gorman wrote:
> > On Tue, Oct 02, 2012 at 11:31:22AM +0200, Mike Galbraith wrote:
> > > On Tue, 2012-10-02 at 09:45 +0100, Mel Gorman wrote:
> > > > On Tue, Oct 02,
On Wed, 2012-10-03 at 10:13 +0200, Mike Galbraith wrote:
> Watching all cores instead.
>
> switch rate ~890KHzswitch rate ~570KHz
> NO_TTWU_QUEUE nohz=off TTWU_QUEUE nohz=off
> 5.38% [kernel] [k] __schedule
On Mon, 2008-02-11 at 16:45 -0500, Bill Davidsen wrote:
> I think the moving to another CPU gets really dependent on the CPU type.
> On a P4+HT the caches are shared, and moving costs almost nothing for
> cache hits, while on CPUs which have other cache layouts the migration
> cost is higher.
On Mon, 2008-02-11 at 11:26 -0600, Olof Johansson wrote:
> On Mon, Feb 11, 2008 at 09:15:55AM +0100, Mike Galbraith wrote:
> > Piddling around with your testcase, it still looks to me like things
> > improved considerably in latest greatest git. Hopefully that means
> > hap
On Sun, 2008-02-10 at 01:00 -0600, Olof Johansson wrote:
> On Sun, Feb 10, 2008 at 07:15:58AM +0100, Willy Tarreau wrote:
>
> > > I agree that the testcase is highly artificial. Unfortunately, it's
> > > not uncommon to see these kind of weird testcases from customers tring
> > > to evaluate new
On Mon, 2008-02-11 at 14:31 -0600, Olof Johansson wrote:
> On Mon, Feb 11, 2008 at 08:58:46PM +0100, Mike Galbraith wrote:
> > It shouldn't matter if you yield or not really, that should reduce the
> > number of non-work spin cycles wasted awaiting preemption as threads
&
On Tue, 2008-02-12 at 10:23 +0100, Mike Galbraith wrote:
> If you plunk a usleep(1) in prior to calling thread_func() does your
> testcase performance change radically? If so, I wonder if the real
> application has the same kind of dependency.
The answer is yes for 2.6.22, and no f
With CONFIG_MARKERS set, modpost griped about unknown option -- K.
-Mike
On Sat, 2008-02-16 at 02:23 +0100, Gabriel C wrote:
> Pavel Machek wrote:
> > Hi!
>
> Hi ,
>
> >> I'd not really done any real work under 2.6.25 yet, but now while
> >> running
> >> a kernel compile with -j4 (single processor, dual core Pentium D), I see
> >> this behavior. The mouse curso
On Sat, 2008-02-16 at 11:09 +0100, Gabriel C wrote:
> Turning CONFIG_GROUP_SCHED off on this box fixes the mouse and keyboard
> problems.
> ( and some other , eg: box doesn't feel 'slow' and 'laggy' anymore )
Yeah, looks like it is the same issue. I'd suggest that folks who are
hitting this d
On Fri, 2012-11-02 at 21:09 +0100, Michal Zatloukal wrote:
> On the new kernel, the nice processes are never starved - even when
> starting a tab-laden chromium session, the processes for BOINC keep
> about 20% CPU each (that is normalized to all CPUs, ie 40% nice load
> on each core). The problem
On Sat, 2012-11-03 at 04:33 -0700, Mike Galbraith wrote:
> On Fri, 2012-11-02 at 21:09 +0100, Michal Zatloukal wrote:
>
> > On the new kernel, the nice processes are never starved - even when
> > starting a tab-laden chromium session, the processes for BOINC keep
> > abo
On Sun, 2012-11-04 at 10:20 +0100, Uwaysi Bin Kareem wrote:
> Ok, anyway realtime processes did not work quite as expected.
> ("overloaded" machine, even though cpu-time is only 10%). So I guess I
> have to enable cgroups and live with the overhead then.
>
> If I set cpu-limits there, does th
On Thu, 2012-07-26 at 13:27 +0800, Alex Shi wrote:
> if (affine_sd) {
> - if (cpu == prev_cpu || wake_affine(affine_sd, p, sync))
> + if (wake_affine(affine_sd, p, sync))
> prev_cpu = cpu;
>
> new_cpu = select_idle_sibling(p, prev
On Thu, 2012-07-26 at 17:02 +0400, Alexey Vlasov wrote:
> On Wed, Jul 25, 2012 at 03:57:47PM +0200, Mike Galbraith wrote:
> >
> > I'd profile it with perf, and expect to find a large pile of cycles.
>
> I did it the as following:
> # perf stat cat /proc/self/cgrou
On Fri, 2012-07-27 at 09:47 +0800, Alex Shi wrote:
> On 07/26/2012 05:37 PM, Mike Galbraith wrote:
>
> > On Thu, 2012-07-26 at 13:27 +0800, Alex Shi wrote:
> >
> >>if (affine_sd) {
> >> - if (cpu == prev_cpu || wake_affine(affine_sd, p, sy
On Mon, 2012-09-24 at 17:30 +0200, Peter Zijlstra wrote:
> Anyway, does anybody have any clue as to why AMD and Intel machine
> behave significantly different here? Does an Intel box with HT disabled
> behave similar to AMD? or is it something about the micro-architecture?
If you mean pgbench, it
On Mon, 2012-09-24 at 18:54 +0200, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 09:30 -0700, Linus Torvalds wrote:
> > Also, do we really want to spread things out that aggressively?
> > How/why do we know that we don't want to share L2 caches, for example?
> > It sounds like a bad idea from a p
On Mon, 2012-09-24 at 12:12 -0700, Linus Torvalds wrote:
> On Mon, Sep 24, 2012 at 11:26 AM, Mike Galbraith wrote:
> >
> > Aside from the cache pollution I recall having been mentioned, on my
> > E5620, cross core is a tbench win over affine, cross thread is not.
>
>
On Mon, 2012-09-24 at 21:20 +0200, Borislav Petkov wrote:
> On Mon, Sep 24, 2012 at 12:12:18PM -0700, Linus Torvalds wrote:
> > On Mon, Sep 24, 2012 at 11:26 AM, Mike Galbraith wrote:
> > >
> > > Aside from the cache pollution I recall having been mentioned, on my
&
On Mon, 2012-09-24 at 19:11 -0700, Linus Torvalds wrote:
> In the not-so-distant past, we had the intel "Dunnington" Xeon, which
> was iirc basically three Core 2 duo's bolted together (ie three
> clusters of two cores sharing L2, and a fully shared L3). So that was
> a true multi-core with fairly
On Mon, 2012-09-24 at 20:10 -0700, Linus Torvalds wrote:
> On Mon, Sep 24, 2012 at 7:49 PM, Mike Galbraith wrote:
> >
> > Ah. That's what I did to select_idle_sibling() in a nutshell, converted
> > the problematic large L3 packages into multiple ~core2duo pairs, mo
On Mon, 2012-09-24 at 20:32 -0700, Linus Torvalds wrote:
> On Mon, Sep 24, 2012 at 8:20 PM, Mike Galbraith wrote:
> >
> > Yes. Cross wiring traverse _start_ points should eliminate (well, damp)
> > bounce as well without killing the 1:N latency/preempt benefits of
On Mon, 2012-09-24 at 16:00 +0100, Mel Gorman wrote:
> On Fri, Sep 14, 2012 at 02:42:44PM -0700, Linus Torvalds wrote:
> > On Fri, Sep 14, 2012 at 2:27 PM, Borislav Petkov wrote:
> > >
> > > as Nikolay says below, we have a regression in 3.6 with pgbench's
> > > benchmark in postgresql.
> > >
> >
On Tue, 2012-09-25 at 10:21 -0700, Linus Torvalds wrote:
> On Tue, Sep 25, 2012 at 10:00 AM, Borislav Petkov wrote:
> >
> > 3.6-rc6+tip/auto-latest-kill select_idle_sibling()
>
> Is this literally just removing it entirely? Because apart from the
> latency spike at 4 procs (and the latency numbe
On Tue, 2012-09-25 at 20:42 +0200, Borislav Petkov wrote:
> Right, so why did we need it all, in the first place? There has to be
> some reason for it.
Easy. Take two communicating tasks. Is an affine wakeup a good idea?
It depends on how much execution overlap there is. Wake affine when
there
On Tue, 2012-09-25 at 19:22 -0700, Linus Torvalds wrote:
> On Tue, Sep 25, 2012 at 7:00 PM, Mike Galbraith wrote:
> >
> > Yes. On AMD, the best thing you can do for fast switchers AFAIKT is
> > turn it off. Different story on Intel.
>
> I doubt it's a
On Fri, 2012-10-26 at 10:42 +0800, Qiang Gao wrote:
> On Thu, Oct 25, 2012 at 5:57 PM, Michal Hocko wrote:
> > On Wed 24-10-12 11:44:17, Qiang Gao wrote:
> >> On Wed, Oct 24, 2012 at 1:43 AM, Balbir Singh
> >> wrote:
> >> > On Tue, Oct 23, 2012 at 3:45 PM, Michal Hocko wrote:
> >> >> On Tue 23
On Fri, 2012-10-26 at 10:03 -0700, Mike Galbraith wrote:
> The bug is in the patch that used sched_setscheduler_nocheck(). Plain
> sched_setscheduler() would have replied -EGOAWAY.
sched_setscheduler_nocheck() should say go away too methinks. This
isn't about permissions, it's
On Sat, 2012-10-20 at 08:38 -0400, Mike Galbraith wrote:
> So what I would do is either let the user decide once at boot, in which
> case if off, creating groups would be stupid), or, just rip autogroup
> completely out, since systemd is taking over the known universe anyway.
I'm
On Fri, 2012-10-26 at 13:29 -0700, Mike Galbraith wrote:
> On Sat, 2012-10-20 at 08:38 -0400, Mike Galbraith wrote:
>
> > So what I would do is either let the user decide once at boot, in which
> > case if off, creating groups would be stupid), or, just rip autogroup
> &g
On Sun, 2012-10-28 at 11:25 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > No knobs, no glitz, nada, just a cute little thing folks can turn
> > on if they don't want to muck about with cgroups and/or systemd.
>
> Please also keep the Kconfig switc
On Sun, 2012-10-28 at 14:19 +0100, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Sun, 2012-10-28 at 11:25 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith wrote:
> > >
> >
> > > > No knobs, no glitz, nada, just a cute little thing folk
On Sun, 2012-10-28 at 15:05 +0100, Ingo Molnar wrote:
> I'd also suggest to still expose the state of autosched in
> /proc/sys, read-only, so that its status can be checked.
(Aw poo, less pretty minus signs;)
Ok, will do.
-Mike