On Fri, 2009-03-20 at 10:43 +0100, Miklos Szeredi wrote:
Ingo,
I tested this one, and I think it makes sense in any case as an
optimization. It should also be good for -stable kernels.
Does it look OK?
The idea is good, but there is a risk of preemption latencies here. Some
code paths
On Fri, 2010-10-15 at 10:02 +0300, Pekka Enberg wrote:
On Fri, Oct 15, 2010 at 2:44 AM, richard -rw- weinberger
richard.weinber...@gmail.com wrote:
On Thu, Oct 14, 2010 at 9:50 PM, Arjan van de Ven
ar...@linux.intel.com wrote:
On 10/14/2010 11:27 AM, richard -rw- weinberger wrote:
Hi
On Mon, 2011-01-17 at 11:26 +0000, Russell King - ARM Linux wrote:
On Mon, Jan 17, 2011 at 12:07:13PM +0100, Peter Zijlstra wrote:
diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 42aa078..c4a570b 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
On Mon, 2011-01-17 at 12:31 +0100, Peter Zijlstra wrote:
On Mon, 2011-01-17 at 11:26 +0000, Russell King - ARM Linux wrote:
Maybe remove the comment "everything is done on the interrupt return path",
as with this function call that is no longer the case.
(Removed am33, m32r-ka, m32r, arm
when low on memory.
Signed-off-by: Peter Zijlstra a.p.zijls...@chello.nl
---
 arch/alpha/kernel/smp.c         |    1 +
 arch/arm/kernel/smp.c           |    1 +
 arch/blackfin/mach-common/smp.c |    3 ++-
 arch/cris/arch-v32/kernel/smp.c |   13 -
 arch/ia64/kernel/irq_ia64.c     |    2
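To make the shape of the series concrete: each architecture's reschedule-IPI handler gains an explicit scheduler_ipi() callback instead of relying on the interrupt return path. A hedged sketch of the alpha hunk, reconstructed from memory of that series rather than quoted verbatim:

--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ handle_ipi():
 	case IPI_RESCHEDULE:
-		/* Reschedule callback.  Everything to be done
-		   is done by the interrupt return path.  */
+		scheduler_ipi();
 		break;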
On Mon, 2011-01-17 at 14:49 -0500, Mike Frysinger wrote:
On Mon, Jan 17, 2011 at 06:07, Peter Zijlstra wrote:
Also, while reading through all this, I noticed the blackfin SMP code
looks to be broken: it simply discards any IPI when low on memory.
Not really. See the changelog of commit
On Mon, 2011-02-07 at 10:26 +1100, Benjamin Herrenschmidt wrote:
You missed:
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 9813605..467d122 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -98,6 +98,7 @@ void smp_message_recv(int msg)
On Wed, 2011-02-09 at 17:14 +1100, Benjamin Herrenschmidt wrote:
On Mon, 2011-02-07 at 14:54 +0100, Peter Zijlstra wrote:
On Mon, 2011-02-07 at 10:26 +1100, Benjamin Herrenschmidt wrote:
You missed:
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 9813605
t...@linutronix.de
Date: Wed Jan 26 21:32:01 2011 +0100
rwsem: Remove redundant asmregparm annotation
Peter Zijlstra pointed out that the only user of asmregparm (x86) is
already compiling the kernel with -mregparm=3. So the annotation of
the rwsem functions
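For reference, asmregparm was an x86-32-only annotation forcing the register calling convention on individual functions; once the whole kernel is built with -mregparm=3 it is redundant. A minimal sketch of the mechanics (the macro matches arch/x86/include/asm/linkage.h of that era; the rwsem declaration is an abbreviated example):

/* With gcc -mregparm=3 on 32-bit x86, the first three integer
 * arguments are passed in EAX, EDX and ECX rather than on the stack.
 * asmregparm forced the same convention per function: */
#define asmregparm __attribute__((regparm(3)))

/* Example of how the rwsem slow paths carried the annotation: */
asmregparm struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);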
On Sat, 2012-03-24 at 14:27 +0400, Anton Vorontsov wrote:
+void clear_tasks_mm_cpumask(int cpu)
+{
+	struct task_struct *p;
+
+	read_lock(&tasklist_lock);
+	for_each_process(p) {
+		struct task_struct *t;
+
+		/* Main thread might exit, but other threads may still
+		 * have a valid mm. Find one. */
+		t = find_lock_task_mm(p);
+		if (!t)
+			continue;
+		cpumask_clear_cpu(cpu, mm_cpumask(t->mm));
+		task_unlock(t);
+	}
+	read_unlock(&tasklist_lock);
+}
:17: warning: context imbalance in 'task_in_mem_cgroup' - unexpected unlock
P.S. I know Peter Zijlstra detests the __cond_lock() stuff, but until
we have anything better in sparse, let's use it. This particular
patch helped me to detect one bug that I myself made during
task-mm
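For readers unfamiliar with __cond_lock(): under sparse (__CHECKER__) it tells the checker a lock is held only when a condition is true, which is how trylock-style and find-and-lock helpers avoid false context-imbalance warnings. A sketch of the idiom; the macro and the spin_trylock use are as in the kernel of that era, while applying the same idea to find_lock_task_mm() is what this patch does in some equivalent form:

/* include/linux/compiler.h: */
#ifdef __CHECKER__
# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
#else
# define __cond_lock(x, c)	(c)
#endif

/* Canonical use -- sparse learns the lock is held only on a
 * non-zero return: */
#define raw_spin_trylock(lock)	__cond_lock(lock, _raw_spin_trylock(lock))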
On Sat, 2012-03-24 at 14:30 +0400, Anton Vorontsov wrote:
Traversing the tasks requires holding tasklist_lock, otherwise it
is unsafe.
No it doesn't; it requires either tasklist_lock or rcu_read_lock().
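Concretely, the two safe forms of the walk look like this; a minimal sketch with the loop bodies elided:

	struct task_struct *p;

	/* Option 1: classic read-side tasklist_lock. */
	read_lock(&tasklist_lock);
	for_each_process(p) {
		/* ... p cannot be unhashed while the lock is held ... */
	}
	read_unlock(&tasklist_lock);

	/* Option 2: RCU read-side critical section; the task list is
	 * RCU-protected, though tasks seen here may be exiting
	 * concurrently. */
	rcu_read_lock();
	for_each_process(p) {
		/* ... only RCU-safe accesses here ... */
	}
	rcu_read_unlock();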
On Sat, 2012-03-24 at 20:21 +0400, Anton Vorontsov wrote:
Just wondering how you see the feature being implemented?
Something like this?
#define __ret_cond_locked(l, c) __attribute__((ret_cond_locked(l, c)))
#define __ret_value __attribute__((ret_value))
#define
On Sun, 2012-03-25 at 19:42 +0200, Oleg Nesterov wrote:
__cpu_disable() is called by __stop_machine(), we know that nobody
can preempt us and other CPUs can do nothing.
It would be very good not to rely on that, though; I would love to get
rid of the stop_machine() usage in cpu hotplug some day.
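For context, this is the dependency in question: __cpu_disable() runs inside a stop_machine() callback from _cpu_down(), so every online CPU sits in the stopper while it executes. A rough, simplified sketch of the call shape in that era's kernel/cpu.c (parameters and error handling trimmed):

/* Runs on the dying CPU while all other CPUs are held in the stopper. */
static int take_cpu_down(void *unused)
{
	int err = __cpu_disable();	/* arch hook: mark the CPU offline,
					 * migrate away its interrupts */
	if (err < 0)
		return err;
	/* ... CPU_DYING notifiers, park the idle task ... */
	return 0;
}

static int _cpu_down(unsigned int cpu)
{
	/* Every CPU enters the stopper; nothing else runs meanwhile,
	 * which is exactly the disturbance complained about below. */
	return __stop_machine(take_cpu_down, NULL, cpumask_of(cpu));
}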
On Mon, 2012-03-26 at 19:04 +0200, Oleg Nesterov wrote:
Interesting... Why? I mean, why do you dislike stop_machine() in
_cpu_down() ? Just curious.
It disturbs all cpus: the -rt people don't like that their FIFO tasks
don't get to run, and the trading people don't like their RDMA poll
loops to be
WARN_ON(cpu_online(cpu));
Ought to cure that worry, no? :-)
+	 * so it's not like new tasks will ever get this cpu set in
+	 * their mm mask. -- Peter Zijlstra
+	 * Thus, we may use rcu_read_lock() here, instead of grabbing
+	 * full-fledged tasklist_lock.
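Putting the thread's conclusions together (rcu_read_lock() instead of tasklist_lock, plus the WARN_ON suggested above), the helper ends up in roughly this shape; a sketch of my reading of the final patch, not a verbatim quote:

void clear_tasks_mm_cpumask(int cpu)
{
	struct task_struct *p;

	/*
	 * This function is called after the cpu is taken down and marked
	 * offline, so new tasks will never get this cpu set in their mm
	 * mask; rcu_read_lock() therefore suffices in place of the
	 * full-fledged tasklist_lock.
	 */
	WARN_ON(cpu_online(cpu));
	rcu_read_lock();
	for_each_process(p) {
		struct task_struct *t;

		/* The main thread might exit, but other threads may
		 * still have a valid mm. Find one. */
		t = find_lock_task_mm(p);
		if (!t)
			continue;
		cpumask_clear_cpu(cpu, mm_cpumask(t->mm));
		task_unlock(t);
	}
	rcu_read_unlock();
}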