This patch allows RCU usage in do_notify_resume, e.g. signal handling.
It corresponds to
[PATCH] x86: Exit RCU extended QS on notify resume
commit edf55fda35c7dc7f2d9241c3abaddaf759b457c6
Signed-off-by: Li Zhong
---
arch/powerpc/kernel/signal.c |5 +
1 file changed, 5 insertions
for userspace RCU extended QS
commit 6ba3c97a38803883c2eee489505796cb0a727122
Signed-off-by: Li Zhong
---
arch/powerpc/include/asm/context_tracking.h | 20 +++
arch/powerpc/kernel/exceptions-64s.S|4 +-
arch/powerpc/kernel/traps.c | 79
TIF_NOHZ is added to _TIF_SYSCALL_T_OR_A, so it is
better for it to be in the same 16 bits as the others in the group, so that in
the asm code, andi. with this group can work.
Signed-off-by: Li Zhong
---
arch/powerpc/include/asm/thread_info.h |7 +--
arch/powerpc/kernel/ptrace.c |5
These patches try to support context tracking for the Power arch, beginning with
64-bit pSeries. The code is ported from that of x86_64, and in each
patch, I listed the corresponding patch for x86.
Would you please help review and give your comments?
Thanks, Zhong
Li Zhong (5):
powerpc
Use macros in vpa calls.
Signed-off-by: Li Zhong
---
arch/powerpc/platforms/pseries/plpar_wrappers.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/plpar_wrappers.h
b/arch/powerpc/platforms/pseries/plpar_wrappers.h
index
Use local_paca directly in the macro SHARED_PROCESSOR, as all processors
have the same value for the field shared_proc, so we don't need to care
about raciness here.
Reported-by: Paul E. McKenney
Signed-off-by: Li Zhong
---
arch/powerpc/include/asm/spinlock.h |2 +-
1 files changed, 1 insertions(
On Thu, 2013-01-24 at 14:47 +1100, Benjamin Herrenschmidt wrote:
> On Thu, 2013-01-10 at 17:00 +0800, Li Zhong wrote:
> > Use local_paca directly in arch_spin_unlock_wait(), as all processors have
> > the
> > same value for the field shared_proc, so we don't need care
On Sat, 2013-01-12 at 12:43 -0800, Christian Kujau wrote:
> On Wed, 28 Nov 2012 at 16:41, Li Zhong wrote:
> > On Tue, 2012-11-27 at 19:22 -0800, Christian Kujau wrote:
> > > On Tue, 27 Nov 2012 at 19:06, Christian Kujau wrote:
> > > > the same thing[0] happened again
Use local_paca directly in arch_spin_unlock_wait(), as all processors have the
same value for the field shared_proc, so we don't need to care about raciness here.
Reported-by: Paul E. McKenney
Signed-off-by: Li Zhong
---
arch/powerpc/lib/locks.c |2 +-
1 file changed, 1 insertion(+), 1 del
On Thu, 2013-01-10 at 17:02 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2012-11-19 at 14:16 +0800, Li Zhong wrote:
> > This patch tries to disable preemption for using smp_processor_id() in
> > arch_spin_unlock_wait(),
> > to avoid following report:
>
> .../.
.}, at: []
> console_callback+0x20/0x194
> [ 97.803112]
> [ 97.803112] which lock already depends on the new lock.
>
> ...and on it goes. Please see the URL above for the whole dmesg and
> .config.
>
> @Li Zhong: I have applied your fix for the "MAX_STACK_TRACE_EN
http://patchwork.ozlabs.org/patch/202385/
And as it is very similar to commit 12660b17, I don't think it needs
to be repeated in the change log.
Thanks, Zhong
>
> Thanks.
>
> On 12/3/12, Li Zhong wrote:
> > This patch fixes MAX_STACK_TRACE_ENTRIES too low warning for ppc3
This patch fixes MAX_STACK_TRACE_ENTRIES too low warning for ppc32,
which is similar to commit 12660b17.
Reported-by: Christian Kujau
Signed-off-by: Li Zhong
Tested-by: Christian Kujau
---
arch/powerpc/kernel/entry_32.S |2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc
On Sat, 2012-12-01 at 12:32 -0800, Christian Kujau wrote:
> On Wed, 28 Nov 2012 at 16:41, Li Zhong wrote:
> > On Tue, 2012-11-27 at 19:22 -0800, Christian Kujau wrote:
> > > On Tue, 27 Nov 2012 at 19:06, Christian Kujau wrote:
> > > > the same thing[0] happened again
On Tue, 2012-11-27 at 19:22 -0800, Christian Kujau wrote:
> On Tue, 27 Nov 2012 at 19:06, Christian Kujau wrote:
> > the same thing[0] happened again in 3.7-rc7, after ~20h uptime:
>
> I found the following on patchwork, but this seems to deal with powerpc64
> only, while this PowerBook G4 of min
] [c911fd00] [c00b0e50] .task_work_run+0x80/0x160
[ 17.279482] [c911fdb0] [c0018744] .do_notify_resume+0x94/0xa0
[ 17.279495] [c911fe30] [c0009d1c]
.ret_from_except_lite+0x48/0x4c
Reported-by: Paul E. McKenney
Signed-off-by: Li Zhong
---
arch/powerpc/lib
value, which might form a
loop.
I'm not very sure whether it is good to clear the back chain to 0 here.
Signed-off-by: Li Zhong
---
arch/powerpc/kernel/entry_64.S |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S
b/arch/powerpc/kernel/entry_64.S
On Thu, 2012-10-18 at 10:54 +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2012-09-26 at 18:10 +0800, Li Zhong wrote:
>
> ../...
>
> Sorry got distracted, got back on this patch today:
>
> > > We might need to "sanitize" the enable state in the PACA before
On Mon, 2012-09-24 at 15:30 +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2012-09-24 at 11:56 +0800, Li Zhong wrote:
> > This patch tries to fix a WARNING in irq.c(below), when doing cpu
> > offline/online.
> >
> > The reason is that if the preferred offline
83d7f9ba ]---
Reported-by: Paul E. McKenney
Signed-off-by: Li Zhong
---
arch/powerpc/platforms/pseries/hotplug-cpu.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c
b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index 64c97d8.
be used in the
area marked by rcu_irq_enter()/rcu_irq_exit(), called in
irq_enter()/irq_exit() respectively.
Move them into the irq_enter()/irq_exit() area to avoid the report.
Signed-off-by: Li Zhong
---
arch/powerpc/kernel/irq.c |8
arch/powerpc/kernel/time.c |8
2
).
Thanks, Zhong
Signed-off-by: Li Zhong
---
arch/powerpc/kernel/irq.c |4 ++--
arch/powerpc/kernel/time.c |4 ++--
include/linux/tracepoint.h | 10 ++
3 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index
On Mon, 2012-07-09 at 09:01 -0500, Christoph Lameter wrote:
> > I was pointed by Glauber to the slab common code patches. I need some
> > more time to read the patches. Now I think the slab/slot changes in this
> > v3 are not needed, and can be ignored.
>
> That may take some kernel cycles. You ha
On Fri, 2012-07-06 at 08:56 -0500, Christoph Lameter wrote:
> I thought I posted this a couple of days ago. Would this not fix things
> without having to change all the allocators?
I was pointed by Glauber to the slab common code patches. I need some
more time to read the patches. Now I think the
On Fri, 2012-07-06 at 14:13 +0400, Glauber Costa wrote:
> On 07/05/2012 01:29 PM, Li Zhong wrote:
> > On Thu, 2012-07-05 at 12:23 +0400, Glauber Costa wrote:
> >> On 07/05/2012 05:41 AM, Li Zhong wrote:
> >>> On Wed, 2012-07-04 at 16:40 +0400, Glauber Costa wrote:
>
ifdef
CONFIG_SLUB is no longer needed.
Signed-off-by: Li Zhong
---
arch/powerpc/mm/init_64.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 620b7ac..bc7f462 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/power
lub.
for slab, the names of the size caches created before
slab_is_available() are not duplicated, and they are not checked in
kmem_cache_destroy(), as I think these caches won't be destroyed.
Signed-off-by: Li Zhong
---
mm/slab.c | 15 ++-
mm/slob.c | 17 ++---
m
On Thu, 2012-07-05 at 12:23 +0400, Glauber Costa wrote:
> On 07/05/2012 05:41 AM, Li Zhong wrote:
> > On Wed, 2012-07-04 at 16:40 +0400, Glauber Costa wrote:
> >> On 07/04/2012 01:00 PM, Li Zhong wrote:
> >>> On Tue, 2012-07-03 at 15:36 -0500, Christoph Lameter wrot
On Wed, 2012-07-04 at 16:40 +0400, Glauber Costa wrote:
> On 07/04/2012 01:00 PM, Li Zhong wrote:
> > On Tue, 2012-07-03 at 15:36 -0500, Christoph Lameter wrote:
> >> > Looking through the emails it seems that there is an issue with alias
> >> > strings.
> >
On Tue, 2012-07-03 at 15:36 -0500, Christoph Lameter wrote:
> Looking through the emails it seems that there is an issue with alias
> strings.
To be more precise, there seems to be no big issue currently. I just wanted to
make the following usage of kmem_cache_create() (SLUB) possible:
name = some s
ing after calling cache create.
v2: removed an unnecessary assignment in v1; some changes in change log,
added more details
Signed-off-by: Li Zhong
---
mm/slub.c |7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8c691fa..ed9f3c5 100644
--- a
On Mon, 2012-06-25 at 15:10 +0400, Glauber Costa wrote:
> On 06/25/2012 01:53 PM, Li Zhong wrote:
> > SLUB duplicates the cache name in kmem_cache_create(). However if the
> > cache could be merged to others during early booting, the name pointer
> > is saved in saved_alias
On Mon, 2012-06-25 at 18:54 +0800, Wanlong Gao wrote:
> On 06/25/2012 05:53 PM, Li Zhong wrote:
> > SLUB duplicates the cache name in kmem_cache_create(). However if the
> > cache could be merged to others during early booting, the name pointer
> > is saved in saved_alias
. In this case, the name could be safely kfreed
after calling kmem_cache_create() with patch 1.
Signed-off-by: Li Zhong
---
arch/powerpc/mm/init_64.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 620b7ac
saved_alias list, so
that the cache name could be safely kfreed after calling
kmem_cache_create(), if that name is kmalloced.
Signed-off-by: Li Zhong
---
mm/slub.c |6 ++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8c691fa..3dc8ed5 100644
--- a
function __init .call_prom().
This is often because .prom_query_opal lacks a __init
annotation or the annotation of .call_prom is wrong.
Signed-off-by: Li Zhong
---
arch/powerpc/kernel/prom_init.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel
I'm not sure whether it makes sense to add this dependency to avoid
CONFIG_NUMA && !CONFIG_SMP.
I want to do this because I saw some build errors on next-tree when
compiling with CONFIG_SMP disabled, and it seems they are caused by some
code under the CONFIG_NUMA #ifdefs.
Sign
On Fri, 2012-05-18 at 16:54 +0530, Deepthi Dharwar wrote:
> On 05/18/2012 08:24 AM, Li Zhong wrote:
>
> > On Thu, 2012-05-17 at 15:52 +0530, Deepthi Dharwar wrote:
> >> On 05/17/2012 09:58 AM, Benjamin Herrenschmidt wrote:
> >>
> >>> On Thu, 2012-05-17 a
On Thu, 2012-05-17 at 15:52 +0530, Deepthi Dharwar wrote:
> On 05/17/2012 09:58 AM, Benjamin Herrenschmidt wrote:
>
> > On Thu, 2012-05-17 at 12:01 +0800, Li Zhong wrote:
> >> This patch tries to fix following lockdep complaints:
> >
> > .../...
> >
of times this idle state has been entered
/sys/devices/system/cpu/cpu#/cpuidle/state#/time
the amount of time spent in this idle state
So I think we could just remove the function call doing the
disable/enable cycle:
Please correct me if I missed anything.
Reported-by: Paul E. McKenney
On Tue, 2012-01-03 at 11:54 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2011-12-19 at 10:06 +0800, Li Zhong wrote:
> > Unpaired calling of __trace_hcall_entry and __trace_hcall_exit could
> > cause incorrect preempt count. And it might happen as the global
> > variable hcal
e
the hcall_tracepoint_refcount locally, so __trace_hcall_entry and
__trace_hcall_exit will be called, or not called, in pairs by checking the
same value.
Reported-by: Paul E. McKenney
Signed-off-by: Li Zhong
Tested-by: Paul E. McKenney
---
arch/powerpc/platforms/pseries/hvCall.S | 20 +++---
.
So an error happens when only one of probe_hcall_entry and probe_hcall_exit
gets called during an hcall.
This patch tries to move the preempt count operations from
probe_hcall_entry and probe_hcall_exit to its callers.
Reported-by: Paul E. McKenney
Signed-off-by: Li Zhong
Tested-by: Paul E. McKenney