Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
Code like spin_lock(lock); if (copy_to_user(...)) rc = ... spin_unlock(lock); really *should* generate warnings like it did before. And *only* code like spin_lock(lock); Is only code like this valid or also with the spin_lock() dropped? (e.g. the
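
For readers skimming the thread, this is the pattern under discussion, expanded into a minimal sketch (lock name, buffers and error handling are illustrative, not taken from any of the patches). copy_to_user() may fault and hence sleep, so calling it under a spinlock without pagefault_disable() is exactly the case that should trip might_fault()/might_sleep():

    #include <linux/spinlock.h>
    #include <linux/uaccess.h>
    #include <linux/errno.h>

    static DEFINE_SPINLOCK(example_lock);

    static int example_copy(void __user *buf, const void *data, size_t len)
    {
            int rc = 0;

            spin_lock(&example_lock);
            /*
             * copy_to_user() can fault and therefore sleep; doing this while
             * holding a spinlock is the case that *should* warn again.
             */
            if (copy_to_user(buf, data, len))
                    rc = -EFAULT;
            spin_unlock(&example_lock);

            return rc;
    }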

Re: [RESEND, V3] powerpc, xmon: Enable HW instruction breakpoint on POWER8

2014-11-27 Thread Anshuman Khandual
On 11/26/2014 01:55 PM, Michael Ellerman wrote: On Tue, 2014-25-11 at 10:08:48 UTC, Anshuman Khandual wrote: This patch enables support for hardware instruction breakpoints on POWER8 with the help of a new register CIABR (Completed Instruction Address Breakpoint Register). With this patch,

Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()

2014-11-27 Thread Greg Kurz
On Thu, 27 Nov 2014 10:39:23 +1100 Benjamin Herrenschmidt b...@kernel.crashing.org wrote: On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote: The first argument to vphn_unpack_associativity() is a const long *, but the parsing code expects __be64 values actually. This is inconsistent. We

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread Heiko Carstens
On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote: Code like spin_lock(lock); if (copy_to_user(...)) rc = ... spin_unlock(lock); really *should* generate warnings like it did before. And *only* code like spin_lock(lock); Is only

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote: Code like spin_lock(lock); if (copy_to_user(...)) rc = ... spin_unlock(lock); really *should* generate warnings like it did before. And *only* code like spin_lock(lock); Is only

[RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Madhavan Srinivasan
This patch creates the infrastructure to handle the CR-based local_* atomic operations. Local atomic operations are fast and highly reentrant per CPU counters. Used for percpu variable updates. Local atomic operations only guarantee variable modification atomicity wrt the CPU which owns the

[RFC PATCH 0/2] powerpc: CR based local atomic operation implementation

2014-11-27 Thread Madhavan Srinivasan
This patchset creates the infrastructure to handle the CR-based local_* atomic operations. Local atomic operations are fast and highly reentrant per CPU counters. Used for percpu variable updates. Local atomic operations only guarantee variable modification atomicity wrt the CPU which owns the

[RFC PATCH 2/2]powerpc: rewrite local_* to use CR5 flag

2014-11-27 Thread Madhavan Srinivasan
This patch rewrites the current local_* functions as CR5-based ones. The base flow for each function is { set cr5(eq); load .. store; clear cr5(eq) }. The above set of instructions is followed by a fixup section which points to the entry of the function in case of
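
A hedged C-level pseudocode rendering of the quoted flow (the real implementation is assembly; the helper names below are made up for illustration, and the fixup section itself is only described in the comment — the series also has to ensure the compiler leaves CR5 alone, a detail omitted here):

    #include <asm/local.h>

    /* Hypothetical stand-ins for crset/crclr on CR5's eq bit (CR bit 22). */
    static inline void set_cr5_eq(void)   { asm volatile("crset 22" : : : "cr5"); }
    static inline void clear_cr5_eq(void) { asm volatile("crclr 22" : : : "cr5"); }

    static inline void local_inc_sketch(local_t *l)
    {
            set_cr5_eq();            /* flag: a local_* sequence is in progress */
            l->a.counter += 1;       /* plain load / modify / store             */
            clear_cr5_eq();          /* end of the critical window              */
            /*
             * If an interrupt hits while CR5[eq] is set, the interrupt return
             * path's fixup section sends execution back to the start of the
             * function, so the whole sequence is simply retried.
             */
    }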

RE: [RFC PATCH 0/2] powerpc: CR based local atomic operation implementation

2014-11-27 Thread David Laight
From: Madhavan Srinivasan This patchset creates the infrastructure to handle the CR-based local_* atomic operations. Local atomic operations are fast and highly reentrant per CPU counters. Used for percpu variable updates. Local atomic operations only guarantee variable modification atomicity

Re: [RFC PATCH v1 1/1] powerpc/85xx: Add support for Emerson/Artesyn MVME2500.

2014-11-27 Thread Alessio Igor Bogani
Scott, On 26 November 2014 at 23:21, Scott Wood scottw...@freescale.com wrote: On Wed, 2014-11-26 at 15:17 +0100, Alessio Igor Bogani wrote: + board_soc: soc: soc@ffe0 { There's no need for two labels on the same node. I'll remove board_soc label. [...] +

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread Thomas Gleixner
On Thu, 27 Nov 2014, Heiko Carstens wrote: On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote: Code like spin_lock(lock); if (copy_to_user(...)) rc = ... spin_unlock(lock); really *should* generate warnings like it did before. And *only*

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
OTOH, there is no reason why we need to disable preemption over that page_fault_disabled() region. There are code paths which really do not require disabling preemption for that. We have that separated in preempt-rt for obvious reasons and IIRC Peter Zijlstra tried to disentangle it in

RE: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Laight
From: David Hildenbrand ... Although it might not be optimal, keeping a separate counter for pagefault_disable() as part of the preemption counter seems to be the only doable thing right now. I am not sure if a completely separate counter is even possible, increasing the size of

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
From: David Hildenbrand ... Although it might not be optimal, keeping a separate counter for pagefault_disable() as part of the preemption counter seems to be the only doable thing right now. I am not sure if a completely separate counter is even possible, increasing the size of

Re: [PATCH] powerpc: 32 bit getcpu VDSO function uses 64 bit instructions

2014-11-27 Thread Segher Boessenkool
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote: On Thu, 2014-11-27 at 09:38 +1100, Michael Ellerman wrote: On Thu, 2014-11-27 at 08:11 +1100, Anton Blanchard wrote: I used some 64 bit instructions when adding the 32 bit getcpu VDSO function. Fix it. Ouch. The symptom

RE: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Laight
From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com] From: David Hildenbrand ... Although it might not be optimal, keeping a separate counter for pagefault_disable() as part of the preemption counter seems to be the only doable thing right now. I am not sure if a completely

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com] From: David Hildenbrand ... Although it might not be optimal, keeping a separate counter for pagefault_disable() as part of the preemption counter seems to be the only doable thing right now. I am not sure if a

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Segher Boessenkool
On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote: Here is the design of this patch. Since local_* operations only need to be atomic with respect to interrupts (IIUC), the patch uses one of the Condition Register (CR) fields as a flag variable. When entering the local_*, a specific bit

[PATCH RFC 1/2] preempt: track pagefault_disable() calls in the preempt counter

2014-11-27 Thread David Hildenbrand
Let's track the levels of pagefault_disable() calls in a separate part of the preempt counter. Also update the regular preempt counter to keep the existing pagefault infrastructure working (this can be disentangled and cleaned up later). This change is needed to detect whether we are running in a simple
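
A minimal sketch of the idea, assuming a dedicated bit range in the preempt counter for pagefault_disable() nesting (the shift, width and helper names are invented for illustration and ignore the existing softirq/hardirq bit assignments; the actual patch may lay things out differently):

    #include <linux/preempt.h>
    #include <linux/types.h>

    /* Hypothetical layout: bits 16-23 count pagefault_disable() nesting. */
    #define PAGEFAULT_SHIFT   16
    #define PAGEFAULT_OFFSET  (1UL << PAGEFAULT_SHIFT)
    #define PAGEFAULT_MASK    (0xffUL << PAGEFAULT_SHIFT)

    static inline void pagefault_disable_sketch(void)
    {
            preempt_count_add(PAGEFAULT_OFFSET);
            barrier();      /* pagefaults must be disabled from this point on */
    }

    static inline void pagefault_enable_sketch(void)
    {
            barrier();
            preempt_count_sub(PAGEFAULT_OFFSET);
    }

    static inline bool pagefault_disabled_sketch(void)
    {
            /* True while at least one pagefault_disable() level is active. */
            return preempt_count() & PAGEFAULT_MASK;
    }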

[PATCH RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
Simple prototype to enable might_sleep() checks in might_fault(), avoiding false positives for scenarios involving explicit pagefault_disable(). So this should work: spin_lock(lock); /* also if left away */ pagefault_disable() rc = copy_to_user(...)
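
Laid out, the pattern that should stay free of false positives looks roughly like this (a sketch reusing the illustrative names from the earlier spin_lock example in this digest). With pagefaults explicitly disabled, copy_to_user() does not sleep; it simply returns the number of bytes it could not copy, so might_fault() must not warn here even though a spinlock is held:

    spin_lock(&example_lock);        /* also valid with the lock left away */
    pagefault_disable();
    /* Cannot sleep here: a fault makes the copy fail instead of paging in. */
    rc = copy_to_user(buf, data, len);
    pagefault_enable();
    spin_unlock(&example_lock);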

[PATCH RFC 2/2] mm, sched: trigger might_sleep() in might_fault() when pagefaults are disabled

2014-11-27 Thread David Hildenbrand
Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks for all user access code (that uses might_fault()). The reason was to avoid false "sleeping while atomic" warnings in the following scenario: pagefault_disable(); rc = copy_to_user(...);

Re: [PATCH RFC 2/2] mm, sched: trigger might_sleep() in might_fault() when pagefaults are disabled

2014-11-27 Thread Michael S. Tsirkin
On Thu, Nov 27, 2014 at 06:10:17PM +0100, David Hildenbrand wrote: Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks for all user access code (that uses might_fault()). The reason was to avoid false "sleeping while atomic" warnings in the following scenario:

Re: [PATCH RFC 2/2] mm, sched: trigger might_sleep() in might_fault() when pagefaults are disabled

2014-11-27 Thread Michael S. Tsirkin
On Thu, Nov 27, 2014 at 07:24:49PM +0200, Michael S. Tsirkin wrote: On Thu, Nov 27, 2014 at 06:10:17PM +0100, David Hildenbrand wrote: Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks for all user access code (that uses might_fault()). The reason was to

Re: [PATCH] powerpc: 32 bit getcpu VDSO function uses 64 bit instructions

2014-11-27 Thread Peter Bergner
On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote: On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote: Nope, you don't get a SIGILL when executing 64-bit instructions in 32-bit mode, so it'll happily just execute the instruction, doing a full 64-bit compare. I'm
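
A userspace illustration of the failure mode being described, not the VDSO code itself (values are made up): a compare that considers all 64 bits of a register can branch the wrong way when only the low 32 bits are meaningful, which is what happens when a 64-bit compare instruction sneaks into 32-bit code.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Pretend the high word holds stale garbage and the real value is 5. */
            uint64_t reg = 0xdeadbeef00000005ULL;

            /* "64-bit compare": sees the garbage and misjudges the value. */
            printf("64-bit view <= 10? %s\n", reg <= 10 ? "yes" : "no");

            /* "32-bit compare": only looks at the architecturally valid low word. */
            printf("32-bit view <= 10? %s\n", (uint32_t)reg <= 10 ? "yes" : "no");

            return 0;
    }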

Re: [PATCH RFC 2/2] mm, sched: trigger might_sleep() in might_fault() when pagefaults are disabled

2014-11-27 Thread David Hildenbrand
- __might_sleep(__FILE__, __LINE__, 0); + if (unlikely(!pagefault_disabled())) + __might_sleep(__FILE__, __LINE__, 0); Same here: so maybe make might_fault a wrapper around __might_fault as well. Yes, I also noticed that. It was part of the original code. For now
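
Putting the quoted diff fragment back together, the resulting check would look roughly like this (a reconstruction for readability, not the literal patch):

    static inline void might_fault_sketch(void)
    {
            /*
             * Only complain about a potential sleep when pagefaults are not
             * explicitly disabled; under pagefault_disable() the user access
             * fails instead of sleeping.
             */
            if (unlikely(!pagefault_disabled()))
                    __might_sleep(__FILE__, __LINE__, 0);
    }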

Re: [PATCH RFC 2/2] mm, sched: trigger might_sleep() in might_fault() when pagefaults are disabled

2014-11-27 Thread Michael S. Tsirkin
On Thu, Nov 27, 2014 at 07:08:42PM +0100, David Hildenbrand wrote: - - __might_sleep(__FILE__, __LINE__, 0); + if (unlikely(!pagefault_disabled())) + __might_sleep(__FILE__, __LINE__, 0); Same here: so maybe make might_fault a wrapper around

Re: [PATCH] powerpc: 32 bit getcpu VDSO function uses 64 bit instructions

2014-11-27 Thread Andreas Schwab
Segher Boessenkool seg...@kernel.crashing.org writes: On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote: On Thu, 2014-11-27 at 09:38 +1100, Michael Ellerman wrote: On Thu, 2014-11-27 at 08:11 +1100, Anton Blanchard wrote: I used some 64 bit instructions when adding the 32 bit

Re: [PATCH] powerpc: 32 bit getcpu VDSO function uses 64 bit instructions

2014-11-27 Thread Segher Boessenkool
On Thu, Nov 27, 2014 at 11:41:40AM -0600, Peter Bergner wrote: On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote: On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote: Nope, you don't get a SIGILL when executing 64-bit instructions in 32-bit mode, so it'll happily just

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread Thomas Gleixner
On Thu, 27 Nov 2014, David Hildenbrand wrote: OTOH, there is no reason why we need to disable preemption over that page_fault_disabled() region. There are code paths which really do not require disabling preemption for that. We have that separated in preempt-rt for obvious reasons and

Re: [PATCH v2 3/4] powernv: cpuidle: Redesign idle states management

2014-11-27 Thread Paul Mackerras
On Tue, Nov 25, 2014 at 04:47:58PM +0530, Shreyas B. Prabhu wrote: [snip] +2: + /* Sleep or winkle */ + li r7,1 + mfspr r8,SPRN_PIR + /* + * The last 3 bits of PIR represents the thread id of a cpu + * in power8. This will need adjusting for power7. +
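
For reference, the quoted comment amounts to this (a C sketch; the code in the patch is assembly, and as the comment says the POWER7 case needs different handling):

    #include <asm/reg.h>

    static inline unsigned int power8_thread_id(void)
    {
            /* On POWER8, the low 3 bits of PIR identify the thread within the core. */
            return mfspr(SPRN_PIR) & 0x7;
    }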

Re: powerpc/powernv: Add debugfs file to grab opalv3 trace data

2014-11-27 Thread Michael Ellerman
On Wed, 2014-26-11 at 04:10:04 UTC, Benjamin Herrenschmidt wrote: This adds files in debugfs that can be used to retrieve the OPALv3 firmware live binary traces which can then be parsed using a userspace tool. Mostly from Rusty with some updates by myself (BenH) Signed-off-by: Rusty

Re: [RESEND, V3] powerpc, xmon: Enable HW instruction breakpoint on POWER8

2014-11-27 Thread Michael Ellerman
On Thu, 2014-11-27 at 13:46 +0530, Anshuman Khandual wrote: On 11/26/2014 01:55 PM, Michael Ellerman wrote: Something like this, untested: Yeah, it is working on LPAR and on bare metal as well. The new patch will use some of your suggested code, so can I add your

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Benjamin Herrenschmidt
On Thu, 2014-11-27 at 17:48 +0530, Madhavan Srinivasan wrote: This patch creates the infrastructure to handle the CR-based local_* atomic operations. Local atomic operations are fast and highly reentrant per CPU counters. Used for percpu variable updates. Local atomic operations only

Re: powerpc/powernv: Add debugfs file to grab opalv3 trace data

2014-11-27 Thread Benjamin Herrenschmidt
On Fri, 2014-11-28 at 11:11 +1100, Michael Ellerman wrote: On Wed, 2014-26-11 at 04:10:04 UTC, Benjamin Herrenschmidt wrote: This adds files in debugfs that can be used to retrieve the OPALv3 firmware live binary traces which can then be parsed using a userspace tool. Mostly from Rusty

Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()

2014-11-27 Thread Benjamin Herrenschmidt
On Thu, 2014-11-27 at 10:28 +0100, Greg Kurz wrote: On Thu, 27 Nov 2014 10:39:23 +1100 Benjamin Herrenschmidt b...@kernel.crashing.org wrote: On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote: The first argument to vphn_unpack_associativity() is a const long *, but the parsing

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Benjamin Herrenschmidt
On Thu, 2014-11-27 at 10:56 -0600, Segher Boessenkool wrote: On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote: Here is the design of this patch. Since local_* operations only need to be atomic with respect to interrupts (IIUC), the patch uses one of the Condition Register (CR)

Re: [PATCH] powerpc: 32 bit getcpu VDSO function uses 64 bit instructions

2014-11-27 Thread Benjamin Herrenschmidt
On Thu, 2014-11-27 at 14:50 -0600, Segher Boessenkool wrote: On Thu, Nov 27, 2014 at 11:41:40AM -0600, Peter Bergner wrote: On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote: On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote: Nope, you don't get a SIGILL when

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Madhavan Srinivasan
On Thursday 27 November 2014 10:26 PM, Segher Boessenkool wrote: On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote: Here is the design of this patch. Since local_* operations only need to be atomic with respect to interrupts (IIUC), the patch uses one of the Condition Register (CR)

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Madhavan Srinivasan
On Friday 28 November 2014 07:28 AM, Benjamin Herrenschmidt wrote: On Thu, 2014-11-27 at 10:56 -0600, Segher Boessenkool wrote: On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote: Here is the design of this patch. Since local_* operations only need to be atomic to

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Madhavan Srinivasan
On Friday 28 November 2014 06:26 AM, Benjamin Herrenschmidt wrote: On Thu, 2014-11-27 at 17:48 +0530, Madhavan Srinivasan wrote: This patch creates the infrastructure to handle the CR-based local_* atomic operations. Local atomic operations are fast and highly reentrant per CPU counters.

Re: [RFC PATCH 1/2]powerpc: foundation code to handle CR5 for local_t

2014-11-27 Thread Benjamin Herrenschmidt
On Fri, 2014-11-28 at 08:45 +0530, Madhavan Srinivasan wrote: Can't we just unconditionally clear it, as long as we do that after we've saved it? In that case, it's just a matter for the fixup code to check the saved version rather than the actual CR. I use CR bit setting in the

[PATCH V4] powerpc, xmon: Enable HW instruction breakpoint on POWER8

2014-11-27 Thread Anshuman Khandual
This patch enables support for a hardware instruction breakpoint in xmon on the POWER8 platform with the help of a new register called the CIABR (Completed Instruction Address Breakpoint Register). With this patch, a single hardware instruction breakpoint can be added and cleared during any active xmon

Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

2014-11-27 Thread David Hildenbrand
On Thu, 27 Nov 2014, David Hildenbrand wrote: OTOH, there is no reason why we need to disable preemption over that page_fault_disabled() region. There are code paths which really do not require disabling preemption for that. We have that separated in preempt-rt for obvious