Code like
spin_lock(lock);
if (copy_to_user(...))
rc = ...
spin_unlock(lock);
really *should* generate warnings like it did before.
And *only* code like
spin_lock(lock);
Is only code like this valid, or also code with the spin_lock() dropped?
(e.g. the
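A minimal sketch of the two patterns being contrasted, with placeholder names
(example_lock, buf, src, len are not from any of the patches in this thread):

#include <linux/spinlock.h>
#include <linux/uaccess.h>

static DEFINE_SPINLOCK(example_lock);

/* Should warn: copy_to_user() may fault and sleep inside a spinlock. */
static unsigned long should_warn(void __user *buf, const void *src, size_t len)
{
	unsigned long rc;

	spin_lock(&example_lock);
	rc = copy_to_user(buf, src, len);
	spin_unlock(&example_lock);
	return rc;
}

/* Should not warn: with pagefaults disabled, copy_to_user() cannot sleep;
 * it returns the number of bytes it could not copy instead. */
static unsigned long should_not_warn(void __user *buf, const void *src, size_t len)
{
	unsigned long rc;

	pagefault_disable();
	rc = copy_to_user(buf, src, len);
	pagefault_enable();
	return rc;
}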
On 11/26/2014 01:55 PM, Michael Ellerman wrote:
On Tue, 2014-25-11 at 10:08:48 UTC, Anshuman Khandual wrote:
This patch enables support for hardware instruction breakpoints
on POWER8 with the help of a new register CIABR (Completed
Instruction Address Breakpoint Register). With this patch,
On Thu, 27 Nov 2014 10:39:23 +1100
Benjamin Herrenschmidt b...@kernel.crashing.org wrote:
On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
The first argument to vphn_unpack_associativity() is a const long *, but the
parsing code actually expects __be64 values. This is inconsistent. We
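A sketch of the direction such a fix can take, converting each packed element
explicitly (the helper name and parameters here are illustrative, not the
actual patch):

#include <linux/types.h>
#include <asm/byteorder.h>

static void unpack_be64_example(const __be64 *packed, long *out, int n)
{
	int i;

	for (i = 0; i < n; i++)
		out[i] = be64_to_cpu(packed[i]);	/* explicit endian conversion */
}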
On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote:
Code like
spin_lock(lock);
if (copy_to_user(...))
rc = ...
spin_unlock(lock);
really *should* generate warnings like it did before.
And *only* code like
spin_lock(lock);
Is only
This patch creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters, used for per-CPU
variable updates. Local atomic operations only guarantee
variable modification atomicity wrt the CPU which owns the
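For illustration, a typical local_t use via the generic <asm/local.h> API
(the counter name is made up):

#include <asm/local.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(local_t, example_events);

static void count_event(void)
{
	/* Atomic wrt this CPU, including its interrupts; cheaper than a
	 * shared atomic_long_t because no cross-CPU ordering is needed. */
	local_inc(this_cpu_ptr(&example_events));
}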
This patchset creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters, used for per-CPU
variable updates. Local atomic operations only guarantee
variable modification atomicity wrt the CPU which owns the
This patch re-writes the current local_* functions to CR5-based ones.
Base flow for each function is
{
set cr5(eq)
load
..
store
clear cr5(eq)
}
The above set of instructions is followed by a fixup section which points
to the entry of the function in case of
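A hand-written sketch of that flow for a local add, on a simplified local_t;
the real patch generates such sequences from assembler macros, and the fixup
section lets the interrupt-exit path restart the whole block when it finds
cr5(eq) set:

typedef struct { long counter; } local_t;	/* simplified for the sketch */

static inline void local_add_sketch(long i, local_t *l)
{
	long t;

	__asm__ __volatile__(
"	crset	4*cr5+eq\n"	/* flag: inside a local_* sequence */
"	ld	%0,0(%2)\n"	/* plain load, no larx */
"	add	%0,%0,%1\n"
"	std	%0,0(%2)\n"	/* plain store, no stcx. */
"	crclr	4*cr5+eq\n"	/* sequence done */
	: "=&r" (t)
	: "r" (i), "b" (&l->counter)
	: "cr5", "memory");
}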
From: Madhavan Srinivasan
This patchset creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters, used for per-CPU
variable updates. Local atomic operations only guarantee
variable modification atomicity
Scott,
On 26 November 2014 at 23:21, Scott Wood scottw...@freescale.com wrote:
On Wed, 2014-11-26 at 15:17 +0100, Alessio Igor Bogani wrote:
+ board_soc: soc: soc@ffe0 {
There's no need for two labels on the same node.
I'll remove board_soc label.
[...]
+
On Thu, 27 Nov 2014, Heiko Carstens wrote:
On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote:
Code like
spin_lock(lock);
if (copy_to_user(...))
rc = ...
spin_unlock(lock);
really *should* generate warnings like it did before.
And *only*
OTOH, there is no reason why we need to disable preemption over that
page_fault_disabled() region. There are code paths which really do
not require to disable preemption for that.
We have that separated in preempt-rt for obvious reasons and IIRC
Peter Zijlstra tried to disentangle it in
From: David Hildenbrand
...
Although it might not be optimal, keeping a separate counter for
pagefault_disable() as part of the preemption counter seems to be the only
doable thing right now. I am not sure if a completely separated counter is
even possible, increasing the size of
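One way to picture the idea (bit positions here are invented for
illustration, not taken from the eventual patch):

/* preempt_count with a hypothetical pagefault-disable depth field:
 * PREEMPT (8 bits) | SOFTIRQ (8 bits) | HARDIRQ/NMI | PAGEFAULT (8 bits) */
#define PAGEFAULT_SHIFT		24
#define PAGEFAULT_OFFSET	(1UL << PAGEFAULT_SHIFT)
#define PAGEFAULT_MASK		(0xffUL << PAGEFAULT_SHIFT)

static inline void pagefault_disable(void)
{
	preempt_count_add(PAGEFAULT_OFFSET);
	barrier();	/* order against the user access that follows */
}

static inline void pagefault_enable(void)
{
	barrier();
	preempt_count_sub(PAGEFAULT_OFFSET);
}

static inline bool pagefault_disabled(void)
{
	return preempt_count() & PAGEFAULT_MASK;
}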
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
On Thu, 2014-11-27 at 09:38 +1100, Michael Ellerman wrote:
On Thu, 2014-11-27 at 08:11 +1100, Anton Blanchard wrote:
I used some 64 bit instructions when adding the 32 bit getcpu VDSO
function. Fix it.
Ouch. The symptom
From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com]
From: David Hildenbrand
...
Although it might not be optimal, keeping a separate counter for
pagefault_disable() as part of the preemption counter seems to be the only
doable thing right now. I am not sure if a completely
From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com]
From: David Hildenbrand
...
Although it might not be optimal, keeping a separate counter for
pagefault_disable() as part of the preemption counter seems to be the only
doable thing right now. I am not sure if a
On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
Here is the design of this patch. Since local_* operations
only need to be atomic with respect to interrupts (IIUC), the patch uses
one of the Condition Register (CR) fields as a flag variable. When
entering the local_*, a specific bit
Let's track the levels of pagefault_disable() calls in a separate part of the
preempt counter. Also update the regular preempt counter to keep the existing
pagefault infrastructure working (can be demangled and cleaned up later).
This change is needed to detect whether we are running in a simple
Simple prototype to enable might_sleep() checks in might_fault(), avoiding false
positives for scenarios involving explicit pagefault_disable().
So this should work:
spin_lock(lock); /* also works if this is left out */
pagefault_disable()
rc = copy_to_user(...)
Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks
for all user access code (that uses might_fault()).
The reason was to suppress false sleep-in-atomic warnings in the following
scenario:
pagefault_disable();
rc = copy_to_user(...);
On Thu, Nov 27, 2014 at 06:10:17PM +0100, David Hildenbrand wrote:
Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks
for all user access code (that uses might_fault()).
The reason was to suppress false sleep-in-atomic warnings in the following
scenario:
On Thu, Nov 27, 2014 at 07:24:49PM +0200, Michael S. Tsirkin wrote:
On Thu, Nov 27, 2014 at 06:10:17PM +0100, David Hildenbrand wrote:
Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks
for all user access code (that uses might_fault()).
The reason was to
On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote:
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
Nope, you don't get a SIGILL when executing 64-bit instructions in
32-bit mode, so it'll happily just execute the instruction, doing
a full 64-bit compare. I'm
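A rough userspace analogy of that failure mode (the values are hypothetical,
this is not the VDSO code): a doubleword compare sees stale upper-half bits
that word-size code never cleared.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t reg = 0xdeadbeef00000005ull;	/* stale top half, real value 5 */

	/* cmpdi-style 64-bit compare: the stale bits make the value "negative" */
	printf("64-bit view: %s\n", (int64_t)reg < 0 ? "negative" : "non-negative");

	/* cmpwi-style 32-bit compare: only the low word is examined */
	printf("32-bit view: %s\n", (int32_t)reg < 0 ? "negative" : "non-negative");

	return 0;
}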
-
- __might_sleep(__FILE__, __LINE__, 0);
+ if (unlikely(!pagefault_disabled()))
+ __might_sleep(__FILE__, __LINE__, 0);
Same here: so maybe make might_fault a wrapper
around __might_fault as well.
Yes, I also noticed that. It was part of the original code.
For now
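Putting those two remarks together, the wrapper could look roughly like this
(assuming a pagefault_disabled() helper as discussed elsewhere in the series;
a sketch of the shape, not a final patch):

static inline void might_fault(void)
{
	/* Pagefaults disabled: the access cannot sleep, so don't warn. */
	if (unlikely(pagefault_disabled()))
		return;
	__might_sleep(__FILE__, __LINE__, 0);
}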
On Thu, Nov 27, 2014 at 07:08:42PM +0100, David Hildenbrand wrote:
-
- __might_sleep(__FILE__, __LINE__, 0);
+ if (unlikely(!pagefault_disabled()))
+ __might_sleep(__FILE__, __LINE__, 0);
Same here: so maybe make might_fault a wrapper
around
Segher Boessenkool seg...@kernel.crashing.org writes:
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
On Thu, 2014-11-27 at 09:38 +1100, Michael Ellerman wrote:
On Thu, 2014-11-27 at 08:11 +1100, Anton Blanchard wrote:
I used some 64 bit instructions when adding the 32 bit
On Thu, Nov 27, 2014 at 11:41:40AM -0600, Peter Bergner wrote:
On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote:
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
Nope, you don't get a SIGILL when executing 64-bit instructions in
32-bit mode, so it'll happily just
On Thu, 27 Nov 2014, David Hildenbrand wrote:
OTOH, there is no reason why we need to disable preemption over that
page_fault_disabled() region. There are code paths which really do
not require to disable preemption for that.
We have that separated in preempt-rt for obvious reasons and
On Tue, Nov 25, 2014 at 04:47:58PM +0530, Shreyas B. Prabhu wrote:
[snip]
+2:
+ /* Sleep or winkle */
+ li r7,1
+ mfspr r8,SPRN_PIR
+ /*
+ * The last 3 bits of PIR represent the thread id of a cpu
+ * in power8. This will need adjusting for power7.
+
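Written out in C, the extraction that comment describes (the helper name is
illustrative):

static inline unsigned int power8_thread_id(unsigned long pir)
{
	return pir & 0x7;	/* POWER8: 8 threads per core, id in PIR's low 3 bits */
}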
On Wed, 2014-26-11 at 04:10:04 UTC, Benjamin Herrenschmidt wrote:
This adds files in debugfs that can be used to retrieve the
OPALv3 firmware live binary traces which can then be parsed
using a userspace tool.
Mostly from Rusty with some updates by myself (BenH)
Signed-off-by: Rusty
On Thu, 2014-11-27 at 13:46 +0530, Anshuman Khandual wrote:
On 11/26/2014 01:55 PM, Michael Ellerman wrote:
Something like this, untested:
Yeah, it is working on LPAR and also on the bare metal platform. The new patch
will use some of your suggested code, so can I add your
On Thu, 2014-11-27 at 17:48 +0530, Madhavan Srinivasan wrote:
This patch creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters, used for per-CPU
variable updates. Local atomic operations only
On Fri, 2014-11-28 at 11:11 +1100, Michael Ellerman wrote:
On Wed, 2014-26-11 at 04:10:04 UTC, Benjamin Herrenschmidt wrote:
This adds files in debugfs that can be used to retrieve the
OPALv3 firmware live binary traces which can then be parsed
using a userspace tool.
Mostly from Rusty
On Thu, 2014-11-27 at 10:28 +0100, Greg Kurz wrote:
On Thu, 27 Nov 2014 10:39:23 +1100
Benjamin Herrenschmidt b...@kernel.crashing.org wrote:
On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
The first argument to vphn_unpack_associativity() is a const long *, but the
parsing
On Thu, 2014-11-27 at 10:56 -0600, Segher Boessenkool wrote:
On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
Here is the design of this patch. Since local_* operations
only need to be atomic with respect to interrupts (IIUC), the patch uses
one of the Condition Register (CR)
On Thu, 2014-11-27 at 14:50 -0600, Segher Boessenkool wrote:
On Thu, Nov 27, 2014 at 11:41:40AM -0600, Peter Bergner wrote:
On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote:
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
Nope, you don't get a SIGILL when
On Thursday 27 November 2014 10:26 PM, Segher Boessenkool wrote:
On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
Here is the design of this patch. Since local_* operations
only need to be atomic with respect to interrupts (IIUC), the patch uses
one of the Condition Register (CR)
On Friday 28 November 2014 07:28 AM, Benjamin Herrenschmidt wrote:
On Thu, 2014-11-27 at 10:56 -0600, Segher Boessenkool wrote:
On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
Here is the design of this patch. Since local_* operations
only need to be atomic with respect to
On Friday 28 November 2014 06:26 AM, Benjamin Herrenschmidt wrote:
On Thu, 2014-11-27 at 17:48 +0530, Madhavan Srinivasan wrote:
This patch creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters.
On Fri, 2014-11-28 at 08:45 +0530, Madhavan Srinivasan wrote:
Can't we just unconditionally clear it as long as we do that after we've
saved it ? In that case, it's just a matter for the fixup code to check
the saved version rather than the actual CR..
I use CR bit setting in the
This patch enables support for hardware instruction breakpoint in
xmon on POWER8 platform with the help of a new register called the
CIABR (Completed Instruction Address Breakpoint Register). With this
patch, a single hardware instruction breakpoint can be added and
cleared during any active xmon
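A sketch of how such a breakpoint gets armed; CIABR is SPR 977 on POWER8,
while the privilege encoding and the helper name below are illustrative
assumptions, not the xmon patch itself:

#define SPRN_CIABR		977	/* Completed Instruction Address Breakpoint Register */
#define CIABR_PRIV_SUPER	0x2UL	/* assumed: match in supervisor (kernel) state */

static inline void set_ciabr_sketch(unsigned long addr)
{
	unsigned long v = (addr & ~0x3UL) | CIABR_PRIV_SUPER;

	asm volatile("mtspr %0,%1" : : "i" (SPRN_CIABR), "r" (v));
}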
On Thu, 27 Nov 2014, David Hildenbrand wrote:
OTOH, there is no reason why we need to disable preemption over that
page_fault_disabled() region. There are code paths which really do
not require to disable preemption for that.
We have that separated in preempt-rt for obvious