This patch adds support for blocking I/O access. Basically, if
the EEH core detects that I/O access has been blocked on one
specific PHB, reads simply return 0xFF's and writes are
dropped.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/eeh.h
While recovering from a fenced PHB, we need to hold off PCI-CFG and
I/O access until the complete PHB reset and BAR restore are done.
The patch addresses that.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/kernel/eeh_driver.c | 11 +++
On the PowerNV platform, the EEH address cache isn't built correctly
because EEH devices without a bound PE were skipped. The patch
fixes that.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/kernel/eeh_cache.c | 2 +-
arch/powerpc/platforms/powernv/pci-ioda.c
Scott Wood scottw...@freescale.com wrote on 2013/06/25 02:51:00:
On Fri, Jul 20, 2012 at 10:37:17AM +0200, Joakim Tjernlund wrote:
Zang Roy-R61911 r61...@freescale.com wrote on 2012/07/20 10:27:52:
On Tue, 2013-06-25 at 13:55 +0800, Gavin Shan wrote:
* don't touch the other command bits
*/
- eeh_ops->read_config(dn, PCI_COMMAND, 4, &cmd);
- if (edev->config_space[1] & PCI_COMMAND_PARITY)
- cmd |= PCI_COMMAND_PARITY;
- else
-
On Tue, 2013-06-25 at 13:55 +0800, Gavin Shan wrote:
When the PHB gets fenced, 0xFF's are returned from PCI config space
and MMIO space by the hardware. Writes to them should be dropped.
The patch introduces backends to set/get flags that
indicate whether access to PCI-CFG and
On Tue, 2013-06-25 at 13:55 +0800, Gavin Shan wrote:
When the driver encounters EEH errors, which might be caused
by a frozen PCI host controller, it need not keep reading
MMIO until timeout. In that case, 0xFF's should be returned by the
hardware. Otherwise, it could possibly trigger
Originally, eeh_mutex was introduced to protect the PE hierarchy
tree and the attached EEH devices, because the EEH core possibly
ran with multiple threads accessing the PE hierarchy tree.
However, we now have only one kthread in the EEH core, so
eeh_mutex is no longer needed; just remove it.
Replace down() with down_interruptible() to avoid the following
warning:
[c0007ba7b710] [c0014410] .__switch_to+0x1b0/0x380
[c0007ba7b7c0] [c07b408c] .__schedule+0x3ec/0x970
[c0007ba7ba50] [c07b1f24] .schedule_timeout+0x1a4/0x2b0
[c0007ba7bb30]
From: Gerhard Sittig g...@denx.de
This patch does not change the content, it merely re-orders
configuration items and drops explicit options which already
apply as the default.
Signed-off-by: Gerhard Sittig g...@denx.de
Signed-off-by: Anatolij Gustschin ag...@denx.de
---
Enable USB EHCI, mass storage and USB gadget support.
Signed-off-by: Anatolij Gustschin ag...@denx.de
---
arch/powerpc/configs/mpc512x_defconfig | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/powerpc/configs/mpc512x_defconfig
b/arch/powerpc/configs/mpc512x_defconfig
index
Hi Kees,
On Monday 24 June 2013 11:27 PM, Kees Cook wrote:
On Sun, Jun 23, 2013 at 11:23 PM, Aruna Balakrishnaiah
ar...@linux.vnet.ibm.com wrote:
The patch set supports compression of oops messages while writing to NVRAM;
this helps capture more oops data in lnx,oops-log. The pstore
On Tue, Jun 25, 2013 at 04:07:24PM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 13:55 +0800, Gavin Shan wrote:
When the PHB gets fenced, 0xFF's are returned from PCI config space
and MMIO space by the hardware. Writes to them should be dropped.
The patch introduces
On Tue, Jun 18, 2013 at 09:09:06PM -0700, Paul E. McKenney wrote:
On Mon, Jun 17, 2013 at 05:42:13PM +1000, Michael Ellerman wrote:
On Sat, Jun 15, 2013 at 12:02:21PM +1000, Benjamin Herrenschmidt wrote:
On Fri, 2013-06-14 at 17:06 -0400, Steven Rostedt wrote:
I was pretty much able to
On Tue, 2013-06-25 at 17:19 +1000, Michael Ellerman wrote:
Here's another trace from 3.10-rc7 plus a few local patches.
We suspect that the perf enable could be causing a flood of
interrupts, but why
that's clogging things up so badly, who knows.
Additionally, perf being potentially NMIs,
On Tue, Jun 25, 2013 at 05:19:14PM +1000, Michael Ellerman wrote:
Here's another trace from 3.10-rc7 plus a few local patches.
And here's another with CONFIG_RCU_CPU_STALL_INFO=y in case that's useful:
PASS running test_pmc5_6_overuse()
INFO: rcu_sched self-detected stall on CPU
8: (1
We have relocation on exception handlers defined for h_data_storage and
h_instr_storage. However we will never take relocation on exceptions for
these because they can only come from a guest, and we never take
relocation on exceptions when we transition from guest to host.
We also have a handler
KVMTEST is a macro which checks whether we are taking an exception from
guest context, if so we branch out of line and eventually call into the
KVM code to handle the switch.
When running real guests on bare metal (HV KVM) the hardware ensures
that we never take a relocation on exception when
From: Michael Ellerman micha...@au1.ibm.com
The exception at 0xf60 is not the TM (Transactional Memory) unavailable
exception; it is the Facility Unavailable Exception, so rename it as
such.
Flesh out the handler to acknowledge the fact that it can be called for
many reasons, one of which is TM
Similar to the facility unavailable exception, except the facilities are
controlled by HFSCR.
Adapt the facility_unavailable_exception() so it can be called for
either the regular or Hypervisor facility unavailable exceptions.
Signed-off-by: Michael Ellerman mich...@ellerman.id.au
---
On Tue, Jun 25, 2013 at 04:06:24PM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 13:55 +0800, Gavin Shan wrote:
* don't touch the other command bits
*/
- eeh_ops->read_config(dn, PCI_COMMAND, 4, &cmd);
- if (edev->config_space[1] & PCI_COMMAND_PARITY)
-
On Sun, 2013-06-16 at 14:12 +0930, Rusty Russell wrote:
Sweep of the simple cases.
Cc: net...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: Julia Lawall julia.law...@lip6.fr
Signed-off-by: Rusty Russell ru...@rustcorp.com.au
Acked-by:
Hi Linus !
This is a fix for a regression causing freescale 83xx based platforms
to crash on boot due to some PCI breakage. Please apply.
Cheers,
Ben.
The following changes since commit 17858ca65eef148d335ffd4cfc09228a1c1cbfb5:
Merge tag 'please-pull-fixia64' of
On Tue, 2013-06-25 at 15:47 +0800, Gavin Shan wrote:
If we have just done a complete reset of a fenced PHB, we need to restore
it from the cache (edev->config_space[1]) instead of reading that from
hardware. A fenced PHB is the special case on PowerNV :-)
Well not really...
In general we can also end up
On Tue, Jun 25, 2013 at 05:57:44PM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 15:47 +0800, Gavin Shan wrote:
If we have just done a complete reset of a fenced PHB, we need to restore
it from the cache (edev->config_space[1]) instead of reading that from
hardware. A fenced PHB is the special
On 06/24/2013 11:17 AM, Michael Neuling wrote:
The smallest match region for both the DABR and DAWR is 8 bytes, so the
kernel needs to filter matches when users want to look at regions smaller than
this.
Currently we set the length of PPC_BREAKPOINT_MODE_EXACT breakpoints to 8.
This is
On 06/24/2013 11:17 AM, Michael Neuling wrote:
In 9422de3 "powerpc: Hardware breakpoints rewrite to handle non DABR
breakpoint registers" we changed the way we mark extraneous irqs with this:
- info->extraneous_interrupt = !((bp->attr.bp_addr <= dar) &&
- (dar -
When the PHB is fenced or dead, it's pointless to collect data
from the PCI config space of subordinate PCI devices, since it would
just return 0xFF's. It also risks incurring additional errors.
The patch avoids collecting PCI-CFG data while the PHB is in a fenced or
dead state.
Signed-off-by:
After a reset (e.g. complete reset) to bring a fenced PHB
back, the PCIe link might not be ready yet. The patch makes
sure the PCIe link is ready before accessing its subordinate
PCI devices. The patch also fixes wrong values being restored to the
PCI_COMMAND register for PCI
We don't need the whole backtrace, just a one-line message, in
the error reporting interrupt handler. For errors triggered by
accesses to PCI config space or MMIO, we replace WARN(1, ...) with
pr_err() and dump_stack().
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
The patch avoids the following build warnings:
The function .pnv_pci_ioda_fixup() references
the function __init .eeh_init().
This is often because .pnv_pci_ioda_fixup lacks a __init
The function .pnv_pci_ioda_fixup() references
the function __init .eeh_addr_cache_build().
This series of patches is a follow-up to make EEH workable for the PowerNV
platform on the Juno-IOC-L machine. A couple of issues have been fixed with
help from Ben:
- Check PCIe link after PHB complete reset
- Restore config space for bridges
- The EEH address cache wasn't
On the PowerNV platform, the EEH address cache isn't built correctly
because EEH devices without a bound PE were skipped. The patch
fixes that.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/kernel/eeh_cache.c | 2 +-
arch/powerpc/platforms/powernv/pci-ioda.c
We have 2 fields in struct pnv_phb to track the state. The patch
replaces them with a single field and introduces flags for it. The patch
doesn't change the logic.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/platforms/powernv/eeh-ioda.c |8
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 4e22ce3..5d93bb5 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -6,7 +6,7 @@ obj-y := core.o bus.o dd.o syscore.o \
attribute_container.o transport_class.o \
On 06/24/2013 04:58 PM, Michael Ellerman wrote:
A mistake we have made in the past is that we pull out the fields we
need from the event code, but don't check that there are no unknown bits
set. This means that we can't ever assign meaning to those unknown bits
in future.
Although we have
On 06/24/2013 04:58 PM, Michael Ellerman wrote:
In pmu_disable() we disable the PMU by setting the FC (Freeze Counters)
bit in MMCR0. In order to do this we have to read/modify/write MMCR0.
It's possible that we read a value from MMCR0 which has PMAO (PMU Alert
Occurred) set. When we write
On Tue, 2013-06-25 at 18:00 +0800, Gavin Shan wrote:
+ /*
+  * When the PHB is fenced or dead, it's pointless to collect
+  * the data from PCI config space because it should return
+  * 0xFF's. For ER, we still retrieve the data from the PCI
+  * config space.
On Tue, 2013-06-25 at 18:00 +0800, Gavin Shan wrote:
+ pci_regs_buf[0] = 0;
+ eeh_pe_for_each_dev(pe, edev) {
+ loglen += eeh_gather_pci_data(edev, pci_regs_buf,
+ EEH_PCI_REGS_LOG_LEN);
+
On Tue, 2013-06-25 at 18:00 +0800, Gavin Shan wrote:
After a reset (e.g. complete reset) to bring a fenced PHB
back, the PCIe link might not be ready yet. The patch makes
sure the PCIe link is ready before accessing its subordinate
PCI devices. The patch also fixes that
In the Power7 PMU guide:
https://www.power.org/documentation/commonly-used-metrics-for-performance-analysis/
PM_BRU_MPRED is referred to as PM_BR_MPRED.
Fix the typo by changing the name of the event in the kernel
and documentation accordingly.
This patch changes the ABI; there are some reasons
Power7 supports over 530 different perf events, but only a small
subset of these can be specified by name; the remaining
events must be specified by their raw code:
perf stat -e r2003c application
This patch makes all the POWER7 events available in sysfs.
So we can instead specify
Thanks to Sukadev Bhattip and Xiao Guangrong for their help.
Thanks to Michael Ellerman for the review.
Change log for v2:
1. As Michael Ellerman suggested, I added runtime overhead information
in the 0002 patch's description.
2. Put the event names in a new header file which is named
Introducing headersize in the pstore_write() API would need changes at
multiple places where it's being called. The idea is to move the
compression support into the pstore infrastructure so that other
platforms can also make use of it.
Any thoughts on the back/forward compatibility as we switch to
On Tue, Jun 25, 2013 at 05:44:23PM +1000, Michael Ellerman wrote:
On Tue, Jun 25, 2013 at 05:19:14PM +1000, Michael Ellerman wrote:
Here's another trace from 3.10-rc7 plus a few local patches.
And here's another with CONFIG_RCU_CPU_STALL_INFO=y in case that's useful:
PASS running
On Tue, Jun 25, 2013 at 12:04 AM, Aruna Balakrishnaiah
ar...@linux.vnet.ibm.com wrote:
Hi Kees,
On Monday 24 June 2013 11:27 PM, Kees Cook wrote:
On Sun, Jun 23, 2013 at 11:23 PM, Aruna Balakrishnaiah
ar...@linux.vnet.ibm.com wrote:
The patch set supports compression of oops messages
Hi,
there is a bug in kernel 3.9 with the new fsl_pci platform driver. The
pcibios_init in pci_32.c is called before the platform driver probe
is invoked.
The call order for a p2020 board with linux 3.9 is currently:
fsl_pci_init
pcibios_init
fsl_pci_probe
fsl_pci_probe
fsl_pci_probe
On Sat, Jun 22, 2013 at 5:00 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
Afaik e300 is slightly out of order; maybe it's missing a memory barrier
somewhere. One thing to try is to add some to the dma_map/unmap ops.
I went through the driver and added memory barriers to the
On 06/25/2013 04:56 AM, Steven Rostedt wrote:
On Sun, 2013-06-23 at 19:08 +0530, Srivatsa S. Bhat wrote:
Just to make the code a little cleaner, can you add:
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 860f51a..e90d9d7 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -63,6 +63,72 @@
On 06/25/2013 08:43 AM, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 12:58 +1000, Michael Ellerman wrote:
On Tue, Jun 25, 2013 at 12:13:04PM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 12:08 +1000, Michael Ellerman wrote:
We're not checking for allocation failure, which
Hi,
This patchset is a first step towards removing stop_machine() from the
CPU hotplug offline path. It introduces a set of APIs (as a replacement
for preempt_disable()/preempt_enable()) to synchronize with CPU hotplug from
atomic contexts.
The motivation behind getting rid of stop_machine() is
The current CPU offline code uses stop_machine() internally. And disabling
preemption prevents stop_machine() from taking effect, thus also preventing
CPUs from going offline, as a side effect.
There are places where this side-effect of preempt_disable() (or equivalent)
is used to synchronize
We have quite a few APIs now which help synchronize with CPU hotplug.
Among them, get/put_online_cpus() is the oldest and the most well-known,
so no problems there. By extension, it's easy to comprehend the new
set: get/put_online_cpus_atomic().
But there is yet another set, which might appear
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
So add documentation to recommend using the new get/put_online_cpus_atomic()
APIs to prevent CPUs from going offline, while invoking from
Add a debugging infrastructure to warn if an atomic hotplug reader has not
invoked get_online_cpus_atomic() before traversing/accessing the
cpu_online_mask. Encapsulate these checks under a new debug config option
DEBUG_HOTPLUG_CPU.
This debugging infrastructure proves useful in the tree-wide
When bringing a secondary CPU online, the task running on the CPU coming up
sets itself in the cpu_online_mask. This is safe even though this task is not
the hotplug writer task.
But it is kinda hard to teach this to the CPU hotplug debug infrastructure,
and if we get it wrong, we risk making the
Now that we have a debug infrastructure in place to detect cases where
get/put_online_cpus_atomic() had to be used, add these checks at the
right spots to help catch places where we missed converting to the new
APIs.
Cc: Rusty Russell ru...@rustcorp.com.au
Cc: Alex Shi alex@intel.com
Cc:
Now that we have all the pieces of the CPU hotplug debug infrastructure
in place, expose the feature by growing a new Kconfig option,
CONFIG_DEBUG_HOTPLUG_CPU.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Paul E. McKenney paul.mcken...@linaro.org
Cc: Akinobu Mita akinobu.m...@gmail.com
Cc:
Convert the macros in the CPU hotplug code to static inline C functions.
Cc: Thomas Gleixner t...@linutronix.de
Cc: Andrew Morton a...@linux-foundation.org
Cc: Tejun Heo t...@kernel.org
Cc: Rafael J. Wysocki r...@sisk.pl
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Andrew Morton
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
We need not use the raw_spin_lock_irqsave/restore primitives because
all CPU_DYING notifiers run with interrupts disabled. So just use
raw_spin_lock/unlock.
Cc: Ingo Molnar mi...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
In RCU code, rcu_implicit_dynticks_qs() checks if a CPU is offline,
while being protected by a spinlock. Use the get/put_online_cpus_atomic()
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: John Stultz
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Frederic Weisbecker
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David S. Miller
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Jens Axboe
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Al Viro
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Hoang-Nam Nguyen
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Robert Love
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Greg Kroah-Hartman
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
The CPU_DYING notifier modifies the per-cpu pointer pmu->box, and this can
race with functions such as uncore_pmu_to_box() and uncore_pci_remove() when
we remove stop_machine() from the CPU offline path. So protect them using
get/put_online_cpus_atomic().
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Gleb Natapov
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Gleb Natapov
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Konrad Rzeszutek Wilk
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Richard Henderson
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Mike Frysinger
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Acked-by: Jesper Nilsson
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Richard Kuo
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Tony Luck
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Tony Luck
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Hirokazu Takata
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ralf Baechle
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David Howells
The function migrate_irqs() is called with interrupts disabled,
and hence it's not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Michael Ellerman mich...@ellerman.id.au
Cc: Paul
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Benjamin Herrenschmidt
Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task (which is running on
the CPU coming online) is not the hotplug writer.
It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Paul Mundt
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David S. Miller
On Wed, Jun 26, 2013 at 02:00:04AM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Chris Metcalf
On Wed, Jul 18, 2012 at 05:00:29PM +0800, Xufeng Zhang wrote:
Hi All,
I detected below error when booting p1021mds after enabled EDAC feature:
EDAC MC: Ver: 2.1.0 Jul 17 2012
Freescale(R) MPC85xx EDAC driver, (C) 2006 Montavista Software
EDAC MC0: Giving out device to 'MPC85xx_edac'
String instruction emulation would erroneously result in a segfault if
the upper bits of the EA are set and the address is so high that it fails
the access check. Truncate the EA to 32 bits if the process is 32-bit.
Signed-off-by: James Yang james.y...@freescale.com
---
arch/powerpc/kernel/traps.c |4
On Tue, 25 Jun 2013, Runzhen Wang wrote:
This patch makes all the POWER7 events available in sysfs.
...
$ size arch/powerpc/perf/power7-pmu.o
   text    data     bss     dec     hex filename
   3073    2720       0    5793    16a1 arch/powerpc/perf/power7-pmu.o
and after