The patchset takes care of compressing oops messages while writing to NVRAM,
so that more oops data can be captured in the given space.
big_oops_buf (2.22 * oops_data_sz) is allocated for compression.
oops_data_sz is the oops partition size less the oops header size.
nvram_compress() and zip_oops() are used by the nvram_pstore_write
API (which pstore calls internally) to compress oops messages; the
functions are re-organised accordingly to avoid forward declarations.
Signed-off-by: Aruna Balakrishnaiah ar...@linux.vnet.ibm.com
---
arch/powerpc/platforms/pseries/nvram.c | 104
pstore_get_header_size will return the size of the header added by pstore
while logging messages to the registered buffer.
Signed-off-by: Aruna Balakrishnaiah ar...@linux.vnet.ibm.com
---
fs/pstore/platform.c |7 ++-
include/linux/pstore.h |6 ++
2 files changed, 12
The patch set supports compression of oops messages while writing to NVRAM,
which helps capture more oops data in lnx,oops-log. The pstore file
for oops messages will be in decompressed format, making it readable.
In case compression fails, the patch takes care of copying the header added
Now that we have pstore support for nvram on pseries, enable it
in the default config. With this config option enabled, the pstore
infrastructure will be used to read/write messages from/to nvram.
Signed-off-by: Aruna Balakrishnaiah ar...@linux.vnet.ibm.com
---
Hi Michael,
On Monday 24 June 2013 06:51 AM, Michael Neuling wrote:
Enable PSTORE in pseries_defconfig
Please add a why to your changelogs, e.g. "Now we have pstore support for
nvram on pseries, enable it in the default config."
Why you are changing something is more important than what, since
you
On Sun, Jun 23, 2013 at 07:15:39PM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
Now that we have pstore support for nvram on pseries, enable it
in the default config. With this config option enabled, the pstore
infrastructure will be used to read/write messages from/to nvram.
Signed-off-by: Aruna Balakrishnaiah ar...@linux.vnet.ibm.com
---
v3:
Move pstore config
Aruna Balakrishnaiah ar...@linux.vnet.ibm.com wrote:
Now that we have pstore support for nvram on pseries, enable it
in the default config. With this config option enabled, the pstore
infrastructure will be used to read/write messages from/to nvram.
Signed-off-by: Aruna Balakrishnaiah
Anshuman Khandual khand...@linux.vnet.ibm.com wrote:
Completely ignore the BHRB privilege state filter request, as we are
already configuring that with the privilege state filtering attribute
for the accompanying PMU event. This helps achieve cleaner
user space interaction for BHRB.
This patch
Anshuman Khandual khand...@linux.vnet.ibm.com wrote:
When the task moves around the system, the corresponding cpuhw
per-CPU structure should be populated with the BHRB filter
request value, so that the PMU can be configured appropriately
during the next call into power_pmu_enable().
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating the hash page table and real mode area for
PPC64. Hence move DMA contiguous related changes into a separate config
so that ppc64 can enable CMA without requiring DMA contiguous.
Signed-off-by: Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Use CMA for allocation of the RMA region for the guest. Also remove the
linear allocator now that it is not used.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 +
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Use CMA for allocation of guest hash page.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 -
arch/powerpc/include/asm/kvm_host.h | 2 +-
On Mon, 24 Jun 2013, Geert Uytterhoeven wrote:
JFYI, when comparing v3.10-rc7 to v3.10-rc6[3], the summaries are:
- build errors: +51/-11
After filtering out false-positives (v3.10-rc6 had only 72% build coverage):
+ arch/powerpc/kernel/fadump.c: error: 'KEXEC_CORE_NOTE_NAME' undeclared
On Thu, Jun 20, 2013 at 09:31:28PM +0530, Varun Sethi wrote:
This patch provides the PAMU driver (fsl_pamu.c) and the corresponding IOMMU
API implementation (fsl_pamu_domain.c). The PAMU hardware driver (fsl_pamu.c)
has been derived from the work done by Ashish Kalra and Timur Tabi.
AlexW,
A mistake we have made in the past is that we pull out the fields we
need from the event code, but don't check that there are no unknown bits
set. This means that we can't ever assign meaning to those unknown bits
in future.
Although we have once again failed to do this at release, it is still
In pmu_disable() we disable the PMU by setting the FC (Freeze Counters)
bit in MMCR0. In order to do this we have to read/modify/write MMCR0.
It's possible that we read a value from MMCR0 which has PMAO (PMU Alert
Occurred) set. When we write that value back it will cause an interrupt
to occur.
On Power8 we can freeze PMC5 and 6 if we're not using them. Normally they
run all the time.
As noticed by Anshuman, we should unfreeze them when we disable the PMU
as there are legacy tools which expect them to run all the time.
Signed-off-by: Michael Ellerman mich...@ellerman.id.au
---
In power_pmu_enable() we can use the existing out label to reduce the
number of return paths.
Signed-off-by: Michael Ellerman mich...@ellerman.id.au
---
arch/powerpc/perf/core-book3s.c |9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/perf/core-book3s.c
In power_pmu_enable() we still enable the PMU even if we have zero
events. This should have no effect but doesn't make much sense. Instead
just return after telling the hypervisor that we are not using the PMCs.
Signed-off-by: Michael Ellerman mich...@ellerman.id.au
---
In commit 59affcd "Context switch more PMU related SPRs" I added more
PMU SPRs to thread_struct, later modified in commit b11ae95. To add
insult to injury it turns out we don't need to switch MMCRA as it's
only user readable, and the value is recomputed by the PMU code.
Signed-off-by: Michael
Add logic to the power8 PMU code to support EBB. Future processors would
also be expected to implement similar constraints. At that time we could
possibly factor these out into common code.
Finally mark the power8 PMU as supporting EBB, which is the actual
enable switch which allows EBBs to be
Add support for EBB (Event Based Branches) on 64-bit book3s. See the
included documentation for more details.
EBBs are a feature which allows the hardware to branch directly to a
specified user space address when a PMU event overflows. This can be
used by programs for self-monitoring with no
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock and may sleep.
This patch moves these calls outside of the call to stop_machine().
Signed-off-by: Nathan Fontenot
Building with CONFIG_TRANSPARENT_HUGEPAGE disabled causes the following
build warnings:
powerpc/arch/powerpc/include/asm/mmu-hash64.h: In function ‘__hash_page_thp’:
powerpc/arch/powerpc/include/asm/mmu-hash64.h:354: warning: no return statement
in function returning non-void
This patch adds a
Nathan Fontenot nf...@linux.vnet.ibm.com writes:
Building with CONFIG_TRANSPARENT_HUGEPAGE disabled causes the following
build warnings:
powerpc/arch/powerpc/include/asm/mmu-hash64.h: In function ‘__hash_page_thp’:
powerpc/arch/powerpc/include/asm/mmu-hash64.h:354: warning: no return
Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com writes:
Nathan Fontenot nf...@linux.vnet.ibm.com writes:
Building with CONFIG_TRANSPARENT_HUGEPAGE disabled causes the following
build warnings:
powerpc/arch/powerpc/include/asm/mmu-hash64.h: In function ‘__hash_page_thp’:
failure:
static inline __attribute__((always_inline))
__attribute__((no_instrument_function)) int
__hash_page_thp(unsigned long ea, unsigned long access,
unsigned long vsid, pmd_t *pmdp,
unsigned long trap, int local,
int ssize, unsigned int psize)
{
do {
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock and may sleep.
This patch moves these calls outside of the call to
On 06/24/2013 12:47 AM, Joe Perches wrote:
On Mon, 2013-06-24 at 00:25 +0530, Srivatsa S. Bhat wrote:
On 06/23/2013 11:47 PM, Greg Kroah-Hartman wrote:
On Sun, Jun 23, 2013 at 07:13:33PM +0530, Srivatsa S. Bhat wrote:
[]
diff --git a/drivers/staging/octeon/ethernet-rx.c
On Sun, Jun 23, 2013 at 07:12:59PM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
On Sun, Jun 23, 2013 at 11:23 PM, Aruna Balakrishnaiah
ar...@linux.vnet.ibm.com wrote:
The patch set supports compression of oops messages while writing to NVRAM,
this helps in capturing more of oops data to lnx,oops-log. The pstore file
for oops messages will be in decompressed format making
On Mon, Jun 24, 2013 at 10:55:35AM -0700, Tejun Heo wrote:
@@ -105,6 +106,7 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
ret += *pcount;
}
raw_spin_unlock(&fbc->lock);
+ put_online_cpus_atomic();
I don't think this is necessary. CPU on/offlining is
On 06/24/2013 11:36 PM, Tejun Heo wrote:
On Mon, Jun 24, 2013 at 10:55:35AM -0700, Tejun Heo wrote:
@@ -105,6 +106,7 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
ret += *pcount;
}
raw_spin_unlock(&fbc->lock);
+ put_online_cpus_atomic();
I don't think this is
On 06/23/2013 11:55 AM, Srivatsa S. Bhat wrote:
On 06/23/2013 11:47 PM, Greg Kroah-Hartman wrote:
On Sun, Jun 23, 2013 at 07:13:33PM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from
On 06/19/2013 11:50:30 AM, Scott Wood wrote:
On 06/19/2013 10:06:38 AM, Kumar Gala wrote:
On Jun 18, 2013, at 3:14 PM, Scott Wood wrote:
This fixes a regression that causes 83xx to oops on boot if a
non-express PCI bus is present.
The following changes since commit
On Mon, Jun 24, 2013 at 12:18:04PM -0500, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock
On 06/24/2013 02:16 PM, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 12:18:04PM -0500, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine().
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 (x86: Fix bit corruption at CPU resume time)
is a good example of the nasty type of
[Resending with only lists on Cc: -- previous mail header on the 00/32
was too long; failed to get past vger's crap filters.]
On 13-06-24 03:30 PM, Paul Gortmaker wrote:
This is the whole patch queue for removal of __cpuinit support
against the latest linux-next tree (Jun24th). Some of you
On Mon, 2013-06-24 at 17:02 -0500, Scott Wood wrote:
This fixes a regression that causes 83xx to oops on boot if a
non-express PCI bus is present. It is the same patch as the last pull
request, but with the changelog reworded to be clearer that this is a
regression.
Ok, Kumar, I'll pick that
On Sun, 2013-06-23 at 19:08 +0530, Srivatsa S. Bhat wrote:
The current CPU offline code uses stop_machine() internally. And disabling
preemption prevents stop_machine() from taking effect, thus also preventing
CPUs from going offline, as a side effect.
There are places where this side-effect
On Sun, 2013-06-23 at 19:08 +0530, Srivatsa S. Bhat wrote:
Just to make the code a little cleaner, can you add:
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 860f51a..e90d9d7 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -63,6 +63,72 @@ static struct {
.refcount = 0,
};
On Fri, Jul 20, 2012 at 10:37:17AM +0200, Joakim Tjernlund wrote:
Zang Roy-R61911 r61...@freescale.com wrote on 2012/07/20 10:27:52:
-Original Message-
From: linuxppc-dev-bounces+tie-fei.zang=freescale@lists.ozlabs.org
On Thu, Jun 13, 2013 at 12:09:47PM +0530, Anshuman Khandual wrote:
On 06/13/2013 06:46 AM, Michael Ellerman wrote:
On Power8 we can freeze PMC5 and 6 if we're not using them. Normally they
run all the time.
index f7d1c4f..e791c68 100644
--- a/arch/powerpc/perf/power8-pmu.c
+++
On Mon, Jun 24, 2013 at 02:25:59PM -0500, Nathan Fontenot wrote:
On 06/24/2013 02:16 PM, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 12:18:04PM -0500, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The topology update code that updates the cpu node
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock and may sleep.
This patch moves these calls outside of the call to
On Sun, Jun 23, 2013 at 07:17:00PM +0530, Srivatsa S. Bhat wrote:
The function migrate_irqs() is called with interrupts disabled
and hence its not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.
OK so it gets there via:
On Tue, 2013-06-25 at 12:08 +1000, Michael Ellerman wrote:
We're not checking for allocation failure, which we should be.
But this code is only used on powermac and 85xx, so it should probably
just be a TODO to fix this up to handle the failure.
And what can we do if they fail ?
Cheers,
On 06/24/2013 08:50 PM, Michael Ellerman wrote:
On Mon, Jun 24, 2013 at 02:25:59PM -0500, Nathan Fontenot wrote:
On 06/24/2013 02:16 PM, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 12:18:04PM -0500, Seth Jennings wrote:
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The
On 06/24/2013 08:50 PM, Michael Ellerman wrote:
On Mon, Jun 24, 2013 at 09:14:23AM -0500, Nathan Fontenot wrote:
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock and may sleep.
On Tue, Jun 25, 2013 at 12:13:04PM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 12:08 +1000, Michael Ellerman wrote:
We're not checking for allocation failure, which we should be.
But this code is only used on powermac and 85xx, so it should probably
just be a TODO to fix
The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock and may sleep.
This patch moves these calls outside of the call to stop_machine().
Signed-off-by: Nathan Fontenot
On Tue, 2013-06-25 at 12:58 +1000, Michael Ellerman wrote:
On Tue, Jun 25, 2013 at 12:13:04PM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2013-06-25 at 12:08 +1000, Michael Ellerman wrote:
We're not checking for allocation failure, which we should be.
But this code is only used on
On Thu, 2013-06-20 at 21:31 +0530, Varun Sethi wrote:
+#define REQ_ACS_FLAGS	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
+
+static struct iommu_group *get_device_iommu_group(struct device *dev)
+{
+ struct iommu_group *group;
+
+ group = iommu_group_get(dev);
+
Originally, eeh_mutex was introduced to protect the PE hierarchy
tree and the attached EEH devices, because the EEH core possibly
ran with multiple threads accessing the PE hierarchy tree.
However, we now have only one kthread in the EEH core, so we don't
need the eeh_mutex; just remove it. The
This series of patches is a follow-up to make EEH workable for the PowerNV
platform on the Juno-IOC-L machine. A couple of issues have been fixed with
help from Ben:
- eeh_lock() and eeh_unlock() were introduced to protect the PE
hierarchy
tree. However, we already had one kthread
When the PHB gets fenced, 0xFF's are returned from PCI config space
and MMIO space by the hardware, and writes to them should be
dropped. The patch introduces backends that allow setting/getting flags
indicating that access to PCI-CFG and MMIO should be blocked.
Signed-off-by: Gavin Shan
When the PHB is fenced or dead, it's pointless to collect data
from the PCI config space of subordinate PCI devices, since it will just
return 0xFF's, and it risks incurring additional errors.
The patch avoids collecting PCI-CFG data while the PHB is in the fenced or
dead state.
Signed-off-by:
When the driver encounters EEH errors, which might be caused
by a frozen PCI host controller, it needn't keep reading
MMIO until timeout: in that case 0xFF's will be returned by the
hardware, and continuing to poll can trigger a soft-lockup. The patch
adds more checks for that by
The patch implements PowerNV backends to support set/get of the settings.
Also, we needn't maintain multiple fields in struct pnv_phb to
track different EEH states, so the patch merges all EEH states into one
field, eeh_state.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
After a reset (e.g. complete reset) to bring the fenced PHB
back, the PCIe link might not be ready yet. The patch intends to
make sure the PCIe link is ready before accessing its subordinate
PCI devices. The patch also fixes wrong values being restored to the
PCI_COMMAND register for PCI
If PCI-CFG access is blocked on the specific PHB, return 0xFF's for
reads and drop writes. The patch implements that for the PowerNV
platform. The patch also removes the check on hose == NULL
for the PCI-CFG accessors, since the kernel should stop while fetching
the platform-dependent PHB (struct pnv_phb).