From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
PAPR defines these errors as negative values. So print them accordingly
for easy debugging.
Acked-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/pseries/lpar.c
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We were not saving DAR and DSISR on MCE. Save them, and also print the values
along with the exception details in xmon.
Acked-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
Hi,
This patchset adds transparent huge page support for PPC64.
I am copying the series to linux-mm because the PPC64 implementation
required a few interface changes to the core THP code.
TODO:
* ppc64 KVM related changes
* batch support for hpte invalidate
* powernv still doesn't boot
* hash preload
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Use PTRS_PER_PTE to indicate the size of the PTE page. To support THP,
later patches will change the PTRS_PER_PTE value.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable.h |6 ++
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This makes one PMD cover a 16MB range, which eases the implementation of THP
on power. The THP core code uses one pmd entry to track the huge page, and
the range mapped by a single pmd entry should be equal to the supported huge
page size.
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch moves the common code to the 32/64-bit headers. We will
later change the 64-bit version to support smaller PTE fragments.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgalloc-32.h | 45
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We allocate one page for the last level of the linux page table. With THP and
a large page size of 16MB, that would mean we are wasting a large part
of that page. To map a 16MB area, we only need 2K of PTE space with a 64K
page size. This patch
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We will use this later with the THP changes to request a pmd table of double the
size.
The THP code allocates the PTE page along with the large page request and deposits
it for later use. This is to ensure that we won't have any failures when we
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This makes sure we handle multiple page size segments correctly.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hash_native_64.c | 52 +-
1 file changed, 40
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We look at both the segment base page size and the actual page size and store
the pte-lp-encodings in an array per base page size.
We also update all relevant functions to take an actual page size argument
so that we can use the correct PTE LP
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
In all these cases we are doing something similar to
HPTE_V_COMPARE(hpte_v, want_v), which ignores the HPTE_V_LARGE bit.
With MPSS support we would need the actual page size to set the HPTE_V_LARGE
bit, and that won't be available in most of these
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h |3 ++-
arch/powerpc/mm/hash_utils_64.c | 12 +++-
arch/powerpc/mm/hugetlbpage-hash64.c |2 +-
3 files
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/book3s_hv.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This gives a hint about the different base and actual page size combinations
supported by the platform.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hash_utils_64.c | 10 +-
1 file changed, 5
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
On archs like powerpc that support different huge page sizes, HPAGE_SHIFT
and other derived values like HPAGE_PMD_ORDER are not constants. So move
them to hugepage_init
Cc: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This will later be used by the powerpc THP support. On powerpc we want to use
the pgtable for storing the hash index values. So instead of adding them to
the mm_context list, we would like to store them in the second half of the pmd
Cc: Andrea Arcangeli
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
For architectures like ppc64 we look at the deposited pgtable when
calling pmdp_get_and_clear. So do the pgtable_trans_huge_withdraw
after finishing the pmdp related operations.
Cc: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Aneesh Kumar
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We now have pmd entries covering a 16MB range. To implement THP on powerpc,
we double the size of the PMD. The second half is used to deposit the pgtable (PTE
page).
We also use the deposited PTE page for tracking the HPTE information. The
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
HUGETLB clears the top bit of PMD entries and uses that to indicate
a HUGETLB page directory. Since we store pfns in PMDs for THP,
we would have the top bit cleared by default. Add a top bit mask
for THP PMD entries and clear that when we are
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
As per the ISA doc, we encode the base and actual page size in the LP bits of
the PTE. The number of bits used to encode the page sizes depends on the actual
page size. The ISA doc lists this as:
PTE LP actual page size
rrrz ≥8KB
rrzz
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We now have pmd entries covering a 16MB range. To implement THP on powerpc,
we double the size of the PMD. The second half is used to deposit the pgtable (PTE
page).
We also use the deposited PTE page for tracking the HPTE information. The
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/perf/callchain.c | 32 +---
1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/perf/callchain.c
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Without this, insert will return an H_PARAMETER error. Also use
the signed variant when printing the error.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hugepage-hash64.c |2 ++
1 file changed, 2
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Handle large pages for get_user_pages_fast. Also take care of large page
splitting.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/gup.c | 84 +++--
1 file
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We enable THP only if we support the 16MB page size.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable.h | 30 --
1 file changed, 28 insertions(+), 2 deletions(-)
On 02/26/2013 05:47 AM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
The target of the whole patchset is nice for me.
Cool! Thanks :-)
[...]
I wrote an
book3e is different from book3s, since book3s includes the exception
vectors code in head_64.S because it relies on absolute addressing,
which is only possible within this compilation unit. So we have
to get that label address via the GOT.
And when booting a relocated kernel, we should reset the IVPR properly again.
This patchset is used to support kexec and kdump on book3e.
Tested on fsl-p5040 DS.
v1:
--
* improve some patch headers
* rebase on next branch with patch 7
Tiejun Chen (7):
powerpc/book3e: support CONFIG_RELOCATABLE
book3e/kexec/kdump: enable kexec for kernel
We need to activate KEXEC for book3e and bypass or convert non-book3e stuff
in the kexec coverage.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/Kconfig |2 +-
arch/powerpc/kernel/machine_kexec_64.c |6 ++
arch/powerpc/kernel/misc_64.S |
We need to introduce a flag to indicate we're already running
a kexec kernel; then we can take the proper path. For example, we
shouldn't access the spin_table from the bootloader to bring up any secondary
cpu for the kexec kernel, since the kexec kernel already knows how to jump to
generic_secondary_smp_init.
The ppc64 kexec mechanism has a different implementation from ppc32.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/platforms/85xx/smp.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/arch/powerpc/platforms/85xx/smp.c
b/arch/powerpc/platforms/85xx/smp.c
Book3e always uses 1GB-aligned TLB entries, so we should
use (KERNELBASE - MEMORY_START) as VIRT_PHYS_OFFSET to
get __pa/__va working properly while booting kdump.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/include/asm/page.h |2 ++
1 file changed, 2 insertions(+)
diff --git
book3e has no real MMU-off mode, so we have to create a 1:1 TLB
mapping to make sure we can access the real physical address.
Also correct a few things to support this pseudo real mode on book3e.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/head_64.S |9 ---
Commit 96f013f, "powerpc/kexec: Add kexec hold support for Book3e
processors", requires that GPR4 survive the hold process, for IBM Blue
Gene/Q with some very strange firmware. But for FSL Book3E, r4 = 1
indicates that the initial TLB entry for this core already exists, so
we still should
Currently we already support the p5040ds, which has 4 e5500 cores, but
the T4240 has twelve dual-threaded e6500 cores, so we can
update CONFIG_NR_CPUS to match now.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/configs/corenet64_smp_defconfig |2 +-
1 file
On Tue, Feb 26, 2013 at 5:02 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
On 02/26/2013 05:47 AM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
The target of the whole patchset is nice for me.
Cool! Thanks :-)
A question: How did you find out such usages of
On Mon, Feb 18, 2013 at 8:38 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Using global rwlocks as the backend for per-CPU rwlocks helps us avoid many
lock-ordering related problems (unlike per-cpu locks). However, global
rwlocks lead to unnecessary cache-line bouncing even when
Hi Lai,
I'm really not convinced that piggy-backing on lglocks would help
us in any way. But still, let me try to address some of the points
you raised...
On 02/26/2013 06:29 PM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 5:02 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
On
On 02/26/2013 07:47 PM, Lai Jiangshan wrote:
On Mon, Feb 18, 2013 at 8:38 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Using global rwlocks as the backend for per-CPU rwlocks helps us avoid many
lock-ordering related problems (unlike per-cpu locks). However, global
rwlocks
On 02/26/2013 07:04 PM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
Hi, Srivatsa,
The target of the whole patchset is nice for me.
Cool! Thanks :-)
A question: How
On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
I'm really not convinced that piggy-backing on lglocks would help
us in any way. But still, let me try to address some of the points
you raised...
On 02/26/2013 06:29 PM, Lai Jiangshan wrote:
This is a note to let you know that I have just added a patch titled
uprobes/powerpc: Add dependency on single step emulation
to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree
which can be found at:
On 02/26/2013 09:55 PM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
I'm really not convinced that piggy-backing on lglocks would help
us in any way. But still, let me try to address some of the points
you raised...
Michael Ellerman [mich...@ellerman.id.au] wrote:
| On Tue, Jan 22, 2013 at 10:26:13PM -0800, Sukadev Bhattiprolu wrote:
|
| [PATCH 5/6][v4]: perf: Create a sysfs entry for Power event format
|
| Create a sysfs entry, '/sys/bus/event_source/devices/cpu/format/event'
| which describes the
Have not got through the entire file, but have a few comments...
+/*
+ * Set the PAACE type as primary and set the coherency required domain
+ * attribute
+ */
+static void pamu_setup_default_xfer_to_host_ppaace(struct paace *ppaace)
+{
+ set_bf(ppaace->addr_bitfields, PAACE_AF_PT,
3.8-stable review patch. If anyone has any objections, please let me know.
--
From: Suzuki K. Poulose suz...@in.ibm.com
commit 5e249d4528528c9a77da051a89ec7f99d31b83eb upstream.
Uprobes uses emulate_step in sstep.c, but we haven't explicitly specified
the dependency. On
On Wed, Feb 27, 2013 at 3:30 AM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
On 02/26/2013 09:55 PM, Lai Jiangshan wrote:
On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
srivatsa.b...@linux.vnet.ibm.com wrote:
Hi Lai,
I'm really not convinced that piggy-backing on lglocks
On Tue, Feb 26, 2013 at 12:03:43PM -0800, Sukadev Bhattiprolu wrote:
Michael Ellerman [mich...@ellerman.id.au] wrote:
| On Tue, Jan 22, 2013 at 10:26:13PM -0800, Sukadev Bhattiprolu wrote:
|
| [PATCH 5/6][v4]: perf: Create a sysfs entry for Power event format
|
| Create a sysfs entry,
On Tue, Jan 22, 2013 at 10:26:13PM -0800, Sukadev Bhattiprolu wrote:
[PATCH 5/6][v4]: perf: Create a sysfs entry for Power event format
Create a sysfs entry, '/sys/bus/event_source/devices/cpu/format/event'
which describes the format of a POWER cpu.
diff --git
On 02/08/2013 10:41 AM, Benjamin Herrenschmidt wrote:
On Thu, 2013-01-31 at 20:04 -0600, Jason Wessel wrote:
diff --git a/arch/powerpc/kernel/kgdb.c b/arch/powerpc/kernel/kgdb.c
index 8747447..5ca82cd 100644
--- a/arch/powerpc/kernel/kgdb.c
+++ b/arch/powerpc/kernel/kgdb.c
@@ -154,12 +154,12
This patchset is used to support kgdb/gdb on book3e.
Validated on p4080ds and p5040ds with test single step and breakpoint
v3:
* make it work when CONFIG_RELOCATABLE is enabled
* fix one typo in patch,
powerpc/book3e: store critical/machine/debug exception thread info:
ld
We always allocate the critical/machine/debug check exception stacks
separately. This is different from the normal exception. So we should load these
exception stacks properly, like we did for booke.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/exceptions-64e.S | 49
We need to store the thread info in these exception stacks, like
we already did for PPC32.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/exceptions-64e.S | 15 +++
1 file changed, 15 insertions(+)
diff --git
gdb always needs to generate a single step properly to invoke
a kgdb state. But with lazy interrupts, book3e can't always
trigger a debug exception with a single step, since the current
path is blocked handling those pending exceptions; then we miss
the expected dbcr configuration needed at the end to generate
Currently we need to skip this for supporting KGDB.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/exceptions-64e.S |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/exceptions-64e.S
We can't look up the address of the entry point of a function simply
via that function's symbol on all architectures.
In the PPC64 ABI, there is actually a function descriptor structure.
A function descriptor is a three-doubleword data structure that contains
the following values:
* The
Currently BookE and Book3E always copy the thread_info from
the kernel stack when we enter the debug exception, so we can
remove this action here to avoid copying again.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/kgdb.c | 28
1
On 02/27/2013 11:04 AM, Tiejun Chen wrote:
This patchset is used to support kgdb/gdb on book3e.
Validated on p4080ds and p5040ds with test single step and breakpoint
Please ignore this thread since it looks like I forgot to CC Jason :(
Tiejun
v3:
* make it work when CONFIG_RELOCATABLE is enabled
*
This patchset is used to support kgdb/gdb on book3e.
Validated on p4080ds and p5040ds with test single step and breakpoint
v3:
* make it work when CONFIG_RELOCATABLE is enabled
* fix one typo in patch,
powerpc/book3e: store critical/machine/debug exception thread info:
ld
We always allocate the critical/machine/debug check exception stacks
separately. This is different from the normal exception. So we should load these
exception stacks properly, like we did for booke.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/exceptions-64e.S | 49
We need to store the thread info in these exception stacks, like
we already did for PPC32.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/exceptions-64e.S | 15 +++
1 file changed, 15 insertions(+)
diff --git
gdb always needs to generate a single step properly to invoke
a kgdb state. But with lazy interrupts, book3e can't always
trigger a debug exception with a single step, since the current
path is blocked handling those pending exceptions; then we miss
the expected dbcr configuration needed at the end to generate
Currently we need to skip this for supporting KGDB.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/exceptions-64e.S |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/exceptions-64e.S
We can't look up the address of the entry point of a function simply
via that function's symbol on all architectures.
In the PPC64 ABI, there is actually a function descriptor structure.
A function descriptor is a three-doubleword data structure that contains
the following values:
* The
Currently BookE and Book3E always copy the thread_info from
the kernel stack when we enter the debug exception, so we can
remove this action here to avoid copying again.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/kernel/kgdb.c | 28
1