On Fri, Feb 01, 2008 at 09:02:40PM +0100, Bastian Blank wrote:
Fix ext4 bitops.
Signed-off-by: Bastian Blank [EMAIL PROTECTED]
diff --git a/include/asm-powerpc/bitops.h b/include/asm-powerpc/bitops.h
index 220d9a7..d0980df 100644
--- a/include/asm-powerpc/bitops.h
+++
On Mon, Feb 04, 2008 at 10:24:36AM +0100, Heiko Carstens wrote:
| fs/ext4/mballoc.c: In function 'ext4_mb_generate_buddy':
| fs/ext4/mballoc.c:954: error: implicit declaration of function
'generic_find_next_le_bit'
The s390 specific bitops use parts of the generic
Marek Szyprowski m.szyprow...@samsung.com writes:
Hello,
On 6/24/2013 10:25 AM, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating hash page table and real mode area for
PPC64. Hence move DMA contiguous related changes
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating hash page table and real mode area for
PPC64. Hence move the DMA contiguous related changes into a separate config
so that ppc64 can enable CMA without requiring DMA contiguous.
Signed-off-by: Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Use CMA for allocation of guest hash page.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 -
arch/powerpc/include/asm/kvm_host.h | 2 +-
arch/powerpc/include/asm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Use CMA for allocation of the RMA region for the guest. Also remove the linear
allocator now that it is not used
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 +
arch/powerpc/include
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Both RMA and hash page table requests will be a multiple of 256K. We can use
a chunk size of 256K to track the free/used 256K chunks in the bitmap. This
should help to reduce the bitmap size.
Signed-off-by: Aneesh Kumar K.V aneesh.ku
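A minimal userspace sketch of the accounting described above: one bit per 256K
chunk, so a request that is a 256K multiple consumes exactly size/256K bits.
All names and sizes here are illustrative, not the kernel's.

#include <stdio.h>

#define CHUNK_SHIFT   18                     /* 256K = 1 << 18 */
#define CHUNK_SIZE    (1UL << CHUNK_SHIFT)
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* One bit per 256K chunk: a 16MB region needs only 64 bits. */
#define REGION_SIZE   (16UL << 20)
#define NCHUNKS       (REGION_SIZE / CHUNK_SIZE)

static unsigned long bitmap[(NCHUNKS + BITS_PER_LONG - 1) / BITS_PER_LONG];

static void mark_used(unsigned long offset, unsigned long size)
{
	unsigned long first = offset >> CHUNK_SHIFT;
	unsigned long n = size >> CHUNK_SHIFT;   /* size is a 256K multiple */

	for (unsigned long i = first; i < first + n; i++)
		bitmap[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
}

int main(void)
{
	/* A 512K request consumes exactly two chunk bits. */
	mark_used(0, 2 * CHUNK_SIZE);
	printf("chunks used: %lx\n", bitmap[0]);  /* prints 3 */
	return 0;
}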
Michal Nazarewicz min...@mina86.com writes:
On Fri, Jun 28 2013, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating hash page table and real mode area for
PPC64. Hence move the DMA contiguous related changes into a separate config
so
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating hash page table and real mode area for
PPC64. Hence move the DMA contiguous related changes into a separate config
so that ppc64 can enable CMA without requiring DMA contiguous.
Acked-by: Michal Nazarewicz min
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The Powerpc architecture uses a hash based page table mechanism for mapping virtual
addresses to physical addresses. The architecture requires this hash page table to
be physically contiguous. With KVM on Powerpc we currently use early reservation
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Both RMA and hash page table requests will be a multiple of 256K. We can use
a chunk size of 256K to track the free/used 256K chunks in the bitmap. This
should help to reduce the bitmap size.
Signed-off-by: Aneesh Kumar K.V aneesh.ku
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Older versions of the Power architecture use the Real Mode Offset register and
Real Mode Limit
Selector for mapping the guest Real Mode Area. The guest RMA should be physically
contiguous since we use the range when address translation is not enabled
Alexander Graf ag...@suse.de writes:
On 07/02/2013 07:45 AM, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Older versions of the Power architecture use the Real Mode Offset register and
Real Mode Limit
Selector for mapping the guest Real Mode Area. The guest RMA should
Alexander Graf ag...@suse.de writes:
On 07/02/2013 07:45 AM, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The Powerpc architecture uses a hash based page table mechanism for mapping
virtual
addresses to physical addresses. The architecture requires this hash page
Marek Szyprowski m.szyprow...@samsung.com writes:
Hello,
On 7/2/2013 7:45 AM, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating hash page table and real mode area for
PPC64. Hence move DMA contiguous related changes
Alexander Graf ag...@suse.de writes:
On 07/02/2013 05:29 PM, Aneesh Kumar K.V wrote:
Alexander Graf ag...@suse.de writes:
On 07/02/2013 07:45 AM, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Older versions of the Power architecture use the Real Mode Offset register
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We should not fall through different case statements in hpte_decode. Add a
break statement to break out of the switch. The regression was introduced by
dcda287a9b26309ae43a091d0ecde16f8f61b4c0 powerpc/mm: Simplify hpte_decode
Reported-by: Paul
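For illustration, the shape of the fix: without the break, control falls into
the next case and silently overwrites the value just computed. This is a
standalone sketch, not the kernel's actual hpte_decode().

#include <stdio.h>

enum { MMU_PAGE_4K, MMU_PAGE_64K };

static int page_shift(int psize)
{
	int shift = 0;

	switch (psize) {
	case MMU_PAGE_4K:
		shift = 12;
		break;	/* without this break we fall through and ... */
	case MMU_PAGE_64K:
		shift = 16;
		break;	/* ... the later case silently clobbers shift */
	}
	return shift;
}

int main(void)
{
	printf("%d\n", page_shift(MMU_PAGE_4K));	/* 12, not 16 */
	return 0;
}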
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The sllp value is stored in mmu_psize_defs in such a way that we can easily OR
the value to get the operand for the slbmte instruction, i.e., the L and LP bits
are not contiguous. Decode the bits and use them correctly in tlbie.
regression
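A hedged sketch of the decode step being described: the packed value is split
into its two non-contiguous fields and recombined into a contiguous one. The
bit positions below are assumptions for the demo, not the real SLB/tlbie
encoding.

#include <stdio.h>

#define SLLP_L_BIT	(1UL << 8)	/* assumed position of L  */
#define SLLP_LP_MASK	(0x3UL << 4)	/* assumed position of LP */

static unsigned long decode_for_tlbie(unsigned long sllp)
{
	unsigned long l  = (sllp & SLLP_L_BIT) >> 8;
	unsigned long lp = (sllp & SLLP_LP_MASK) >> 4;

	/* Recombine into the contiguous layout the other user expects. */
	return (l << 2) | lp;
}

int main(void)
{
	printf("%lx\n", decode_for_tlbie(SLLP_L_BIT | (0x1UL << 4)));
	return 0;
}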
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The sllp value is stored in mmu_psize_defs in such a way that we can easily OR
the value to get the operand for the slbmte instruction, i.e., the L and LP bits
are not contiguous. Decode the bits and use them correctly in tlbie.
regression
Mahesh J Salgaonkar mah...@linux.vnet.ibm.com writes:
From: Mahesh Salgaonkar mah...@linux.vnet.ibm.com
During a Machine Check interrupt on the pseries platform, R3 generally points to
a memory region inside the RTAS (FWNMI) area. We see r3 corruption because when RTAS
delivers the machine check
Denis Kirjanov k...@linux-powerpc.org writes:
Fix a typo in pSeries_lpar_hpte_insert()
Signed-off-by: Denis Kirjanov k...@linux-powerpc.org
looks good
Reviewed-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We may want to add the commit that introduced the change
Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com writes:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The sllp value is stored in mmu_psize_defs in such a way that we can easily OR
the value to get the operand for the slbmte instruction, i.e., the L and LP bits
are not contiguous. Decode
Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com writes:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We should not fall through different case statements in hpte_decode. Add a
break statement to break out of the switch. The regression was introduced
Michael Ellerman mich...@ellerman.id.au writes:
On Wed, Aug 07, 2013 at 09:31:00AM +1000, Michael Neuling wrote:
Anton Blanchard an...@samba.org wrote:
This is the pseries_defconfig with CONFIG_CPU_LITTLE_ENDIAN enabled
and CONFIG_VIRTUALIZATION disabled (required until we fix some
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Otherwise we would clear the pvr value
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/book3s_hv.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We should be able to copy up to count bytes
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We should be able to copy up to count bytes
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Otherwise we would clear the pvr value
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/book3s_hv.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c
Alexander Graf ag...@suse.de writes:
On 22.08.2013, at 12:37, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Isn't this you?
Yes. The patches are generated using git format-patch and sent by
git send-email. That's how it always created patches for me. I am
Alexander Graf ag...@suse.de writes:
On 23.08.2013, at 04:31, Aneesh Kumar K.V wrote:
Alexander Graf ag...@suse.de writes:
On 22.08.2013, at 12:37, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Isn't this you?
Yes. The patches are generated using
Alexander Graf ag...@suse.de writes:
On 26.08.2013, at 05:28, Aneesh Kumar K.V wrote:
Alexander Graf ag...@suse.de writes:
On 23.08.2013, at 04:31, Aneesh Kumar K.V wrote:
Alexander Graf ag...@suse.de writes:
On 22.08.2013, at 12:37, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
stack_grow_into/14082 is trying to acquire lock:
(&mm->mmap_sem){++}, at: [c0206d28] .might_fault+0x78/0xe0
but task is already holding lock:
(&mm->mmap_sem){++}, at: [c07ffd8c] .do_page_fault+0x24c/0x910
other
Paul Mackerras pau...@samba.org writes:
On Thu, Sep 05, 2013 at 12:47:02PM +0530, Aneesh Kumar K.V wrote:
@@ -280,6 +280,13 @@ int __kprobes do_page_fault(struct pt_regs *regs,
unsigned long address,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+/*
+ * We
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
stack_grow_into/14082 is trying to acquire lock:
(&mm->mmap_sem){++}, at: [c0206d28] .might_fault+0x78/0xe0
but task is already holding lock:
(&mm->mmap_sem){++}, at: [c07ffd8c] .do_page_fault+0x24c/0x910
other
Benjamin Herrenschmidt b...@kernel.crashing.org writes:
On Thu, 2013-09-05 at 17:18 +0530, Aneesh Kumar K.V wrote:
Paul Mackerras pau...@samba.org writes:
On Thu, Sep 05, 2013 at 12:47:02PM +0530, Aneesh Kumar K.V wrote:
@@ -280,6 +280,13 @@ int __kprobes do_page_fault(struct pt_regs
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
stack_grow_into/14082 is trying to acquire lock:
(&mm->mmap_sem){++}, at: [c0206d28] .might_fault+0x78/0xe0
but task is already holding lock:
(&mm->mmap_sem){++}, at: [c07ffd8c] .do_page_fault+0x24c/0x910
other
Hi Alex,
Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com writes:
Ok, please give me an example with real numbers and why it breaks.
http://mid.gmane.org/1376995766-16526-4-git-send-email-aneesh.ku...@linux.vnet.ibm.com
Didn't quite get what you are looking for. As explained before
From: Paul Mackerras pau...@samba.org
Add kvmppc_free_vcores() to free the kvmppc_vcore structures
that we allocate for a guest, which are currently being leaked.
Signed-off-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This moves /dev/kvm ownership to the kvm.ko module. Depending on
which KVM mode we select during VM creation we take a reference
count on the respective module
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include
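A sketch of the reference-counting idea using the kernel's try_module_get()/
module_put(): when a VM is created, pin the module implementing the chosen
mode so it cannot be unloaded while the VM exists. The helper names here are
hypothetical; this is only the outline of the approach, not the patch itself.

#include <linux/errno.h>
#include <linux/module.h>

static int pin_kvm_backend(struct module *owner)
{
	/* Fails if the module is already on its way out. */
	if (!try_module_get(owner))
		return -ENOENT;
	return 0;
}

static void unpin_kvm_backend(struct module *owner)
{
	module_put(owner);	/* drop the reference at VM destroy */
}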
From: Paul Mackerras pau...@samba.org
This label is not used now.
Signed-off-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/book3s_hv_interrupts.S | 3 ---
arch/powerpc/kvm/book3s_interrupts.S| 3 ---
2 files changed
Hi All,
This patch series supports enabling HV and PR KVM together in the same kernel. We
extend the machine property with a new property, kvm_type. A value of 1 will force HV
KVM and 2 PR KVM. The default value is 0, which will select the fastest KVM mode,
i.e., HV if that is supported, otherwise PR.
With
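A small sketch of the selection policy described above, with illustrative
names; only the 0/1/2 values come from the cover letter.

#include <stdbool.h>
#include <stdio.h>

enum kvm_type { KVM_TYPE_DEFAULT = 0, KVM_TYPE_HV = 1, KVM_TYPE_PR = 2 };

static const char *select_mode(enum kvm_type type, bool hv_possible)
{
	switch (type) {
	case KVM_TYPE_HV:
		return "HV";			/* 1 forces HV KVM */
	case KVM_TYPE_PR:
		return "PR";			/* 2 forces PR KVM */
	default:
		/* 0: fastest available mode, HV if supported else PR */
		return hv_possible ? "HV" : "PR";
	}
}

int main(void)
{
	printf("%s\n", select_mode(KVM_TYPE_DEFAULT, true));	/* HV */
	return 0;
}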
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This helps us to select the relevant code in the kernel
when we later move the HV and PR bits into separate modules.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s_64.h | 6
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/Kconfig | 6 +++---
arch/powerpc/kvm/Makefile | 12
arch/powerpc/kvm/book3s.c | 19
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This helps us identify whether we are running with hypervisor mode KVM
enabled. The change is needed so that we can have both HV and PR kvm
enabled in the same kernel.
If both HV and PR KVM are included, interrupts come in to the HV
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/arm/kvm/arm.c | 4 ++--
arch/ia64/kvm/kvm-ia64.c | 4 ++--
arch/mips/kvm/kvm_mips.c | 6 ++
arch/powerpc/include/asm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This moves the kvmppc_ops callbacks to be a per-VM entity. This
enables us to select HV or PR mode when creating a VM
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_host.h | 3 ++
arch
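A minimal standalone sketch of the per-VM ops idea: the callback table becomes
a field of the VM rather than a global, so each VM dispatches through its own
backend. All structure names here are illustrative.

#include <stdio.h>

struct vm;

struct vm_ops {
	const char *name;
	int (*init_vm)(struct vm *vm);
};

struct vm {
	const struct vm_ops *ops;	/* chosen at VM creation time */
};

static int hv_init(struct vm *vm) { (void)vm; return 0; }
static const struct vm_ops hv_ops = { .name = "HV", .init_vm = hv_init };

int main(void)
{
	struct vm vm = { .ops = &hv_ops };

	vm.ops->init_vm(&vm);		/* dispatch through per-VM table */
	printf("%s\n", vm.ops->name);
	return 0;
}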
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch moves the PR related tracepoints to a separate header. This
enables converting PR to a kernel module, which will be done in
later patches
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This moves the HV and PR specific functions to kvmppc_ops callbacks.
This is needed so that we can enable HV and PR together in the
same kernel. Actual changes to enable both come in a later
patch. This also renames almost all of the symbols
an EXPORT_SYMBOL_GPL().
Signed-off-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/kvm/Makefile | 12
arch/powerpc/kvm/book3s_64_vio_hv.c | 2 ++
2 files changed, 10 insertions(+), 4 deletions(-)
diff
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Even though we have the same value for Linux PTE bits and hash PTE bits,
use the hash PTE bits when updating the hash PTE
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/cell/beat_htab.c | 4 ++--
arch
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
With only the hash 64 config we always set memory coherence. If
a platform cannot have memory coherence always set, it
can infer that from _PAGE_NO_CACHE and _PAGE_WRITETHRU,
like in lpar. So we don't really need a separate bit
for tracking
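A sketch of that inference, with illustrative bit values standing in for the
real pte flag definitions: a mapping is treated as coherent memory unless it
is marked no-cache or write-through.

#include <stdbool.h>
#include <stdio.h>

#define _PAGE_NO_CACHE	0x020UL		/* illustrative bit values */
#define _PAGE_WRITETHRU	0x040UL

static bool wants_memory_coherence(unsigned long pte_flags)
{
	/* Coherence is implied unless no-cache or write-through is set. */
	return !(pte_flags & (_PAGE_NO_CACHE | _PAGE_WRITETHRU));
}

int main(void)
{
	printf("%d\n", wants_memory_coherence(0));		/* 1 */
	printf("%d\n", wants_memory_coherence(_PAGE_NO_CACHE));	/* 0 */
	return 0;
}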
Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com writes:
Hi All,
This patch series supports enabling HV and PR KVM together in the same kernel.
We extend the machine property with a new property, kvm_type. A value of 1 will
force HV KVM and 2 PR KVM. The default value is 0 which will select
Hi,
This patchset includes preparatory patches for supporting 64TB with ppc64. I haven't
completed the actual patch that bumps the USER_ESID bits. I wanted to share the
changes early so that I can get feedback on the approach. The changes themselves
contain a few FIXME!! markers which I will be addressing in
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Don't open code the same
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/cell/beat_htab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/cell
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This is in preparation for the conversion of the 64 bit powerpc virtual address
to the max 78 bits. A later patch will switch struct virt_addr to a struct
of virtual segment id and segment offset.
Signed-off-by: Aneesh Kumar K.V aneesh.ku
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch simplifies hpte_decode for easy switching of the virtual address to
a vsid and segment offset combination in a later patch
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hash_native_64.c | 51
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch enables us to have a 78 bit virtual address.
With 1TB segments we use 40 bits of the virtual address as the segment offset and
the remaining 24 bits (of the current 64 bit virtual address) are used
to index the virtual segment. Out of the 24
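A sketch of the described split, assuming the 40-bit segment offset for 1TB
segments; the 64-bit unsigned long here only demonstrates the layout, since
the full 78-bit address needs the vsid and offset kept as separate fields.

#include <stdio.h>

#define SID_SHIFT_1T	40
#define SEG_OFF_MASK	((1UL << SID_SHIFT_1T) - 1)

int main(void)
{
	unsigned long vsid = 0x12345;		/* virtual segment id */
	unsigned long off  = 0xabcdeUL;		/* offset within the 1TB segment */
	unsigned long va   = (vsid << SID_SHIFT_1T) | off;

	printf("vsid=%lx seg_off=%lx\n",
	       va >> SID_SHIFT_1T, va & SEG_OFF_MASK);
	return 0;
}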
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
As we keep increasing PGTABLE_RANGE we need not increase the virtual
map area for the kernel.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable-ppc64.h |2 +-
1 file changed, 1 insertion
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch makes the high psizes mask an unsigned char array
so that we can have more than 16TB. Currently we support up to
64TB
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h
Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com writes:
Hi,
This patchset includes preparatory patches for supporting 64TB with ppc64. I
haven't completed the actual patch that bumps the USER_ESID bits. I wanted to
share the changes early so that I can get feedback on the approach
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Don't open code the same
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/cell/beat_htab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/cell
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This is in preparation for the conversion of the 64 bit powerpc virtual address
to the max 78 bits. A later patch will switch struct virt_addr to a struct
of virtual segment id and segment offset.
Signed-off-by: Aneesh Kumar K.V aneesh.ku
Hi,
This patchset includes patches for supporting 64TB with ppc64. I haven't booted
this on hardware with 64TB memory yet. But they boot fine on real hardware with
less memory. Changes extend VSID bits to 38 bits for a 256MB segment
and 26 bits for 1TB segments.
The patches are not for
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch enables us to have a 78 bit virtual address.
With 1TB segments we use 40 bits of the virtual address as the segment offset and
the remaining 24 bits (of the current 64 bit virtual address) are used
to index the virtual segment. Out of the 24
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
As we keep increasing PGTABLE_RANGE we need not increase the virtual
map area for the kernel.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable-ppc64.h |2 +-
1 file changed, 1 insertion
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch simplifies hpte_decode for easy switching of the virtual address to
a vsid and segment offset combination in a later patch
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hash_native_64.c | 51
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch makes the high psizes mask an unsigned char array
so that we can have more than 16TB. Currently we support up to
64TB
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Increase the number of valid VSID bits in the slbmte instruction.
We will use the new bits when we increase valid VSID bits.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/slb_low.S |4 ++--
1 file
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
With a larger VSID we need to track more bits of the ESID in the SLB cache
for SLB invalidation.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/paca.h |2 +-
arch/powerpc/mm/slb_low.S |8
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Increase max addressable range to 64TB. This is not tested on
real hardware yet.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h|8
arch/powerpc/include/asm
Benjamin Herrenschmidt b...@kernel.crashing.org writes:
On Fri, 2012-06-29 at 19:47 +0530, Aneesh Kumar K.V wrote:
+/* 78 bit power virtual address */
struct virt_addr {
- unsigned long addr;
+ unsigned long vsid;
+ unsigned long seg_off;
};
Do we really need to do
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Don't open code the same
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/cell/beat_htab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/cell
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Don't open code the same
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/cell/beat_htab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/cell
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch simplifies hpte_decode for easy switching of the virtual address to
the virtual page number in a later patch
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hash_native_64.c | 49
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch makes the high psizes mask an unsigned char array
so that we can have more than 16TB. Currently we support up to
64TB
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch converts different functions to take a virtual page number
instead of a virtual address. The virtual page number is the virtual address
shifted right by VPN_SHIFT (12) bits. This enables us to have an
address range of up to 76 bits.
Signed-off
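A minimal sketch of the conversion: VPN_SHIFT of 12 comes from the patch
description, and shifting before storing is what extends the representable
range (64 stored bits + 12 shifted-out bits = 76 bits).

#include <stdio.h>

#define VPN_SHIFT 12

int main(void)
{
	unsigned long va  = 0xdead0000UL;
	unsigned long vpn = va >> VPN_SHIFT;	/* virtual page number */

	printf("va=%lx vpn=%lx\n", va, vpn);
	return 0;
}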
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
With a larger VSID we need to track more bits of the ESID in the SLB cache
for SLB invalidation.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/paca.h |2 +-
arch/powerpc/mm/slb_low.S |8
Hi,
This patchset includes patches for supporting 64TB with ppc64. I haven't booted
this on hardware with 64TB memory yet. But they boot fine on real hardware with
less memory. Changes extend VSID bits to 38 bits for a 256MB segment
and 26 bits for 1TB segments.
Changes from V1:
* Drop the usage
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
As we keep increasing PGTABLE_RANGE we need not increase the virtual
map area for the kernel.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable-ppc64.h |2 +-
1 file changed, 1 insertion
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Increase max addressable range to 64TB. This is not tested on
real hardware yet.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h|8
arch/powerpc/include/asm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Rename the variable to better reflect the values. No functional change
in this patch.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s.h |2 +-
arch/powerpc/include/asm/machdep.h
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Increase the number of valid VSID bits in the slbmte instruction.
We will use the new bits when we increase valid VSID bits.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/slb_low.S |4 ++--
1 file
Hi,
This patchset includes patches for supporting 64TB with ppc64. I haven't booted
this on hardware with 64TB memory yet. But they boot fine on real hardware with
less memory. Changes extend VSID bits to 38 bits for a 256MB segment
and 26 bits for 1TB segments.
Changes from V2:
* Fix few
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch converts different functions to take a virtual page number
instead of a virtual address. The virtual page number is the virtual address
shifted right by VPN_SHIFT (12) bits. This enables us to have an
address range of up to 76 bits.
Signed-off
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The ISA doc doesn't talk about this. As per the ISA doc, for a 4K page
tlbie RB RS
The Abbreviated Virtual Address (AVA) field in register RB must
contain bits 14:65 of the virtual address translated by the TLB
entry to be invalidated
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Rename the variable to better reflect the values. No functional change
in this patch.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/kvm_book3s.h |2 +-
arch/powerpc/include/asm/machdep.h
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch simplifies hpte_decode for easy switching of the virtual address to
the virtual page number in a later patch
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/hash_native_64.c | 49
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
As we keep increasing PGTABLE_RANGE we need not increase the virtual
map area for the kernel.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable-ppc64.h |2 +-
1 file changed, 1 insertion
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Don't open code the same
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/platforms/cell/beat_htab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/cell
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch makes the high psizes mask an unsigned char array
so that we can have more than 16TB. Currently we support up to
64TB
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
With a larger VSID we need to track more bits of the ESID in the SLB cache
for SLB invalidation.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/paca.h |2 +-
arch/powerpc/mm/slb_low.S |8
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Increase max addressable range to 64TB. This is not tested on
real hardware yet.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/mmu-hash64.h|8
arch/powerpc/include/asm
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Increase the number of valid VSID bits in the slbmte instruction.
We will use the new bits when we increase valid VSID bits.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm/slb_low.S |4 ++--
1 file
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The slice array size and slice mask size depend on PGTABLE_RANGE. We
can't directly include pgtable.h in these headers because there is
a circular dependency. So add compile-time checks for these values.
Signed-off-by: Aneesh Kumar K.V aneesh.ku
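A sketch of such a compile-time check; the kernel would use BUILD_BUG_ON,
shown here with C11 _Static_assert so it stands alone. The constants are
illustrative stand-ins for the real definitions.

#define SLICE_ARRAY_SIZE	32	/* stand-in for the mm header value */
#define PGTABLE_RANGE		(1UL << 46)
#define SLICE_SIZE_FROM_RANGE	(PGTABLE_RANGE >> 41)	/* illustrative */

/* Fails the build, not the boot, if the two definitions drift apart. */
_Static_assert(SLICE_ARRAY_SIZE >= SLICE_SIZE_FROM_RANGE,
	       "slice array too small for PGTABLE_RANGE");

int main(void) { return 0; }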
Stephen Rothwell s...@canb.auug.org.au writes:
Hi Aneesh,
On Mon, 9 Jul 2012 18:43:33 +0530 Aneesh Kumar K.V
aneesh.ku...@linux.vnet.ibm.com wrote:
diff --git a/arch/powerpc/include/asm/mmu-hash64.h
b/arch/powerpc/include/asm/mmu-hash64.h
index 1c65a59..1c984a6 100644
--- a/arch
Stephen Rothwell s...@canb.auug.org.au writes:
Hi Aneesh,
On Mon, 9 Jul 2012 18:43:33 +0530 Aneesh Kumar K.V
aneesh.ku...@linux.vnet.ibm.com wrote:
diff --git a/arch/powerpc/mm/hash_native_64.c
b/arch/powerpc/mm/hash_native_64.c
index 660b8bb..308e29d 100644
--- a/arch/powerpc/mm
Paul Mackerras pau...@samba.org writes:
On Mon, Jul 09, 2012 at 06:43:32PM +0530, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch simplifies hpte_decode for easy switching of the virtual address to
the virtual page number in a later patch
Signed-off
Paul Mackerras pau...@samba.org writes:
On Mon, Jul 09, 2012 at 06:43:33PM +0530, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This patch converts different functions to take a virtual page number
instead of a virtual address. The virtual page number is the virtual
Paul Mackerras pau...@samba.org writes:
On Mon, Jul 09, 2012 at 06:43:34PM +0530, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Rename the variable to better reflect the values. No functional change
in this patch.
Signed-off-by: Aneesh Kumar K.V aneesh.ku
Paul Mackerras pau...@samba.org writes:
On Mon, Jul 23, 2012 at 11:22:08AM +1000, Benjamin Herrenschmidt wrote:
On Mon, 2012-07-23 at 09:56 +1000, Paul Mackerras wrote:
That indicates we should not mask the top 16 bits. So remove the
same.
Older versions of the architecture (2.02 and