[PATCH -trivial 1/4] powerpc/simpleboot: Spelling s/trucate/truncate/

2014-06-29 Thread Geert Uytterhoeven
Signed-off-by: Geert Uytterhoeven ge...@linux-m68k.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/boot/simpleboot.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/boot/simpleboot.c b/arch/powerpc/boot/simpleboot.c
index 21cd48074ec8..9f8c678f0d9a 100644
--- a/arch/powerpc/boot/simpleboot.c
+++ b/arch/powerpc/boot/simpleboot.c
@@ -61,7 +61,7 @@ void platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
if (*reg++ != 0)
fatal("Memory range is not based at address 0\n");
 
-   /* get the memsize and trucate it to under 4G on 32 bit machines */
+   /* get the memsize and truncate it to under 4G on 32 bit machines */
memsize64 = 0;
for (i = 0; i < *ns; i++)
memsize64 = (memsize64 << 32) | *reg++;
-- 
1.9.1


[PATCH 1/6] KVM: PPC: BOOK3S: HV: Clear hash pte bits from do_h_enter callers

2014-06-29 Thread Aneesh Kumar K.V
We will use this to set the HPTE_V_VRMA bit in a later patch. This also
makes sure we clear the hpte bits only when called via an hcall.

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c | 15 +--
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |  8 ++--
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 09a47aeb5b63..1c137f45dd55 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -371,8 +371,6 @@ long kvmppc_virtmode_do_h_enter(struct kvm *kvm, unsigned long flags,
if (!psize)
return H_PARAMETER;
 
-   pteh &= ~(HPTE_V_HVLOCK | HPTE_V_ABSENT | HPTE_V_VALID);
-
/* Find the memslot (if any) for this address */
gpa = (ptel & HPTE_R_RPN) & ~(psize - 1);
gfn = gpa >> PAGE_SHIFT;
@@ -408,6 +406,12 @@ long kvmppc_virtmode_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 long pte_index, unsigned long pteh,
 unsigned long ptel)
 {
+   /*
+* Clear a few bits when called via an hcall
+*/
+   pteh &= ~(HPTE_V_HVLOCK | HPTE_V_ABSENT | HPTE_V_VALID);
+   ptel &= ~(HPTE_R_KEY_HI | HPTE_R_KEY_LO | HPTE_GR_RESERVED);
+
return kvmppc_virtmode_do_h_enter(vcpu->kvm, flags, pte_index,
  pteh, ptel, &vcpu->arch.gpr[4]);
 }
@@ -1560,6 +1564,13 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
if (be64_to_cpu(hptp[0]) & (HPTE_V_VALID | HPTE_V_ABSENT))
kvmppc_do_h_remove(kvm, 0, i, 0, tmp);
err = -EIO;
+   /*
+* Clear a few bits we got via read_htab which we
+* don't need to carry forward.
+*/
+   v &= ~(HPTE_V_HVLOCK | HPTE_V_ABSENT | HPTE_V_VALID);
+   r &= ~(HPTE_R_KEY_HI | HPTE_R_KEY_LO | HPTE_GR_RESERVED);
+
ret = kvmppc_virtmode_do_h_enter(kvm, H_EXACT, i, v, r,
 tmp);
if (ret != H_SUCCESS) {
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 084ad54c73cd..157a5f35edfa 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -182,8 +182,6 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
if (!psize)
return H_PARAMETER;
writing = hpte_is_writable(ptel);
-   pteh &= ~(HPTE_V_HVLOCK | HPTE_V_ABSENT | HPTE_V_VALID);
-   ptel &= ~HPTE_GR_RESERVED;
g_ptel = ptel;
 
/* used later to detect if we might have been invalidated */
@@ -367,6 +365,12 @@ EXPORT_SYMBOL_GPL(kvmppc_do_h_enter);
 long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
long pte_index, unsigned long pteh, unsigned long ptel)
 {
+   /*
+* Clear a few bits when called via an hcall.
+*/
+   pteh &= ~(HPTE_V_HVLOCK | HPTE_V_ABSENT | HPTE_V_VALID);
+   ptel &= ~(HPTE_R_KEY_HI | HPTE_R_KEY_LO | HPTE_GR_RESERVED);
+
return kvmppc_do_h_enter(vcpu->kvm, flags, pte_index, pteh, ptel,
 vcpu->arch.pgdir, true, &vcpu->arch.gpr[4]);
 }
-- 
1.9.1


[PATCH 0/6] Use virtual page class key protection mechanism for speeding up guest page fault

2014-06-29 Thread Aneesh Kumar K.V
Hi,

With the current code we do an expensive hash page table lookup on every
page fault resulting from a missing hash page table entry. A NO_HPTE
page fault can happen for the reasons below:

1) Missing hash pte as per guest. This should be forwarded to the guest
2) MMIO hash pte. The address against which the load/store is performed
   should be emulated as an MMIO operation.
3) Missing hash pte because host swapped out the guest page.

We want to differentiate (1) from (2) and (3) so that we can speed up
page fault due to (1). Optimizing (1) will help in improving
the overall performance because that covers a large percentage of
the page faults.

To achieve the above we use the virtual page class protection mechanism for
covering (2) and (3). For both of the above cases we mark the hpte
valid, but associate the page with virtual page class index 30 or 31.
The authority mask register is configured such that class indexes 30 and 31
have read/write denied. The above change results in a key fault
for (2) and (3). This allows us to forward a NO_HPTE fault directly to the guest
without doing the expensive hash page table lookup.
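As an illustration of the mechanism (a hedged sketch, not code from the
series; the macro and helper names are hypothetical): the AMR holds two
bits per class key, key 0 in the topmost pair, and 0b11 denies both read
and write, so denying classes 30 and 31 sets the low nibble of the AMR.

#define AMR_KEY_DENY_RW(key)	(0x3UL << (62 - 2 * (key)))

static inline unsigned long host_fault_amr(void)
{
	/* deny read/write for virtual page class keys 30 and 31 */
	return AMR_KEY_DENY_RW(30) | AMR_KEY_DENY_RW(31);	/* == 0xfUL */
}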

For the test below:

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define PAGES (40*1024)

int main()
{
unsigned long size = getpagesize();
unsigned long length = size * PAGES;
unsigned long i, j, k = 0;

for (j = 0; j < 10; j++) {
char *c = mmap(NULL, length, PROT_READ|PROT_WRITE,
   MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
if (c == MAP_FAILED) {
perror("mmap");
exit(1);
}
for (i = 0; i < length; i += size)
c[i] = 0;

/* flush hptes */
mprotect(c, length, PROT_WRITE);

for (i = 0; i < length; i += size)
c[i] = 10;

mprotect(c, length, PROT_READ);

for (i = 0; i < length; i += size)
k += c[i];

munmap(c, length);
}
}

Without Fix:
--
[root@qemu-pr-host ~]# time ./pfault

real	0m8.438s
user	0m0.855s
sys	0m7.540s
[root@qemu-pr-host ~]#


With Fix:

[root@qemu-pr-host ~]# time ./pfault

real	0m7.833s
user	0m0.782s
sys	0m7.038s
[root@qemu-pr-host ~]#



Aneesh Kumar K.V (6):
  KVM: PPC: BOOK3S: HV: Clear hash pte bits from do_h_enter callers
  KVM: PPC: BOOK3S: HV: Deny virtual page class key update via h_protect
  KVM: PPC: BOOK3S: HV: Remove dead code
  KVM: PPC: BOOK3S: HV: Use new functions for mapping/unmapping hpte in
host
  KVM: PPC: BOOK3S: Use hpte_update_in_progress to track invalid hpte
during an hpte update
  KVM: PPC: BOOK3S: HV: Use virtual page class protection mechanism for
host fault and mmio

 arch/powerpc/include/asm/kvm_book3s_64.h |  97 +-
 arch/powerpc/include/asm/kvm_host.h  |   1 +
 arch/powerpc/include/asm/reg.h   |   1 +
 arch/powerpc/kernel/asm-offsets.c|   1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c  |  99 --
 arch/powerpc/kvm/book3s_hv.c |   1 +
 arch/powerpc/kvm/book3s_hv_rm_mmu.c  | 166 +--
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  | 100 +--
 8 files changed, 371 insertions(+), 95 deletions(-)

-- 
1.9.1


[PATCH] KVM: PPC: BOOK3S: HV: Update compute_tlbie_rb to handle 16MB base page

2014-06-29 Thread Aneesh Kumar K.V
When calculating the lower bits of the AVA field, use the shift
count based on the base page size. Also add the missing segment
size (B field) and remove a stale comment.
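For orientation (an illustration, not part of the patch): HPTE dword 0
carries the segment size in bits 62-63, and the tlbie RB image wants the
B field at bits 8-9, hence the new shift by (62 - 8). A hypothetical
helper isolating just that contribution:

static inline unsigned long tlbie_b_field(unsigned long v)
{
	return (v >> (62 - 8)) & (0x3UL << 8);	/* B field at RB[8:9] */
}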

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s_64.h | 6 --
 arch/powerpc/kvm/book3s_hv.c | 6 --
 2 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 66a0a44b62a8..ca7c1688a7b6 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -158,6 +158,8 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
 */
/* This covers 14..54 bits of va */
rb = (v & ~0x7fUL) << 16;   /* AVA field */
+
+   rb |= v >> (62 - 8);	/*  B field */
/*
 * AVA in v had cleared lower 23 bits. We need to derive
 * that from pteg index
@@ -188,10 +190,10 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
{
int aval_shift;
/*
-* remaining 7bits of AVA/LP fields
+* remaining bits of AVA/LP fields
 * Also contain the rr bits of LP
 */
-   rb |= (va_low & 0x7f) << 16;
+   rb |= (va_low << mmu_psize_defs[b_psize].shift) & 0x7ff000;
/*
 * Now clear not needed LP bits based on actual psize
 */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index cbf46eb3f59c..328416f28a55 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1917,12 +1917,6 @@ static void kvmppc_add_seg_page_size(struct kvm_ppc_one_seg_page_size **sps,
(*sps)->page_shift = def->shift;
(*sps)->slb_enc = def->sllp;
(*sps)->enc[0].page_shift = def->shift;
-   /*
-* Only return base page encoding. We don't want to return
-* all the supporting pte_enc, because our H_ENTER doesn't
-* support MPSS yet. Once they do, we can start passing all
-* support pte_enc here
-*/
(*sps)->enc[0].pte_enc = def->penc[linux_psize];
/*
 * Add 16MB MPSS support if host supports it
-- 
1.9.1


[PATCH 2/6] KVM: PPC: BOOK3S: HV: Deny virtual page class key update via h_protect

2014-06-29 Thread Aneesh Kumar K.V
This makes it consistent with h_enter, where we clear the key
bits. We also want to use the virtual page class key protection mechanism
for indicating a host page fault. For that we will be using key class
indexes 30 and 31. So prevent the guest from updating key bits until
we add proper support for the virtual page class protection mechanism in
the guest. This will not have any impact on PAPR Linux guests, because
Linux guests currently don't use the virtual page class key protection model.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 157a5f35edfa..f908845f7379 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -658,13 +658,17 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
}
 
v = pte;
+   /*
+* We ignore key bits here. We use class 31 and 30 for
+* hypervisor purposes. We still don't track the page
+* class separately. Until then, don't allow h_protect
+* to change key bits.
+*/
bits = (flags << 55) & HPTE_R_PP0;
-   bits |= (flags << 48) & HPTE_R_KEY_HI;
-   bits |= flags & (HPTE_R_PP | HPTE_R_N | HPTE_R_KEY_LO);
+   bits |= flags & (HPTE_R_PP | HPTE_R_N);
 
/* Update guest view of 2nd HPTE dword */
-   mask = HPTE_R_PP0 | HPTE_R_PP | HPTE_R_N |
-   HPTE_R_KEY_HI | HPTE_R_KEY_LO;
+   mask = HPTE_R_PP0 | HPTE_R_PP | HPTE_R_N;
rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
if (rev) {
r = (rev->guest_rpte & ~mask) | bits;
-- 
1.9.1


[PATCH 3/6] KVM: PPC: BOOK3S: HV: Remove dead code

2014-06-29 Thread Aneesh Kumar K.V
Since we don't support the virtual page class key protection mechanism in
the guest, we should not find a key fault that needs to be forwarded to
the guest. So remove the dead code.

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c | 9 -
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 9 -
 2 files changed, 18 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 1c137f45dd55..590e07b1a43f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -499,15 +499,6 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
gpte->may_write = hpte_write_permission(pp, key);
gpte->may_execute = gpte->may_read && !(gr & (HPTE_R_N | HPTE_R_G));
 
-   /* Storage key permission check for POWER7 */
-   if (data && virtmode && cpu_has_feature(CPU_FTR_ARCH_206)) {
-   int amrfield = hpte_get_skey_perm(gr, vcpu->arch.amr);
-   if (amrfield & 1)
-   gpte->may_read = 0;
-   if (amrfield & 2)
-   gpte->may_write = 0;
-   }
-
/* Get the guest physical address */
gpte->raddr = kvmppc_mmu_get_real_addr(v, gr, eaddr);
return 0;
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index f908845f7379..1884bff3122c 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -925,15 +925,6 @@ long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr,
return status | DSISR_PROTFAULT;
}
 
-   /* Check storage key, if applicable */
-   if (data && (vcpu->arch.shregs.msr & MSR_DR)) {
-   unsigned int perm = hpte_get_skey_perm(gr, vcpu->arch.amr);
-   if (status & DSISR_ISSTORE)
-   perm >>= 1;
-   if (perm & 1)
-   return status | DSISR_KEYFAULT;
-   }
-
/* Save HPTE info for virtual-mode handler */
vcpu->arch.pgfault_addr = addr;
vcpu->arch.pgfault_index = index;
-- 
1.9.1


[PATCH 4/6] KVM: PPC: BOOK3S: HV: Use new functions for mapping/unmapping hpte in host

2014-06-29 Thread Aneesh Kumar K.V
We want to use the virtual page class key protection mechanism for
indicating an MMIO mapped hpte entry or a guest hpte entry that is swapped out
in the host. Those hptes will be marked valid, but have the virtual page
class key set to 30 or 31. These virtual page class numbers are
configured in the AMR to deny read/write. To accommodate such a change, add
new functions that map, unmap and check whether an hpte is mapped in the
host. This patch still uses HPTE_V_VALID and HPTE_V_ABSENT and doesn't use
virtual page class keys. But we want to differentiate in the code
between places where we explicitly check for HPTE_V_VALID and places where we
want to check whether the hpte is host mapped. This patch enables a closer
review of such a change.
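As a reading aid, a hedged sketch (not from the patch) of the call-site
pattern the new helpers enable; example_invalidate() is a hypothetical name,
the other functions are from the patch and the existing code:

static void example_invalidate(struct kvm *kvm, __be64 *hptep, unsigned long i)
{
	if (kvmppc_is_host_mapped_hpte(kvm, hptep)) {
		/* mark the hpte absent first, then invalidate it */
		kvmppc_unmap_host_hpte(kvm, hptep);
		kvmppc_invalidate_hpte(kvm, hptep, i);
	}
}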

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s_64.h | 36 
 arch/powerpc/kvm/book3s_64_mmu_hv.c  | 24 +++--
 arch/powerpc/kvm/book3s_hv_rm_mmu.c  | 30 ++
 3 files changed, 66 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 0aa817933e6a..da00b1f05ea1 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -400,6 +400,42 @@ static inline int is_vrma_hpte(unsigned long hpte_v)
(HPTE_V_1TB_SEG | (VRMA_VSID << (40 - 16)));
 }
 
+static inline void __kvmppc_unmap_host_hpte(struct kvm *kvm,
+   unsigned long *hpte_v,
+   unsigned long *hpte_r,
+   bool mmio)
+{
+   *hpte_v |= HPTE_V_ABSENT;
+   if (mmio)
+   *hpte_r |= HPTE_R_KEY_HI | HPTE_R_KEY_LO;
+}
+
+static inline void kvmppc_unmap_host_hpte(struct kvm *kvm, __be64 *hptep)
+{
+   /*
+* We will never call this for MMIO
+*/
+   hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
+}
+
+static inline void kvmppc_map_host_hpte(struct kvm *kvm, unsigned long *hpte_v,
+   unsigned long *hpte_r)
+{
+   *hpte_v |= HPTE_V_VALID;
+   *hpte_v &= ~HPTE_V_ABSENT;
+}
+
+static inline bool kvmppc_is_host_mapped_hpte(struct kvm *kvm, __be64 *hpte)
+{
+   unsigned long v;
+
+   v = be64_to_cpu(hpte[0]);
+   if (v & HPTE_V_VALID)
+   return true;
+   return false;
+}
+
+
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 /*
  * Note modification of an HPTE; set the HPTE modified bit
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 590e07b1a43f..8ce5e95613f8 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -752,7 +752,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (be64_to_cpu(hptep[0]) & HPTE_V_VALID) {
/* HPTE was previously valid, so we need to invalidate it */
unlock_rmap(rmap);
-   hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
+   /* Always mark HPTE_V_ABSENT before invalidating */
+   kvmppc_unmap_host_hpte(kvm, hptep);
kvmppc_invalidate_hpte(kvm, hptep, index);
/* don't lose previous R and C bits */
r |= be64_to_cpu(hptep[1]) & (HPTE_R_R | HPTE_R_C);
@@ -897,11 +898,12 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
/* Now check and modify the HPTE */
ptel = rev[i].guest_rpte;
psize = hpte_page_size(be64_to_cpu(hptep[0]), ptel);
-   if ((be64_to_cpu(hptep[0]) & HPTE_V_VALID) &&
+   if (kvmppc_is_host_mapped_hpte(kvm, hptep) &&
hpte_rpn(ptel, psize) == gfn) {
if (kvm->arch.using_mmu_notifiers)
-   hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
+   kvmppc_unmap_host_hpte(kvm, hptep);
kvmppc_invalidate_hpte(kvm, hptep, i);
+
/* Harvest R and C */
rcbits = be64_to_cpu(hptep[1]) & (HPTE_R_R | HPTE_R_C);
*rmapp |= rcbits << KVMPPC_RMAP_RC_SHIFT;
@@ -990,7 +992,7 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
}
 
/* Now check and modify the HPTE */
-   if ((be64_to_cpu(hptep[0]) & HPTE_V_VALID) &&
+   if (kvmppc_is_host_mapped_hpte(kvm, hptep) &&
(be64_to_cpu(hptep[1]) & HPTE_R_R)) {
kvmppc_clear_ref_hpte(kvm, hptep, i);
if (!(rev[i].guest_rpte & HPTE_R_R)) {
@@ -1121,11 +1123,12 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
}
 
/* Now check and modify the HPTE */
-   if (!(hptep[0] & cpu_to_be64(HPTE_V_VALID)))
+   if 

[PATCH 5/6] KVM: PPC: BOOK3S: Use hpte_update_in_progress to track invalid hpte during an hpte update

2014-06-29 Thread Aneesh Kumar K.V
As per the ISA, we first need to mark the hpte invalid (V=0) before we update
the hpte lower half bits. With the virtual page class key protection
mechanism we want to send any fault other than a key fault to the guest
directly, without searching the hash page table. But then we can get a
NO_HPTE fault while we are updating the hpte. To track that, add a
VM-specific atomic variable that we check in the fault path to always send
the fault to the host.
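The increment side of the counter is not visible in this excerpt; a hedged
sketch of the intended protocol (the helper names are hypothetical, the
field name is from the patch):

/* bump before making the hpte invalid, drop once it is visible again */
static inline void hpte_update_begin(struct kvm *kvm)
{
	atomic_inc(&kvm->arch.hpte_update_in_progress);
}

/* fault path: route NO_HPTE faults to the host while an update runs */
static inline bool hpte_update_pending(struct kvm *kvm)
{
	return atomic_read(&kvm->arch.hpte_update_in_progress) != 0;
}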

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s_64.h |  1 +
 arch/powerpc/include/asm/kvm_host.h  |  1 +
 arch/powerpc/kernel/asm-offsets.c|  1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c  | 19 ++
 arch/powerpc/kvm/book3s_hv.c |  1 +
 arch/powerpc/kvm/book3s_hv_rm_mmu.c  | 40 +++--
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  | 60 +---
 7 files changed, 109 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index da00b1f05ea1..a6bf41865a66 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -416,6 +416,7 @@ static inline void kvmppc_unmap_host_hpte(struct kvm *kvm, __be64 *hptep)
 * We will never call this for MMIO
 */
hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
+   atomic_dec(&kvm->arch.hpte_update_in_progress);
 }
 
 static inline void kvmppc_map_host_hpte(struct kvm *kvm, unsigned long *hpte_v,
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index f9ae69682ce1..0a9ff60fae4c 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -254,6 +254,7 @@ struct kvm_arch {
atomic_t hpte_mod_interest;
spinlock_t slot_phys_lock;
cpumask_t need_tlb_flush;
+   atomic_t hpte_update_in_progress;
struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
int hpt_cma_alloc;
 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index f5995a912213..54a36110f8f2 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -496,6 +496,7 @@ int main(void)
DEFINE(KVM_LPCR, offsetof(struct kvm, arch.lpcr));
DEFINE(KVM_RMOR, offsetof(struct kvm, arch.rmor));
DEFINE(KVM_VRMA_SLB_V, offsetof(struct kvm, arch.vrma_slb_v));
+   DEFINE(KVM_HPTE_UPDATE, offsetof(struct kvm, arch.hpte_update_in_progress));
DEFINE(VCPU_DSISR, offsetof(struct kvm_vcpu, arch.shregs.dsisr));
DEFINE(VCPU_DAR, offsetof(struct kvm_vcpu, arch.shregs.dar));
DEFINE(VCPU_VPA, offsetof(struct kvm_vcpu, arch.vpa.pinned_addr));
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8ce5e95613f8..cb7a616aacb1 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -592,6 +592,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int writing, write_ok;
struct vm_area_struct *vma;
unsigned long rcbits;
+   bool hpte_invalidated = false;
 
/*
 * Real-mode code has already searched the HPT and found the
@@ -750,13 +751,15 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
r &= rcbits | ~(HPTE_R_R | HPTE_R_C);
 
if (be64_to_cpu(hptep[0]) & HPTE_V_VALID) {
-   /* HPTE was previously valid, so we need to invalidate it */
+   /*
+* If we had mapped this hpte before, we now need to
+* invalidate that.
+*/
unlock_rmap(rmap);
-   /* Always mark HPTE_V_ABSENT before invalidating */
-   kvmppc_unmap_host_hpte(kvm, hptep);
kvmppc_invalidate_hpte(kvm, hptep, index);
/* don't lose previous R and C bits */
r |= be64_to_cpu(hptep[1]) & (HPTE_R_R | HPTE_R_C);
+   hpte_invalidated = true;
} else {
kvmppc_add_revmap_chain(kvm, rev, rmap, index, 0);
}
@@ -765,6 +768,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
eieio();
hptep[0] = cpu_to_be64(hpte[0]);
asm volatile("ptesync" : : : "memory");
+   if (hpte_invalidated)
+   atomic_dec(&kvm->arch.hpte_update_in_progress);
+
preempt_enable();
if (page && hpte_is_writable(r))
SetPageDirty(page);
@@ -1128,10 +1134,9 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
/*
 * need to make it temporarily absent so C is stable
 */
-   kvmppc_unmap_host_hpte(kvm, hptep);
-   kvmppc_invalidate_hpte(kvm, hptep, i);
v = be64_to_cpu(hptep[0]);
r = 

[PATCH 6/6] KVM: PPC: BOOK3S: HV: Use virtual page class protection mechanism for host fault and mmio

2014-06-29 Thread Aneesh Kumar K.V
With this patch we use AMR classes 30 and 31 for indicating a page
fault that should be handled by the host. This includes MMIO accesses and
page faults resulting from guest RAM being swapped out in the host. This
enables us to forward the fault to the guest without doing the expensive
hash page table search to find the hpte entry. With this patch, we
mark hash ptes always valid and use class indexes 30 and 31 for key based
faults. These virtual class indexes are configured in the AMR to deny
read/write. Since the access class protection mechanism doesn't work with
the VRMA region, we need to handle that separately. We mark those HPTEs
invalid and use the software defined bit, HPTE_V_VRMA, to differentiate
them.

NOTE: We still need to handle protection faults in the host so that a
write to a KSM shared page is handled in the host.
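For orientation, a hedged sketch of how the 5-bit class key can be
recovered from the second HPTE dword, assuming the usual
HPTE_R_KEY_HI/HPTE_R_KEY_LO field layout; the helper name is hypothetical.
With the defines in this patch, the host and MMIO unmap keys decode to 31
and 30 respectively.

static inline int hpte_class_key(unsigned long hpte_r)
{
	return ((hpte_r & HPTE_R_KEY_HI) >> 57) |
	       ((hpte_r & HPTE_R_KEY_LO) >> 9);
}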

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/kvm_book3s_64.h | 80 +++-
 arch/powerpc/include/asm/reg.h   |  1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c  | 48 ++-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c  | 69 ++-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  | 52 -
 5 files changed, 194 insertions(+), 56 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index a6bf41865a66..4aa9c3601fe8 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -48,7 +48,18 @@ extern unsigned long kvm_rma_pages;
  * HPTEs.
  */
 #define HPTE_V_HVLOCK  0x40UL
-#define HPTE_V_ABSENT  0x20UL
+/*
+ * VRMA mapping
+ */
+#define HPTE_V_VRMA	0x20UL
+
+#define HPTE_R_HOST_UNMAP_KEY  0x3000000000000e00UL
+/*
+ * We use this to differentiate between an MMIO key fault and
+ * a key fault resulting from host swapping out the page.
+ */
+#define HPTE_R_MMIO_UNMAP_KEY  0x3000000000000c00UL
+
 
 /*
  * We use this bit in the guest_rpte field of the revmap entry
@@ -405,35 +416,82 @@ static inline void __kvmppc_unmap_host_hpte(struct kvm *kvm,
unsigned long *hpte_r,
bool mmio)
 {
-   *hpte_v |= HPTE_V_ABSENT;
-   if (mmio)
-   *hpte_r |= HPTE_R_KEY_HI | HPTE_R_KEY_LO;
+   /*
+* We unmap on the host by adding the page to AMR class 31,
+* which has both read and write access denied.
+*
+* For the VRMA area we mark them invalid.
+*
+* If we are not using mmu_notifiers we don't use access
+* class protection.
+*
+* Since we are not changing the hpt directly we don't
+* worry about update ordering.
+*/
+   if ((*hpte_v & HPTE_V_VRMA) || !kvm->arch.using_mmu_notifiers)
+   *hpte_v &= ~HPTE_V_VALID;
+   else if (!mmio) {
+   *hpte_r |= HPTE_R_HOST_UNMAP_KEY;
+   *hpte_v |= HPTE_V_VALID;
+   } else {
+   *hpte_r |= HPTE_R_MMIO_UNMAP_KEY;
+   *hpte_v |= HPTE_V_VALID;
+   }
 }
 
 static inline void kvmppc_unmap_host_hpte(struct kvm *kvm, __be64 *hptep)
 {
+   unsigned long pte_v, pte_r;
+
+   pte_v = be64_to_cpu(hptep[0]);
+   pte_r = be64_to_cpu(hptep[1]);
/*
 * We will never call this for MMIO
 */
-   hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
+   __kvmppc_unmap_host_hpte(kvm, pte_v, pte_r, 0);
+   hptep[1] = cpu_to_be64(pte_r);
+   eieio();
+   hptep[0] = cpu_to_be64(pte_v);
+   asm volatile("ptesync" : : : "memory");
+   /*
+* we have now successfully marked the hpte using key bits
+*/
atomic_dec(&kvm->arch.hpte_update_in_progress);
 }
 
 static inline void kvmppc_map_host_hpte(struct kvm *kvm, unsigned long *hpte_v,
unsigned long *hpte_r)
 {
-   *hpte_v |= HPTE_V_VALID;
-   *hpte_v &= ~HPTE_V_ABSENT;
+   /*
+* We will never try to map an MMIO region
+*/
+   if ((*hpte_v & HPTE_V_VRMA) || !kvm->arch.using_mmu_notifiers)
+   *hpte_v |= HPTE_V_VALID;
+   else {
+   /*
+* When we allow guest keys we should set this with key
+* for this page.
+*/
+   *hpte_r &= ~(HPTE_R_KEY_HI | HPTE_R_KEY_LO);
+   }
 }
 
 static inline bool kvmppc_is_host_mapped_hpte(struct kvm *kvm, __be64 *hpte)
 {
-   unsigned long v;
+   unsigned long v, r;
 
v = be64_to_cpu(hpte[0]);
-   if (v & HPTE_V_VALID)
-   return true;
-   return false;
+   if ((v & HPTE_V_VRMA) || !kvm->arch.using_mmu_notifiers)
+   return v & HPTE_V_VALID;
+
+   r = be64_to_cpu(hpte[1]);
+   if (!(v & HPTE_V_VALID))
+   return false;
+   if ((r & (HPTE_R_KEY_HI | HPTE_R_KEY_LO)) == HPTE_R_HOST_UNMAP_KEY)
+   return false;
+   if ((r & (HPTE_R_KEY_HI | HPTE_R_KEY_LO)) == 

Re: [PATCH 0/6] Use virtual page class key protection mechanism for speeding up guest page fault

2014-06-29 Thread Benjamin Herrenschmidt
On Sun, 2014-06-29 at 16:47 +0530, Aneesh Kumar K.V wrote:

 To achieve the above we use the virtual page class protection mechanism for
 covering (2) and (3). For both of the above cases we mark the hpte
 valid, but associate the page with virtual page class index 30 or 31.
 The authority mask register is configured such that class indexes 30 and 31
 have read/write denied. The above change results in a key fault
 for (2) and (3). This allows us to forward a NO_HPTE fault directly to the guest
 without doing the expensive hash page table lookup.

So we have a measurable performance benefit (about half a second out of
8), but you didn't explain the drawback here, which is to essentially make
it impossible for guests to exploit virtual page class keys, or did you
find a way to still make that possible?

As it is, it's not a huge issue for Linux, but we might have to care
about other OSes that do care...

Do we have a way in PAPR to signify to the guest that the keys are not
available?

Cheers,
Ben.



Re: [PATCH 0/6] Use virtual page class key protection mechanism for speeding up guest page fault

2014-06-29 Thread Aneesh Kumar K.V
Benjamin Herrenschmidt b...@kernel.crashing.org writes:

 On Sun, 2014-06-29 at 16:47 +0530, Aneesh Kumar K.V wrote:

 To achieve the above we use the virtual page class protection mechanism for
 covering (2) and (3). For both of the above cases we mark the hpte
 valid, but associate the page with virtual page class index 30 or 31.
 The authority mask register is configured such that class indexes 30 and 31
 have read/write denied. The above change results in a key fault
 for (2) and (3). This allows us to forward a NO_HPTE fault directly to the guest
 without doing the expensive hash page table lookup.

 So we have a measurable performance benefit (about half a second out of
 8).

I was able to measure a performance benefit of 2 seconds earlier. But
once I got the below patch applied, that got reduced. I am trying
to find out how the patch is helping the performance. Maybe it is
avoiding some unnecessary invalidation?

http://mid.gmane.org/1403876103-32459-1-git-send-email-aneesh.ku...@linux.vnet.ibm.com

I also believe the benefit depends on how much impact a hash table
lookup has. I did try to access the addresses linearly so that I could make
sure we do take a cache miss for the hash page table access.

but you didn't explain the drawback here, which is to essentially make
 it impossible for guests to exploit virtual page class keys, or did you
 find a way to still make that possible?

I am now making PROTFAULT go to the host. That means KSM sharing is
represented as a read-only page, and an attempt to write to it will get to the
host via PROTFAULT. Now with that we can implement keys for the guest if we
want to. So irrespective of what restrictions the guest has put in, if the
host swaps out the page, we will deny read/write. Now if the key fault
needs to go to the guest, we will find that out by looking at the key index.


 As it is, it's not a huge issue for Linux, but we might have to care
 about other OSes that do care...

 Do we have a way in PAPR to signify to the guest that the keys are not
 available?

Right now Qemu doesn't provide the device tree node
ibm,processor-storage-keys. That means the guest cannot use keys. So we are
good there. If we want to support guest keys, we need to fill that in with
a value that indicates how many keys can be used for data and instructions.

-aneesh


Re: [RFC PATCH v2 4/6] mmc: sdhci: host: add new f_sdh30

2014-06-29 Thread Vincent Yang
2014-06-27 18:12 GMT+08:00 Mark Rutland mark.rutl...@arm.com:
 On Fri, Jun 27, 2014 at 04:32:21AM +0100, Vincent Yang wrote:
 2014-06-26 19:03 GMT+08:00 Mark Rutland mark.rutl...@arm.com:
  On Thu, Jun 26, 2014 at 07:23:30AM +0100, Vincent Yang wrote:
  This patch adds a new host controller driver for
  the Fujitsu SDHCI controller f_sdh30.
 
  Signed-off-by: Vincent Yang vincent.y...@tw.fujitsu.com
  ---
   .../devicetree/bindings/mmc/sdhci-fujitsu.txt  |  35 +++
   drivers/mmc/host/Kconfig   |   7 +
   drivers/mmc/host/Makefile  |   1 +
   drivers/mmc/host/sdhci_f_sdh30.c   | 346 +
   drivers/mmc/host/sdhci_f_sdh30.h   |  40 +++
   5 files changed, 429 insertions(+)
   create mode 100644 Documentation/devicetree/bindings/mmc/sdhci-fujitsu.txt
   create mode 100644 drivers/mmc/host/sdhci_f_sdh30.c
   create mode 100644 drivers/mmc/host/sdhci_f_sdh30.h
 
  diff --git a/Documentation/devicetree/bindings/mmc/sdhci-fujitsu.txt b/Documentation/devicetree/bindings/mmc/sdhci-fujitsu.txt
  new file mode 100644
  index 000..40add438
  --- /dev/null
  +++ b/Documentation/devicetree/bindings/mmc/sdhci-fujitsu.txt
  @@ -0,0 +1,35 @@
  +* Fujitsu SDHCI controller
  +
  +This file documents differences between the core properties in mmc.txt
  +and the properties used by the sdhci_f_sdh30 driver.
  +
  +Required properties:
  +- compatible: "fujitsu,f_sdh30"
 
  Please use '-' rather than '_' in compatible strings.

 Hi Mark,
 Yes, I'll update it to '-' in next version.

 
  This seems to be the name of the driver. What is the precise name of the
  IP block?

 The name of the IP block is F_SDH30.
 That's why it uses fujitsu,f_sdh30

 Hmm. I'd still be tempted to use fujitsu,f-sdh30.

Hi Mark,
Sure, I'll update it to fujitsu,f-sdh30 in next version.


 
  [...]
 
  +   if (!of_property_read_u32(pdev->dev.of_node, "vendor-hs200",
  + &priv->vendor_hs200))
  +   dev_info(dev, "Applying vendor-hs200 setting\n");
  +   else
  +   priv->vendor_hs200 = 0;
 
  This wasn't in the binding document, and a grep for vendor-hs200 in a
  v3.16-rc2 tree found me nothing.
 
  Please document this.

 Yes, it is a setting for a vendor specific register.
 I'll update it in next version.

 It would be nice to know exactly what this is. We usually shy away from
 placing register values in dt. I can wait until the next posting if
 you're going to document that.

I'm thinking about removing this register value in dt.
I'll update it in next version.


  +   if (!of_property_read_u32(pdev->dev.of_node, "bus-width", &bus_width)) {
  +   if (bus_width == 8) {
  +   dev_info(dev, "Applying 8 bit bus width\n");
  +   host->mmc->caps |= MMC_CAP_8_BIT_DATA;
  +   }
  +   }
 
  What if bus-width is not 8, or is not present?

 In both cases, it will not touch host->mmc->caps at all. Then sdhci_add_host()
 will handle it and set MMC_CAP_4_BIT_DATA as default:

 [...]
 /*
 * A controller may support 8-bit width, but the board itself
 * might not have the pins brought out.  Boards that support
 * 8-bit width must set mmc->caps |= MMC_CAP_8_BIT_DATA; in
 * their platform code before calling sdhci_add_host(), and we
 * won't assume 8-bit width for hosts without that CAP.
 */
 if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA))
 mmc->caps |= MMC_CAP_4_BIT_DATA;

 Ok, but does it make sense for a dts to have:

 bus-width = <1>;

 If so, we should presumably do something.

 If not, we should at least print a warning that the dtb doesn't make
 sense.

I'll print a warning for invalid values in the next version.
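A hedged sketch of what that next-version validation could look like,
reusing the of_property_read_u32() call from the patch (the 4-bit default
is handled by sdhci_add_host() as quoted above; this is not the posted
code):

	if (!of_property_read_u32(pdev->dev.of_node, "bus-width", &bus_width)) {
		switch (bus_width) {
		case 8:
			dev_info(dev, "Applying 8 bit bus width\n");
			host->mmc->caps |= MMC_CAP_8_BIT_DATA;
			break;
		case 4:
			break;	/* sdhci_add_host() assumes 4-bit by default */
		default:
			dev_warn(dev, "invalid bus-width %u, ignoring\n",
				 bus_width);
		}
	}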
Thanks a lot for your review!


Best regards,
Vincent Yang


 Cheers,
 Mark.

Re: [PATCH v4] [BUGFIX] kprobes: Fix Failed to find blacklist error on ia64 and ppc64

2014-06-29 Thread Masami Hiramatsu
Ping? :)

(2014/06/20 11:23), Masami Hiramatsu wrote:
 On ia64 and ppc64, the function pointer does not point to the
 entry address of the function, but to the address of the function
 descriptor (which contains the entry address and misc
 data). Since kprobes passes the function pointer stored
 by NOKPROBE_SYMBOL() to kallsyms_lookup_size_offset() for
 initializing its blacklist, it fails and reports many errors,
 as below.
 
   Failed to find blacklist 000101316830
   Failed to find blacklist 0001013000f0a000
   Failed to find blacklist 000101315f70a000
   Failed to find blacklist 000101324c80a000
   Failed to find blacklist 0001013063f0a000
   Failed to find blacklist 000101327800a000
   Failed to find blacklist 0001013277f0a000
   Failed to find blacklist 000101315a70a000
   Failed to find blacklist 0001013277e0a000
   Failed to find blacklist 000101305a20a000
   Failed to find blacklist 0001013277d0a000
   Failed to find blacklist 00010130bdc0a000
   Failed to find blacklist 00010130dc20a000
   Failed to find blacklist 000101309a00a000
   Failed to find blacklist 0001013277c0a000
   Failed to find blacklist 0001013277b0a000
   Failed to find blacklist 0001013277a0a000
   Failed to find blacklist 000101327790a000
   Failed to find blacklist 000101303140a000
   Failed to find blacklist 0001013a3280a000
 
 To fix this bug, this introduces a function_entry() macro to
 retrieve the entry address from the given function pointer,
 and uses it for kallsyms_lookup_size_offset() while initializing
 the blacklist.
 
 Changes in v4:
  - Add kernel_text_address() check for verifying the address.
 - Rebased on the latest Linus tree.
 
 Changes in v3:
  - Fix a bug to get blacklist address based on function entry
instead of function descriptor. (Suzuki's work, Thanks!)
 
 Changes in V2:
 - Use function_entry() macro when looking up symbols instead
of storing it.
  - Update for the latest -next.
 
 Signed-off-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
 Reported-by: Tony Luck tony.l...@gmail.com
 Tested-by: Tony Luck tony.l...@intel.com
 Cc: Michael Ellerman m...@ellerman.id.au
 Cc: Suzuki K. Poulose suz...@in.ibm.com
 Cc: Tony Luck tony.l...@intel.com
 Cc: Fenghua Yu fenghua...@intel.com
 Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
 Cc: Paul Mackerras pau...@samba.org
 Cc: Ananth N Mavinakayanahalli ana...@in.ibm.com
 Cc: Kevin Hao haoke...@gmail.com
 Cc: linux-i...@vger.kernel.org
 Cc: linux-ker...@vger.kernel.org
 Cc: linuxppc-dev@lists.ozlabs.org
 ---
  arch/ia64/include/asm/types.h|2 ++
  arch/powerpc/include/asm/types.h |   11 +++
  include/linux/types.h|4 
  kernel/kprobes.c |   15 ++-
  4 files changed, 27 insertions(+), 5 deletions(-)
 
 diff --git a/arch/ia64/include/asm/types.h b/arch/ia64/include/asm/types.h
 index 4c351b1..95279dd 100644
 --- a/arch/ia64/include/asm/types.h
 +++ b/arch/ia64/include/asm/types.h
 @@ -27,5 +27,7 @@ struct fnptr {
   unsigned long gp;
  };
  
 +#define function_entry(fn) (((struct fnptr *)(fn))->ip)
 +
  #endif /* !__ASSEMBLY__ */
  #endif /* _ASM_IA64_TYPES_H */
 diff --git a/arch/powerpc/include/asm/types.h b/arch/powerpc/include/asm/types.h
 index bfb6ded..8b89d65 100644
 --- a/arch/powerpc/include/asm/types.h
 +++ b/arch/powerpc/include/asm/types.h
 @@ -25,6 +25,17 @@ typedef struct {
   unsigned long env;
  } func_descr_t;
  
 +#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF == 1)
 +/*
 + * On PPC64 ABIv1 the function pointer actually points to the
 + * function's descriptor. The first entry in the descriptor is the
 + * address of the function text.
 + */
 +#define function_entry(fn)   (((func_descr_t *)(fn))->entry)
 +#else
 +#define function_entry(fn)   ((unsigned long)(fn))
 +#endif
 +
  #endif /* __ASSEMBLY__ */
  
  #endif /* _ASM_POWERPC_TYPES_H */
 diff --git a/include/linux/types.h b/include/linux/types.h
 index a0bb704..3b95369 100644
 --- a/include/linux/types.h
 +++ b/include/linux/types.h
 @@ -213,5 +213,9 @@ struct callback_head {
  };
  #define rcu_head callback_head
  
 +#ifndef function_entry
 +#define function_entry(fn)   ((unsigned long)(fn))
 +#endif
 +
  #endif /*  __ASSEMBLY__ */
  #endif /* _LINUX_TYPES_H */
 diff --git a/kernel/kprobes.c b/kernel/kprobes.c
 index 3214289..7412535 100644
 --- a/kernel/kprobes.c
 +++ b/kernel/kprobes.c
 @@ -32,6 +32,7 @@
   *   prasa...@in.ibm.com added function-return probes.
   */
  #include linux/kprobes.h
 +#include linux/types.h
  #include linux/hash.h
  #include linux/init.h
  #include linux/slab.h
 @@ -2037,19 +2038,23 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
  {
   unsigned long *iter;
   struct kprobe_blacklist_entry *ent;
 - unsigned long offset = 0, size = 0;
 + unsigned long entry, offset = 0, size = 0;
  
 	for (iter = start; iter < end; iter++) {
 - if (!kallsyms_lookup_size_offset(*iter, &size, &offset)) {
 - 
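The kernel/kprobes.c hunk above is truncated in this archive; a hedged
sketch of the converted loop body, based on the patch description
(function_entry(), kernel_text_address() and kallsyms_lookup_size_offset()
are names from the patch, the rest is reconstruction):

	for (iter = start; iter < end; iter++) {
		entry = function_entry(*iter);
		if (!kernel_text_address(entry) ||
		    !kallsyms_lookup_size_offset(entry, &size, &offset)) {
			pr_err("Failed to find blacklist %lx\n", entry);
			continue;
		}
		/* record [entry, entry + size) in the blacklist as before */
	}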

[PATCH] Fixes return issues in uic_init_one

2014-06-29 Thread Nicholas Krause
This patch fixes the FIXME comments by returning -ENOMEM when uic
cannot be allocated, and -EIO when the linear IRQ domain uic->irqhost
cannot be allocated.

Signed-off-by: Nicholas Krause xerofo...@gmail.com
---
 arch/powerpc/sysdev/uic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/sysdev/uic.c b/arch/powerpc/sysdev/uic.c
index 9203393..f95010a 100644
--- a/arch/powerpc/sysdev/uic.c
+++ b/arch/powerpc/sysdev/uic.c
@@ -239,7 +239,7 @@ static struct uic * __init uic_init_one(struct device_node *node)
 
uic = kzalloc(sizeof(*uic), GFP_KERNEL);
if (! uic)
-   return NULL; /* FIXME: panic? */
+   return -ENOMEM; 
 
raw_spin_lock_init(&uic->lock);
indexp = of_get_property(node, "cell-index", &len);
@@ -261,7 +261,7 @@ static struct uic * __init uic_init_one(struct device_node *node)
uic->irqhost = irq_domain_add_linear(node, NR_UIC_INTS, uic_host_ops,
 uic);
if (! uic->irqhost)
-   return NULL; /* FIXME: panic? */
+   return -EIO; 
 
/* Start with all interrupts disabled, level and non-critical */
mtdcr(uic->dcrbase + UIC_ER, 0);
-- 
1.9.1


RE: [RESEND PATCH] memory: Freescale CoreNet Coherency Fabric error reporting driver

2014-06-29 Thread bharat.bhus...@freescale.com


 -Original Message-
 From: Wood Scott-B07421
 Sent: Wednesday, June 04, 2014 10:38 PM
 To: Bhushan Bharat-R65777
 Cc: Greg Kroah-Hartman; linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [RESEND PATCH] memory: Freescale CoreNet Coherency Fabric error reporting driver
 
 On Wed, 2014-06-04 at 12:04 -0500, Bhushan Bharat-R65777 wrote:
 
   -Original Message-
   From: Wood Scott-B07421
   Sent: Wednesday, June 04, 2014 10:12 PM
   To: Bhushan Bharat-R65777
   Cc: Greg Kroah-Hartman; linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kernel.org
   Subject: Re: [RESEND PATCH] memory: Freescale CoreNet Coherency Fabric error reporting driver
  
   On Wed, 2014-06-04 at 03:17 -0500, Bhushan Bharat-R65777 wrote:
  +static int ccf_remove(struct platform_device *pdev)
  +{
  + struct ccf_private *ccf = dev_get_drvdata(&pdev->dev);
  +
  + switch (ccf->info->version) {
  + case CCF1:
  + iowrite32be(0, &ccf->err_regs->errdis);
  + break;
  +
  + case CCF2:
  + iowrite32be(0, &ccf->err_regs->errinten);
   
    Do you think it is the same to disable the detection bits in ccf->err_regs->errdis?
  
   Disabling the interrupt is what we're aiming for here, but ccf1
   doesn't provide a way to do that separate from disabling detection.
 
  What I wanted to say is: do we also need to disable detection (set
  ERRDET_LAE | ERRDET_CV bits in errdis) apart from clearing errinten on
  ccf2?
 
 I don't think we need to.  You could argue that we should for consistency,
 though I think there's value in errors continuing to be detected even without
 the driver (e.g. can dump the registers in a debugger).

Yes, this comment was for consistency. Also, IIUC, the state left when
the driver is removed is not the default reset behavior.
If we want errors to still be detected, shouldn't we have a sysfs interface?
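For illustration, a hedged sketch (not a posted patch) of what disabling
detection as well on ccf2, for consistency, might look like, using the
ERRDET_LAE | ERRDET_CV bits named above:

	case CCF2:
		/* mask the interrupt, then also stop detection */
		iowrite32be(0, &ccf->err_regs->errinten);
		iowrite32be(ERRDET_LAE | ERRDET_CV, &ccf->err_regs->errdis);
		break;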

Thanks
-Bharat

 
 -Scott
 
