[PATCH] of: Fix address decoding on Bimini and js2x machines

2013-07-03 Thread Benjamin Herrenschmidt
 Commit:

  e38c0a1fbc5803cbacdaac0557c70ac8ca5152e7
  of/address: Handle #address-cells > 2 specially

broke real-time clock access on Bimini, js2x, and similar powerpc
machines using the maple platform. That code was indirectly relying
on the old (broken) behaviour of the translation for the hypertransport
to ISA bridge.

This fixes it by treating hypertransport as a PCI bus.

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
CC: sta...@vger.kernel.org [v3.6+]
---

Rob, if you have no objection I will put that in powerpc -next

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 04da786..7c8221d 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -106,8 +106,12 @@ static unsigned int of_bus_default_get_flags(const __be32 *
 
 static int of_bus_pci_match(struct device_node *np)
 {
-   /* vci is for the /chaos bridge on 1st-gen PCI powermacs */
-   return !strcmp(np->type, "pci") || !strcmp(np->type, "vci");
+   /*
+* vci is for the /chaos bridge on 1st-gen PCI powermacs
+* ht is hypertransport
+*/
+   return !strcmp(np->type, "pci") || !strcmp(np->type, "vci") ||
+   !strcmp(np->type, "ht");
 }
 
 static void of_bus_pci_count_cells(struct device_node *np,
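
The match is a plain device_type string comparison, so the effect of the patch is easy to see in isolation. A minimal standalone sketch (the helper name below is hypothetical, not the kernel function):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical standalone version of the matching logic in
 * of_bus_pci_match(): a bus node is treated as PCI-like when its
 * device_type is "pci", "vci" (the /chaos bridge on 1st-gen PCI
 * powermacs) or, after this patch, "ht" (hypertransport). */
static int is_pci_like_bus(const char *device_type)
{
	return !strcmp(device_type, "pci") ||
	       !strcmp(device_type, "vci") ||
	       !strcmp(device_type, "ht");
}
```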


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH -V3 4/4] powerpc/kvm: Use 256K chunk to track both RMA and hash page table allocation.

2013-07-03 Thread Paul Mackerras
On Tue, Jul 02, 2013 at 11:15:18AM +0530, Aneesh Kumar K.V wrote:
 From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
 
 Both RMA and hash page table requests will be a multiple of 256K. We can use
 a chunk size of 256K to track the free/used 256K chunks in the bitmap. This
 should help to reduce the bitmap size.
 
 Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

Acked-by: Paul Mackerras pau...@samba.org

Thanks!
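
The space saving the patch describes is simple arithmetic: the bitmap holds one bit per chunk, so growing the chunk size from 4K to 256K shrinks the bitmap 64x. A rough sketch with illustrative sizes (not values taken from the patch):

```c
#include <assert.h>

/* One bitmap bit tracks one chunk of the managed region, so the
 * number of bits is just region size divided by chunk size. */
static unsigned long bitmap_bits(unsigned long region_bytes,
				 unsigned long chunk_bytes)
{
	return region_bytes / chunk_bytes;
}
```

For a 1 GiB region, 256K chunks need 4096 bits where 4K pages would need 262144.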


Re: [PATCH -V3 2/4] powerpc/kvm: Contiguous memory allocator based hash page table allocation

2013-07-03 Thread Paul Mackerras
On Tue, Jul 02, 2013 at 11:15:16AM +0530, Aneesh Kumar K.V wrote:
 From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
 
 The Powerpc architecture uses a hash-based page table mechanism for mapping
 virtual addresses to physical addresses. The architecture requires this hash
 page table to be physically contiguous. With KVM on Powerpc we currently use
 an early reservation mechanism for allocating the guest hash page table. This
 implies that we need to reserve a big memory region to ensure we can create a
 large number of guests simultaneously with KVM on Power. Another disadvantage
 is that the reserved memory is not available to the rest of the subsystems,
 which limits the total available memory in the host.
 
 This patch series switches the guest hash page table allocation to use the
 contiguous memory allocator.
 
 Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

Acked-by: Paul Mackerras pau...@samba.org


Re: [PATCH 1/2] DMA: Freescale: Add new 8-channel DMA engine device tree nodes

2013-07-03 Thread Hongbo Zhang

On 07/03/2013 11:53 AM, Hongbo Zhang wrote:

Hmm... adding devicetree-disc...@lists.ozlabs.org to the list.

Note that we are discussing a better naming for this new compatible 
property in the corresponding [PATCH 2/2], so I will resend a v2 of 
this patch.



On 07/01/2013 11:46 AM, hongbo.zh...@freescale.com wrote:

From: Hongbo Zhang hongbo.zh...@freescale.com

Freescale QorIQ T4 and B4 introduce new 8-channel DMA engines; this patch
adds the device tree nodes for them.

Signed-off-by: Hongbo Zhang hongbo.zh...@freescale.com
---
  arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi |   90 +++
  arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi |   90 +++
  arch/powerpc/boot/dts/fsl/t4240si-post.dtsi |    4 +-
  3 files changed, 182 insertions(+), 2 deletions(-)
  create mode 100644 arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi
  create mode 100644 arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi

Scott, any comment on these two file names?


diff --git a/arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi

new file mode 100644
index 000..c626c49
--- /dev/null
+++ b/arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi
@@ -0,0 +1,90 @@
+/*
+ * QorIQ DMA device tree stub [ controller @ offset 0x10 ]
+ *
+ * Copyright 2011-2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ *   notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ *   notice, this list of conditions and the following disclaimer in the
+ *   documentation and/or other materials provided with the distribution.
+ * * Neither the name of Freescale Semiconductor nor the
+ *   names of its contributors may be used to endorse or promote products
+ *   derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License (GPL) as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+dma0: dma@100300 {
+	#address-cells = <1>;
+	#size-cells = <1>;
+	compatible = "fsl,eloplus-dma2";
+	reg = <0x100300 0x4 0x100600 0x4>;
+	ranges = <0x0 0x100100 0x500>;
+	cell-index = <0>;
+	dma-channel@0 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x0 0x80>;
+		cell-index = <0>;
+		interrupts = <28 2 0 0>;
+	};
+	dma-channel@80 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x80 0x80>;
+		cell-index = <1>;
+		interrupts = <29 2 0 0>;
+	};
+	dma-channel@100 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x100 0x80>;
+		cell-index = <2>;
+		interrupts = <30 2 0 0>;
+	};
+	dma-channel@180 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x180 0x80>;
+		cell-index = <3>;
+		interrupts = <31 2 0 0>;
+	};
+	dma-channel@300 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x300 0x80>;
+		cell-index = <4>;
+		interrupts = <76 2 0 0>;
+	};
+	dma-channel@380 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x380 0x80>;
+		cell-index = <5>;
+		interrupts = <77 2 0 0>;
+	};
+	dma-channel@400 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x400 0x80>;
+		cell-index = <6>;
+		interrupts = <78 2 0 0>;
+	};
+	dma-channel@480 {
+		compatible = "fsl,eloplus-dma-channel";
+		reg = <0x480 0x80>;
+		cell-index = <7>;
+		interrupts = <79 2 0 0>;
+	};
+};
diff --git a/arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi

new file mode 100644
index 000..980ea77
--- /dev/null
+++ b/arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi
@@ -0,0 +1,90 @@
+/*
+ * QorIQ DMA device tree stub [ controller @ offset 0x101000 ]
+ *
+ * Copyright 2011-2013 Freescale Semiconductor Inc.
+ *
+ 

[PATCH 2/2] powerpc/mm: Fix fallthrough bug in hpte_decode

2013-07-03 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

We should not fall through different case statements in hpte_decode. Add
break statements to break out of the switch. The regression was introduced by
commit dcda287a9b26309ae43a091d0ecde16f8f61b4c0 ("powerpc/mm: Simplify
hpte_decode").
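
The shape of the bug reduces to a few lines: without the break, the 256M-segment case falls through into the 1T-segment case, which then clobbers the result. A stripped-down illustration (the enum, constants, and helper are invented for the demo, not the kernel code):

```c
#include <assert.h>

/* Hypothetical reduction of hpte_decode's switch: each segment size
 * should compute its own result and stop. The break statements below
 * are the ones the patch adds; deleting them would make SEG_256M
 * fall through and return the SEG_1T value instead. */
enum seg_size { SEG_256M, SEG_1T, SEG_BAD };

static int decode(enum seg_size s)
{
	int vpn;

	switch (s) {
	case SEG_256M:
		vpn = 1;
		break;		/* missing before the fix */
	case SEG_1T:
		vpn = 2;
		break;		/* missing before the fix */
	default:
		vpn = 0;
	}
	return vpn;
}
```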

Reported-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/mm/hash_native_64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 0de15fc..e1f9b82 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -560,6 +560,7 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
			seg_off |= vpi << shift;
		}
		*vpn = vsid << (SID_SHIFT - VPN_SHIFT) | seg_off >> VPN_SHIFT;
+   break;
case MMU_SEGSIZE_1T:
/* We only have 40 - 23 bits of seg_off in avpn */
		seg_off = (avpn & 0x1ffff) << 23;
@@ -569,6 +570,7 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
			seg_off |= vpi << shift;
		}
		*vpn = vsid << (SID_SHIFT_1T - VPN_SHIFT) | seg_off >> VPN_SHIFT;
+   break;
default:
*vpn = size = 0;
}
-- 
1.8.1.2



[PATCH 1/2] powerpc/mm: Use the correct SLB(LLP) encoding in tlbie instruction

2013-07-03 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

The sllp value is stored in mmu_psize_defs in such a way that we can easily OR
the value to get the operand for the slbmte instruction, i.e. the L and LP bits
are not contiguous. Decode the bits and use them correctly in tlbie. The
regression was introduced by commit 1f6aaaccb1b3af8613fe45781c1aefee2ae8c6b3
("powerpc: Update tlbie/tlbiel as per ISA doc").
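
The decode the patch performs can be checked in isolation: in the slbmte-ready value, L sits at bit 8 (mask 0x100) and LP at bits 5:4 (mask 0x30), while tlbie wants one contiguous L||LP field. A small sketch of that bit shuffle, mirroring the expression in the diff below:

```c
#include <assert.h>

/* Move L from bit 8 down to bit 2 and LP from bits 5:4 down to
 * bits 1:0, producing a contiguous 3-bit L||LP field for tlbie. */
static unsigned long decode_sllp(unsigned long sllp)
{
	return ((sllp & 0x100) >> 6) | ((sllp & 0x30) >> 4);
}
```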

Reported-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/mm/hash_native_64.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 3f0c30a..0de15fc 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -43,6 +43,7 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
 {
unsigned long va;
unsigned int penc;
+   unsigned long sllp;
 
/*
 * We need 14 to 65 bits of va for a tlibe of 4K page
@@ -64,7 +65,9 @@ static inline void __tlbie(unsigned long vpn, int psize, int 
apsize, int ssize)
		/* clear out bits after (52) [052.63] */
		va &= ~((1ul << (64 - 52)) - 1);
		va |= ssize << 8;
-		va |= mmu_psize_defs[apsize].sllp << 6;
+		sllp = ((mmu_psize_defs[apsize].sllp & 0x100) >> 6) |
+			((mmu_psize_defs[apsize].sllp & 0x30) >> 4);
+		va |= sllp << 5;
		asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
			     : "memory");
@@ -98,6 +101,7 @@ static inline void __tlbiel(unsigned long vpn, int psize, int apsize, int ssize)
 {
unsigned long va;
unsigned int penc;
+   unsigned long sllp;
 
/* VPN_SHIFT can be atmost 12 */
	va = vpn << VPN_SHIFT;
@@ -113,7 +117,9 @@ static inline void __tlbiel(unsigned long vpn, int psize, int apsize, int ssize)
		/* clear out bits after(52) [052.63] */
		va &= ~((1ul << (64 - 52)) - 1);
		va |= ssize << 8;
-		va |= mmu_psize_defs[apsize].sllp << 6;
+		sllp = ((mmu_psize_defs[apsize].sllp & 0x100) >> 6) |
+			((mmu_psize_defs[apsize].sllp & 0x30) >> 4);
+		va |= sllp << 5;
		asm volatile(".long 0x7c000224 | (%0 << 11) | (0 << 21)"
			     : : "r"(va) : "memory");
break;
-- 
1.8.1.2



RE: [v2][PATCH 1/7] powerpc/book3e: support CONFIG_RELOCATABLE

2013-07-03 Thread Sethi Varun-B16395


 -Original Message-
 From: Linuxppc-dev [mailto:linuxppc-dev-
 bounces+varun.sethi=freescale@lists.ozlabs.org] On Behalf Of Tiejun
 Chen
 Sent: Thursday, June 20, 2013 1:23 PM
 To: b...@kernel.crashing.org
 Cc: linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: [v2][PATCH 1/7] powerpc/book3e: support CONFIG_RELOCATABLE
 
 book3e differs from book3s since 3s includes the exception vectors
 code in head_64.S, as it relies on absolute addressing, which is only
 possible within this compilation unit. So we have to get that label
 address via the GOT.
 
 And when booting a relocated kernel, we should reset IVPR properly
 again after .relocate.
 
 Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
 ---
  arch/powerpc/include/asm/exception-64e.h |8 
  arch/powerpc/kernel/exceptions-64e.S |   15 ++-
  arch/powerpc/kernel/head_64.S|   22 ++
  arch/powerpc/lib/feature-fixups.c|7 +++
  4 files changed, 51 insertions(+), 1 deletion(-)
 
 diff --git a/arch/powerpc/include/asm/exception-64e.h
 b/arch/powerpc/include/asm/exception-64e.h
 index 51fa43e..89e940d 100644
 --- a/arch/powerpc/include/asm/exception-64e.h
 +++ b/arch/powerpc/include/asm/exception-64e.h
 @@ -214,10 +214,18 @@ exc_##label##_book3e:
  #define TLB_MISS_STATS_SAVE_INFO_BOLTED
  #endif
 
 +#ifndef CONFIG_RELOCATABLE
  #define SET_IVOR(vector_number, vector_offset)   \
   li  r3,vector_offset@l; \
   ori r3,r3,interrupt_base_book3e@l;  \
   mtspr   SPRN_IVOR##vector_number,r3;
 +#else
 +#define SET_IVOR(vector_number, vector_offset)   \
 + LOAD_REG_ADDR(r3,interrupt_base_book3e);\
 + rlwinm  r3,r3,0,15,0;   \
 + ori r3,r3,vector_offset@l;  \
 + mtspr   SPRN_IVOR##vector_number,r3;
 +#endif
 
[Sethi Varun-B16395] Please add a documentation note here.

  #endif /* _ASM_POWERPC_EXCEPTION_64E_H */
 
 diff --git a/arch/powerpc/kernel/exceptions-64e.S
 b/arch/powerpc/kernel/exceptions-64e.S
 index 645170a..4b23119 100644
 --- a/arch/powerpc/kernel/exceptions-64e.S
 +++ b/arch/powerpc/kernel/exceptions-64e.S
 @@ -1097,7 +1097,15 @@ skpinv:addir6,r6,1
   /* Increment */
   * r4 = MAS0 w/TLBSEL  ESEL for the temp mapping
   */
   /* Now we branch the new virtual address mapped by this entry */
 +#ifdef CONFIG_RELOCATABLE
 + /* We have to find out address from lr. */
 + bl  1f  /* Find our address */
 +1:   mflrr6
 + addir6,r6,(2f - 1b)
 + tovirt(r6,r6)
 +#else
   LOAD_REG_IMMEDIATE(r6,2f)
 +#endif
   lis r7,MSR_KERNEL@h
   ori r7,r7,MSR_KERNEL@l
   mtspr   SPRN_SRR0,r6
 @@ -1348,9 +1356,14 @@ _GLOBAL(book3e_secondary_thread_init)
   mflrr28
   b   3b
 
 -_STATIC(init_core_book3e)
 +_GLOBAL(init_core_book3e)
   /* Establish the interrupt vector base */
 +#ifdef CONFIG_RELOCATABLE
 + tovirt(r2,r2)
 +	LOAD_REG_ADDR(r3, interrupt_base_book3e)
 +#else
   LOAD_REG_IMMEDIATE(r3, interrupt_base_book3e)
 +#endif
   mtspr   SPRN_IVPR,r3
   sync
   blr
[Sethi Varun-B16395] Please add a documentation note here as well. 

  diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
  index b61363d..0942f3a 100644
 --- a/arch/powerpc/kernel/head_64.S
 +++ b/arch/powerpc/kernel/head_64.S
 @@ -414,12 +414,22 @@ _STATIC(__after_prom_start)
   /* process relocations for the final address of the kernel */
   lis r25,PAGE_OFFSET@highest /* compute virtual base of kernel */
   sldir25,r25,32
 +#if defined(CONFIG_PPC_BOOK3E)
 +	tovirt(r26,r26)		/* on booke, we already run at PAGE_OFFSET */
 +#endif
   lwz r7,__run_at_load-_stext(r26)
 +#if defined(CONFIG_PPC_BOOK3E)
 + tophys(r26,r26) /* Restore for the remains. */
 +#endif
   cmplwi  cr0,r7,1/* flagged to stay where we are ? */
   bne 1f
   add r25,r25,r26
  1:   mr  r3,r25
   bl  .relocate
 +#if defined(CONFIG_PPC_BOOK3E)
 + /* We should set ivpr again after .relocate. */
 + bl  .init_core_book3e
 +#endif
  #endif
 
[Sethi Varun-B16395] A more detailed note over here would be useful.

  /*
 @@ -447,12 +457,24 @@ _STATIC(__after_prom_start)
   * variable __run_at_load, if it is set the kernel is treated as
 relocatable
   * kernel, otherwise it will be moved to PHYSICAL_START
   */
 +#if defined(CONFIG_PPC_BOOK3E)
 +	tovirt(r26,r26)		/* on booke, we already run at PAGE_OFFSET */
 +#endif
   lwz r7,__run_at_load-_stext(r26)
 +#if defined(CONFIG_PPC_BOOK3E)
 + tophys(r26,r26) /* Restore for the remains. */
 +#endif
   cmplwi  cr0,r7,1
   bne 3f
 
 +#ifdef CONFIG_PPC_BOOK3E
 + LOAD_REG_ADDR(r5, interrupt_end_book3e)
 + LOAD_REG_ADDR(r11, _stext)
 + sub r5,r5,r11
 +#else
   /* just copy interrupts */
   

RE: [RFC PATCH 5/6] KVM: PPC: Book3E: Add ONE_REG AltiVec support

2013-07-03 Thread Caraman Mihai Claudiu-B02008
 -Original Message-
 From: Wood Scott-B07421
 Sent: Wednesday, June 05, 2013 1:40 AM
 To: Caraman Mihai Claudiu-B02008
 Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
 d...@lists.ozlabs.org; Caraman Mihai Claudiu-B02008
 Subject: Re: [RFC PATCH 5/6] KVM: PPC: Book3E: Add ONE_REG AltiVec
 support
 
 On 06/03/2013 03:54:27 PM, Mihai Caraman wrote:
  Add ONE_REG support for AltiVec on Book3E.
 
  Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
  ---
   arch/powerpc/kvm/booke.c |   32 
   1 files changed, 32 insertions(+), 0 deletions(-)
 
  diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
  index 01eb635..019496d 100644
  --- a/arch/powerpc/kvm/booke.c
  +++ b/arch/powerpc/kvm/booke.c
   @@ -1570,6 +1570,22 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
   case KVM_REG_PPC_DEBUG_INST:
   val = get_reg_val(reg->id, KVMPPC_INST_EHPRIV);
   break;
   +#ifdef CONFIG_ALTIVEC
   +   case KVM_REG_PPC_VR0 ... KVM_REG_PPC_VR31:
   +   if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
   +   r = -ENXIO;
   +   break;
   +   }
   +   val.vval = vcpu->arch.vr[reg->id - KVM_REG_PPC_VR0];
   +   break;
   +   case KVM_REG_PPC_VSCR:
   +   if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
   +   r = -ENXIO;
   +   break;
   +   }
   +   val = get_reg_val(reg->id, vcpu->arch.vscr.u[3]);
   +   break;
 
 Why u[3]?

The AltiVec PEM manual says: "The VSCR has two defined bits, the AltiVec
non-Java mode (NJ) bit (VSCR[15]) and the AltiVec saturation (SAT) bit
(VSCR[31]); the remaining bits are reserved."

I think this is the reason Paul M. exposed the KVM_REG_PPC_VSCR width as 32-bit.
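
A sketch of why u[3] is the word to read, assuming the layout described above: the architecture keeps the 32-bit VSCR in the low-order word of a 128-bit vector register, which is word index 3 of four big-endian 32-bit words. The union and helper below are illustrative, not the kernel's types:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's vector128: four 32-bit
 * words in big-endian word order, with VSCR occupying the last one. */
typedef union {
	uint32_t u[4];
} vec128;

static uint32_t vscr_word(const vec128 *vscr)
{
	/* NJ is VSCR[15], SAT is VSCR[31]; the rest is reserved. */
	return vscr->u[3];
}
```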

-Mike






RE: [PATCH 1/2] powerpc/booke64: Use common defines for AltiVec interrupts numbers

2013-07-03 Thread Caraman Mihai Claudiu-B02008
 So we can remove this hack in kvm_asm.h:

Not yet, this comment was added in the context of AltiVec RFC patches
which intended to remove a similar dependency.

 
 /*
  * TODO: Unify 32-bit and 64-bit kernel exception handlers to use same defines
  */
 #define BOOKE_INTERRUPT_SPE_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL
 #define BOOKE_INTERRUPT_SPE_FP_DATA BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST
 #define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL
 #define BOOKE_INTERRUPT_ALTIVEC_ASSIST \
				BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST
 
 It was added as a compilation fix, and it was less intrusive to
 temporarily fix it this way.
 
 I am curious why the above code wasn't removed at the end of this
 patchset. :-)

Before removing it we also need to apply at least the first patch from
the Altivec set that I will send today.

-Mike



Re: [PATCH][v2] powerpc/85xx: Move ePAPR paravirt initialization earlier

2013-07-03 Thread Tudor Laurentiu

On 07/02/2013 08:55 PM, Scott Wood wrote:

On 07/02/2013 07:46:29 AM, Laurentiu Tudor wrote:

diff --git a/arch/powerpc/kernel/epapr_paravirt.c
b/arch/powerpc/kernel/epapr_paravirt.c
index d44a571..d05f9da 100644
--- a/arch/powerpc/kernel/epapr_paravirt.c
+++ b/arch/powerpc/kernel/epapr_paravirt.c
@@ -30,38 +30,45 @@ extern u32 epapr_ev_idle_start[];

bool epapr_paravirt_enabled;

-static int __init epapr_paravirt_init(void)
+static int __init early_init_dt_scan_epapr(unsigned long node,
+ const char *uname,
+ int depth, void *data)
{
- struct device_node *hyper_node;
- const u32 *insts;
- int len, i;
+ const u32 *instrs;
+ unsigned long len;
+ int i;

- hyper_node = of_find_node_by_path("/hypervisor");
- if (!hyper_node)
- return -ENODEV;
+ if (!of_flat_dt_is_compatible(node, "epapr,hypervisor-1"))
+ return 0;


QEMU doesn't set "epapr,hypervisor-1" but it still uses the same hcall
mechanism. The compatible that QEMU sets is "linux,kvm". Perhaps QEMU
should change, but we'd still like to be compatible with older QEMUs.

How is this change related to moving initialization earlier?


Just an extra check to see that I'm on the right node.
But considering your mention of qemu/kvm using a different compatible,
I'm thinking of dropping it and only trying to read the
"hcall-instructions" property.



- insts = of_get_property(hyper_node, "hcall-instructions", &len);
- if (!insts)
- return -ENODEV;
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
+ if (of_get_flat_dt_prop(node, "has-idle", NULL))
+ ppc_md.power_save = epapr_ev_idle;
+#endif


Why are you doing this before processing hcall-instructions?



Nothing of importance. The code seemed clearer to me this way.

---
Best Regards, Laurentiu



[PATCH 1/6] KVM: PPC: Book3E: Use common defines for SPE/FP/AltiVec int numbers

2013-07-03 Thread Mihai Caraman
Use common BOOKE_IRQPRIO and BOOKE_INTERRUPT defines for SPE/FP/AltiVec,
which share the same interrupt numbers.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c  |   16 
 arch/powerpc/kvm/booke.h  |4 ++--
 arch/powerpc/kvm/bookehv_interrupts.S |8 
 arch/powerpc/kvm/e500.c   |   10 ++
 arch/powerpc/kvm/e500_emulate.c   |8 
 5 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index d2fef74..fb47e85 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -362,8 +362,8 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu 
*vcpu,
case BOOKE_IRQPRIO_ITLB_MISS:
case BOOKE_IRQPRIO_SYSCALL:
case BOOKE_IRQPRIO_FP_UNAVAIL:
-   case BOOKE_IRQPRIO_SPE_UNAVAIL:
-   case BOOKE_IRQPRIO_SPE_FP_DATA:
+   case BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL:
+   case BOOKE_IRQPRIO_SPE_FP_DATA_ALTIVEC_ASSIST:
case BOOKE_IRQPRIO_SPE_FP_ROUND:
case BOOKE_IRQPRIO_AP_UNAVAIL:
allowed = 1;
@@ -944,18 +944,18 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
break;
 
 #ifdef CONFIG_SPE
-   case BOOKE_INTERRUPT_SPE_UNAVAIL: {
+   case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
		if (vcpu->arch.shared->msr & MSR_SPE)
kvmppc_vcpu_enable_spe(vcpu);
else
kvmppc_booke_queue_irqprio(vcpu,
-				   BOOKE_IRQPRIO_SPE_UNAVAIL);
+				   BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
r = RESUME_GUEST;
break;
}
 
-   case BOOKE_INTERRUPT_SPE_FP_DATA:
-   kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_SPE_FP_DATA);
+   case BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST:
+		kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_SPE_FP_DATA_ALTIVEC_ASSIST);
r = RESUME_GUEST;
break;
 
@@ -964,7 +964,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu 
*vcpu,
r = RESUME_GUEST;
break;
 #else
-   case BOOKE_INTERRUPT_SPE_UNAVAIL:
+   case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL:
/*
 * Guest wants SPE, but host kernel doesn't support it.  Send
 * an unimplemented operation program check to the guest.
@@ -977,7 +977,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu 
*vcpu,
 * These really should never happen without CONFIG_SPE,
 * as we should never enable the real MSR[SPE] in the guest.
 */
-   case BOOKE_INTERRUPT_SPE_FP_DATA:
+   case BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST:
case BOOKE_INTERRUPT_SPE_FP_ROUND:
		printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n",
		       __func__, exit_nr, vcpu->arch.pc);
diff --git a/arch/powerpc/kvm/booke.h b/arch/powerpc/kvm/booke.h
index 5fd1ba6..9e92006 100644
--- a/arch/powerpc/kvm/booke.h
+++ b/arch/powerpc/kvm/booke.h
@@ -32,8 +32,8 @@
 #define BOOKE_IRQPRIO_ALIGNMENT 2
 #define BOOKE_IRQPRIO_PROGRAM 3
 #define BOOKE_IRQPRIO_FP_UNAVAIL 4
-#define BOOKE_IRQPRIO_SPE_UNAVAIL 5
-#define BOOKE_IRQPRIO_SPE_FP_DATA 6
+#define BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL 5
+#define BOOKE_IRQPRIO_SPE_FP_DATA_ALTIVEC_ASSIST 6
 #define BOOKE_IRQPRIO_SPE_FP_ROUND 7
 #define BOOKE_IRQPRIO_SYSCALL 8
 #define BOOKE_IRQPRIO_AP_UNAVAIL 9
diff --git a/arch/powerpc/kvm/bookehv_interrupts.S b/arch/powerpc/kvm/bookehv_interrupts.S
index e8ed7d6..8d35dc0 100644
--- a/arch/powerpc/kvm/bookehv_interrupts.S
+++ b/arch/powerpc/kvm/bookehv_interrupts.S
@@ -295,9 +295,9 @@ kvm_handler BOOKE_INTERRUPT_DTLB_MISS, EX_PARAMS_TLB, \
SPRN_SRR0, SPRN_SRR1, (NEED_EMU | NEED_DEAR | NEED_ESR)
 kvm_handler BOOKE_INTERRUPT_ITLB_MISS, EX_PARAMS_TLB, \
SPRN_SRR0, SPRN_SRR1, 0
-kvm_handler BOOKE_INTERRUPT_SPE_UNAVAIL, EX_PARAMS(GEN), \
+kvm_handler BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL, EX_PARAMS(GEN), \
SPRN_SRR0, SPRN_SRR1, 0
-kvm_handler BOOKE_INTERRUPT_SPE_FP_DATA, EX_PARAMS(GEN), \
+kvm_handler BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST, EX_PARAMS(GEN), \
SPRN_SRR0, SPRN_SRR1, 0
 kvm_handler BOOKE_INTERRUPT_SPE_FP_ROUND, EX_PARAMS(GEN), \
SPRN_SRR0, SPRN_SRR1, 0
@@ -398,8 +398,8 @@ kvm_lvl_handler BOOKE_INTERRUPT_WATCHDOG, \
 kvm_handler BOOKE_INTERRUPT_DTLB_MISS, \
SPRN_SRR0, SPRN_SRR1, (NEED_EMU | NEED_DEAR | NEED_ESR)
 kvm_handler BOOKE_INTERRUPT_ITLB_MISS, SPRN_SRR0, SPRN_SRR1, 0
-kvm_handler BOOKE_INTERRUPT_SPE_UNAVAIL, SPRN_SRR0, SPRN_SRR1, 0
-kvm_handler BOOKE_INTERRUPT_SPE_FP_DATA, SPRN_SRR0, SPRN_SRR1, 0
+kvm_handler BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL, SPRN_SRR0, SPRN_SRR1, 0
+kvm_handler BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST, SPRN_SRR0, SPRN_SRR1, 0
 

[PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Mihai Caraman
Add KVM Book3E AltiVec support. KVM Book3E FPU support gracefully reuses host
infrastructure, so follow the same approach for AltiVec.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c |   72 -
 1 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 3cae2e3..c3c3af6 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -98,6 +98,19 @@ static inline bool kvmppc_supports_spe(void)
return false;
 }
 
+/*
+ * Always returns true if the AltiVec unit is present, see
+ * kvmppc_core_check_processor_compat().
+ */
+static inline bool kvmppc_supports_altivec(void)
+{
+#ifdef CONFIG_ALTIVEC
+   if (cpu_has_feature(CPU_FTR_ALTIVEC))
+   return true;
+#endif
+   return false;
+}
+
 #ifdef CONFIG_SPE
 void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
 {
@@ -151,6 +164,21 @@ static void kvmppc_vcpu_sync_fpu(struct kvm_vcpu *vcpu)
 }
 
 /*
+ * Simulate AltiVec unavailable fault to load guest state
+ * from thread to AltiVec unit.
+ * It must be called with preemption disabled.
+ */
+static inline void kvmppc_load_guest_altivec(struct kvm_vcpu *vcpu)
+{
+   if (kvmppc_supports_altivec()) {
+		if (!(current->thread.regs->msr & MSR_VEC)) {
+			load_up_altivec(NULL);
+			current->thread.regs->msr |= MSR_VEC;
+   }
+   }
+}
+
+/*
  * Helper function for full MSR writes.  No need to call this if only
  * EE/CE/ME/DE/RI are changing.
  */
@@ -678,6 +706,12 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct 
kvm_vcpu *vcpu)
u64 fpr[32];
 #endif
 
+#ifdef CONFIG_ALTIVEC
+   vector128 vr[32];
+   vector128 vscr;
+   int used_vr = 0;
+#endif
+
	if (!vcpu->arch.sane) {
		kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
return -EINVAL;
@@ -716,6 +750,22 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct 
kvm_vcpu *vcpu)
kvmppc_load_guest_fp(vcpu);
 #endif
 
+#ifdef CONFIG_ALTIVEC
+   if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
+   /* Save userspace VEC state in stack */
+   enable_kernel_altivec();
+		memcpy(vr, current->thread.vr, sizeof(current->thread.vr));
+		vscr = current->thread.vscr;
+		used_vr = current->thread.used_vr;
+
+		/* Restore guest VEC state to thread */
+		memcpy(current->thread.vr, vcpu->arch.vr, sizeof(vcpu->arch.vr));
+		current->thread.vscr = vcpu->arch.vscr;
+
+   kvmppc_load_guest_altivec(vcpu);
+   }
+#endif
+
ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
/* No need for kvm_guest_exit. It's done in handle_exit.
@@ -736,6 +786,23 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct 
kvm_vcpu *vcpu)
	current->thread.fpexc_mode = fpexc_mode;
 #endif
 
+#ifdef CONFIG_ALTIVEC
+   if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
+   /* Save AltiVec state to thread */
+		if (current->thread.regs->msr & MSR_VEC)
+			giveup_altivec(current);
+
+		/* Save guest state */
+		memcpy(vcpu->arch.vr, current->thread.vr, sizeof(vcpu->arch.vr));
+		vcpu->arch.vscr = current->thread.vscr;
+
+		/* Restore userspace state */
+		memcpy(current->thread.vr, vr, sizeof(current->thread.vr));
+		current->thread.vscr = vscr;
+		current->thread.used_vr = used_vr;
+   }
+#endif
+
 out:
	vcpu->mode = OUTSIDE_GUEST_MODE;
return ret;
@@ -961,7 +1028,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
break;
 
case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
-   if (kvmppc_supports_spe()) {
+   if (kvmppc_supports_altivec() || kvmppc_supports_spe()) {
bool enabled = false;
 
 #ifndef CONFIG_KVM_BOOKE_HV
@@ -987,7 +1054,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
}
 
case BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST:
-   if (kvmppc_supports_spe()) {
+   if (kvmppc_supports_altivec() || kvmppc_supports_spe()) {
kvmppc_booke_queue_irqprio(vcpu,
BOOKE_IRQPRIO_SPE_FP_DATA_ALTIVEC_ASSIST);
r = RESUME_GUEST;
@@ -1205,6 +1272,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
} else {
kvmppc_lazy_ee_enable();
kvmppc_load_guest_fp(vcpu);
+   kvmppc_load_guest_altivec(vcpu);
}
}
 
-- 
1.7.3.4




[PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Mihai Caraman
SPE/FP/AltiVec interrupts share the same numbers. Refactor SPE/FP exit handling
to accommodate AltiVec later. Detect the targeted unit at run time, since it
can be configured in the kernel but not be present on the hardware.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c |  102 +++---
 arch/powerpc/kvm/booke.h |1 +
 2 files changed, 70 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index fb47e85..113961f 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -89,6 +89,15 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
}
 }
 
+static inline bool kvmppc_supports_spe(void)
+{
+#ifdef CONFIG_SPE
+   if (cpu_has_feature(CPU_FTR_SPE))
+   return true;
+#endif
+   return false;
+}
+
 #ifdef CONFIG_SPE
 void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
 {
@@ -99,7 +108,7 @@ void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
preempt_enable();
 }
 
-static void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
+void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
 {
preempt_disable();
enable_kernel_spe();
@@ -118,6 +127,14 @@ static void kvmppc_vcpu_sync_spe(struct kvm_vcpu *vcpu)
}
 }
 #else
+void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
+{
+}
+
+void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
+{
+}
+
 static void kvmppc_vcpu_sync_spe(struct kvm_vcpu *vcpu)
 {
 }
@@ -943,48 +960,67 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
r = RESUME_GUEST;
break;
 
-#ifdef CONFIG_SPE
case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
-		if (vcpu->arch.shared->msr & MSR_SPE)
-   kvmppc_vcpu_enable_spe(vcpu);
-   else
-   kvmppc_booke_queue_irqprio(vcpu,
-				   BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
+   if (kvmppc_supports_spe()) {
+   bool enabled = false;
+
+#ifndef CONFIG_KVM_BOOKE_HV
+			if (vcpu->arch.shared->msr & MSR_SPE) {
+   kvmppc_vcpu_enable_spe(vcpu);
+   enabled = true;
+   }
+#endif
+   if (!enabled)
+   kvmppc_booke_queue_irqprio(vcpu,
+   BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
+   } else {
+   /*
+* Guest wants SPE, but host kernel doesn't support it.
+* Send an unimplemented operation program check to
+* the guest.
+*/
+   kvmppc_core_queue_program(vcpu, ESR_PUO | ESR_SPV);
+   }
+
r = RESUME_GUEST;
break;
}
 
case BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST:
-   kvmppc_booke_queue_irqprio(vcpu, 
BOOKE_IRQPRIO_SPE_FP_DATA_ALTIVEC_ASSIST);
-   r = RESUME_GUEST;
-   break;
-
-   case BOOKE_INTERRUPT_SPE_FP_ROUND:
-   kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_SPE_FP_ROUND);
-   r = RESUME_GUEST;
-   break;
-#else
-   case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL:
-   /*
-* Guest wants SPE, but host kernel doesn't support it.  Send
-* an unimplemented operation program check to the guest.
-*/
-   kvmppc_core_queue_program(vcpu, ESR_PUO | ESR_SPV);
-   r = RESUME_GUEST;
+   if (kvmppc_supports_spe()) {
+   kvmppc_booke_queue_irqprio(vcpu,
+   BOOKE_IRQPRIO_SPE_FP_DATA_ALTIVEC_ASSIST);
+   r = RESUME_GUEST;
+   } else {
+   /*
+* These really should never happen without CONFIG_SPE,
+* as we should never enable the real MSR[SPE] in the
+* guest.
+*/
+   printk(KERN_CRIT "%s: unexpected SPE interrupt %u at \
+   %08lx\n", __func__, exit_nr, vcpu->arch.pc);
+   run-hw.hardware_exit_reason = exit_nr;
+   r = RESUME_HOST;
+   }
break;
 
-   /*
-* These really should never happen without CONFIG_SPE,
-* as we should never enable the real MSR[SPE] in the guest.
-*/
-   case BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST:
case BOOKE_INTERRUPT_SPE_FP_ROUND:
-   printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n",
-  __func__, exit_nr, vcpu->arch.pc);
-   run-hw.hardware_exit_reason = exit_nr;
-   r = RESUME_HOST;
+   if (kvmppc_supports_spe()) {
+ 

[PATCH 0/6] KVM: PPC: Book3E: AltiVec support

2013-07-03 Thread Mihai Caraman
Add KVM Book3E AltiVec support and enable e6500 core.

Mihai Caraman (6):
  KVM: PPC: Book3E: Use common defines for SPE/FP/AltiVec int numbers
  KVM: PPC: Book3E: Refactor SPE/FP exit handling
  KVM: PPC: Book3E: Increase FPU laziness
  KVM: PPC: Book3E: Add AltiVec support
  KVM: PPC: Book3E: Add ONE_REG AltiVec support
  KVM: PPC: Book3E: Enable e6500 core

 arch/powerpc/kvm/booke.c  |  211 +++--
 arch/powerpc/kvm/booke.h  |5 +-
 arch/powerpc/kvm/bookehv_interrupts.S |8 +-
 arch/powerpc/kvm/e500.c   |   10 +-
 arch/powerpc/kvm/e500_emulate.c   |8 +-
 arch/powerpc/kvm/e500mc.c |   12 ++-
 6 files changed, 201 insertions(+), 53 deletions(-)

-- 
1.7.3.4


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH 6/6] KVM: PPC: Book3E: Enable e6500 core

2013-07-03 Thread Mihai Caraman
Now that AltiVec support is in place enable e6500 core.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/e500mc.c |   10 ++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
index 09da1ac..bec897c 100644
--- a/arch/powerpc/kvm/e500mc.c
+++ b/arch/powerpc/kvm/e500mc.c
@@ -175,6 +175,16 @@ int kvmppc_core_check_processor_compat(void)
r = 0;
else if (strcmp(cur_cpu_spec->cpu_name, "e5500") == 0)
r = 0;
+#ifdef CONFIG_ALTIVEC
+   /*
+* Since guests have the privilege to enable AltiVec, we need AltiVec
+* support in the host to save/restore their context.
+* Don't use CPU_FTR_ALTIVEC to identify cores with AltiVec unit
+* because it's cleared in the absence of CONFIG_ALTIVEC!
+*/
+   else if (strcmp(cur_cpu_spec->cpu_name, "e6500") == 0)
+   r = 0;
+#endif
else
r = -ENOTSUPP;
 
-- 
1.7.3.4




[PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Mihai Caraman
Increase FPU laziness by calling kvmppc_load_guest_fp() just before
returning to the guest instead of on each sched-in. Without this improvement
an interrupt may also claim the floating point unit, corrupting the guest state.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 113961f..3cae2e3 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
} else {
kvmppc_lazy_ee_enable();
+   kvmppc_load_guest_fp(vcpu);
}
}
 
diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
index 19c8379..09da1ac 100644
--- a/arch/powerpc/kvm/e500mc.c
+++ b/arch/powerpc/kvm/e500mc.c
@@ -143,8 +143,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
kvmppc_e500_tlbil_all(vcpu_e500);
__get_cpu_var(last_vcpu_on_cpu) = vcpu;
}
-
-   kvmppc_load_guest_fp(vcpu);
 }
 
 void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
-- 
1.7.3.4




Re: [PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 14:42, Mihai Caraman wrote:

 SPE/FP/AltiVec interrupts share the same numbers. Refactor SPE/FP exit 
 handling
 to accommodate AltiVec later. Detect the targeted unit at run time since it 
 can
 be configured in the kernel but not featured on hardware.
 
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/booke.c |  102 +++---
 arch/powerpc/kvm/booke.h |1 +
 2 files changed, 70 insertions(+), 33 deletions(-)
 
 diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
 index fb47e85..113961f 100644
 --- a/arch/powerpc/kvm/booke.c
 +++ b/arch/powerpc/kvm/booke.c
 @@ -89,6 +89,15 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
   }
 }
 
 +static inline bool kvmppc_supports_spe(void)
 +{
 +#ifdef CONFIG_SPE
 + if (cpu_has_feature(CPU_FTR_SPE))
 + return true;
 +#endif
 + return false;
 +}
 +
 #ifdef CONFIG_SPE
 void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
 {
 @@ -99,7 +108,7 @@ void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
   preempt_enable();
 }
 
 -static void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
 +void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
 {
   preempt_disable();
   enable_kernel_spe();
 @@ -118,6 +127,14 @@ static void kvmppc_vcpu_sync_spe(struct kvm_vcpu *vcpu)
   }
 }
 #else
 +void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
 +{
 +}
 +
 +void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
 +{
 +}
 +
 static void kvmppc_vcpu_sync_spe(struct kvm_vcpu *vcpu)
 {
 }
 @@ -943,48 +960,67 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
 kvm_vcpu *vcpu,
   r = RESUME_GUEST;
   break;
 
 -#ifdef CONFIG_SPE
   case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
 - if (vcpu-arch.shared-msr  MSR_SPE)
 - kvmppc_vcpu_enable_spe(vcpu);
 - else
 - kvmppc_booke_queue_irqprio(vcpu,
 -
 BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
 + if (kvmppc_supports_spe()) {
 + bool enabled = false;
 +
 +#ifndef CONFIG_KVM_BOOKE_HV
 + if (vcpu-arch.shared-msr  MSR_SPE) {
 + kvmppc_vcpu_enable_spe(vcpu);
 + enabled = true;
 + }
 +#endif

Why the #ifdef? On HV capable systems kvmppc_supports_spe() will just always 
return false. And I don't really understand why HV would be special in the 
first place here. Is it because we're accessing shared-msr?

 + if (!enabled)
 + kvmppc_booke_queue_irqprio(vcpu,
 + BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
 + } else {
 + /*
 +  * Guest wants SPE, but host kernel doesn't support it.

host kernel or hardware


Alex



[PATCH 2/2] KVM: PPC: Book3E: Emulate MCSRR0/1 SPR and rfmci instruction

2013-07-03 Thread Mihai Caraman
Some guests make use of the return from machine check instruction
to do crazy things even though the 64-bit kernel doesn't yet handle
this interrupt. Emulate the MCSRR0/1 SPRs and the rfmci instruction accordingly.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/include/asm/kvm_host.h |1 +
 arch/powerpc/kvm/booke_emulate.c|   25 +
 arch/powerpc/kvm/timing.c   |1 +
 3 files changed, 27 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h 
b/arch/powerpc/include/asm/kvm_host.h
index af326cd..0466789 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -148,6 +148,7 @@ enum kvm_exit_types {
EMULATED_TLBWE_EXITS,
EMULATED_RFI_EXITS,
EMULATED_RFCI_EXITS,
+   EMULATED_RFMCI_EXITS,
DEC_EXITS,
EXT_INTR_EXITS,
HALT_WAKEUP,
diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
index 27a4b28..aaff1b7 100644
--- a/arch/powerpc/kvm/booke_emulate.c
+++ b/arch/powerpc/kvm/booke_emulate.c
@@ -23,6 +23,7 @@
 
 #include booke.h
 
+#define OP_19_XOP_RFMCI   38
 #define OP_19_XOP_RFI 50
 #define OP_19_XOP_RFCI51
 
@@ -43,6 +44,12 @@ static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
 }
 
+static void kvmppc_emul_rfmci(struct kvm_vcpu *vcpu)
+{
+   vcpu->arch.pc = vcpu->arch.mcsrr0;
+   kvmppc_set_msr(vcpu, vcpu->arch.mcsrr1);
+}
+
 int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 unsigned int inst, int *advance)
 {
@@ -65,6 +72,12 @@ int kvmppc_booke_emulate_op(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
*advance = 0;
break;
 
+   case OP_19_XOP_RFMCI:
+   kvmppc_emul_rfmci(vcpu);
+   kvmppc_set_exit_type(vcpu, EMULATED_RFMCI_EXITS);
+   *advance = 0;
+   break;
+
default:
emulated = EMULATE_FAIL;
break;
@@ -138,6 +151,12 @@ int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int 
sprn, ulong spr_val)
case SPRN_DBCR1:
vcpu->arch.dbg_reg.dbcr1 = spr_val;
break;
+   case SPRN_MCSRR0:
+   vcpu->arch.mcsrr0 = spr_val;
+   break;
+   case SPRN_MCSRR1:
+   vcpu->arch.mcsrr1 = spr_val;
+   break;
case SPRN_DBSR:
vcpu->arch.dbsr &= ~spr_val;
break;
@@ -284,6 +303,12 @@ int kvmppc_booke_emulate_mfspr(struct kvm_vcpu *vcpu, int 
sprn, ulong *spr_val)
case SPRN_DBCR1:
*spr_val = vcpu->arch.dbg_reg.dbcr1;
break;
+   case SPRN_MCSRR0:
+   *spr_val = vcpu->arch.mcsrr0;
+   break;
+   case SPRN_MCSRR1:
+   *spr_val = vcpu->arch.mcsrr1;
+   break;
case SPRN_DBSR:
*spr_val = vcpu->arch.dbsr;
break;
diff --git a/arch/powerpc/kvm/timing.c b/arch/powerpc/kvm/timing.c
index c392d26..670f63d 100644
--- a/arch/powerpc/kvm/timing.c
+++ b/arch/powerpc/kvm/timing.c
@@ -129,6 +129,7 @@ static const char 
*kvm_exit_names[__NUMBER_OF_KVM_EXIT_TYPES] = {
[EMULATED_TLBSX_EXITS] =    "EMUL_TLBSX",
[EMULATED_TLBWE_EXITS] =    "EMUL_TLBWE",
[EMULATED_RFI_EXITS] =  "EMUL_RFI",
+   [EMULATED_RFMCI_EXITS] =    "EMUL_RFMCI",
[DEC_EXITS] =   "DEC",
[EXT_INTR_EXITS] =  "EXTINT",
[HALT_WAKEUP] = "HALT",
-- 
1.7.3.4




[PATCH 1/2] KVM: PPC: Fix kvm_exit_names array

2013-07-03 Thread Mihai Caraman
Some exit ids were left out of the kvm_exit_names array.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/timing.c |4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/timing.c b/arch/powerpc/kvm/timing.c
index 07b6110..c392d26 100644
--- a/arch/powerpc/kvm/timing.c
+++ b/arch/powerpc/kvm/timing.c
@@ -135,7 +135,9 @@ static const char 
*kvm_exit_names[__NUMBER_OF_KVM_EXIT_TYPES] = {
[USR_PR_INST] = "USR_PR_INST",
[FP_UNAVAIL] =  "FP_UNAVAIL",
[DEBUG_EXITS] = "DEBUG",
-   [TIMEINGUEST] = "TIMEINGUEST"
+   [TIMEINGUEST] = "TIMEINGUEST",
+   [DBELL_EXITS] = "DBELL",
+   [GDBELL_EXITS] =    "GDBELL"
 };
 
 static int kvmppc_exit_timing_show(struct seq_file *m, void *private)
-- 
1.7.3.4




Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 14:42, Mihai Caraman wrote:

 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to the guest instead of on each sched-in. Without this improvement
 an interrupt may also claim the floating point unit, corrupting the guest state.

Not sure I follow. Could you please describe exactly what's happening?


Alex

 
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)
 
 diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
 index 113961f..3cae2e3 100644
 --- a/arch/powerpc/kvm/booke.c
 +++ b/arch/powerpc/kvm/booke.c
 @@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
 kvm_vcpu *vcpu,
   r = (s  2) | RESUME_HOST | (r  RESUME_FLAG_NV);
   } else {
   kvmppc_lazy_ee_enable();
 + kvmppc_load_guest_fp(vcpu);
   }
   }
 
 diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
 index 19c8379..09da1ac 100644
 --- a/arch/powerpc/kvm/e500mc.c
 +++ b/arch/powerpc/kvm/e500mc.c
 @@ -143,8 +143,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
   kvmppc_e500_tlbil_all(vcpu_e500);
   __get_cpu_var(last_vcpu_on_cpu) = vcpu;
   }
 -
 - kvmppc_load_guest_fp(vcpu);
 }
 
 void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
 -- 
 1.7.3.4
 
 
 --



RE: [PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Caraman Mihai Claudiu-B02008
  -#ifdef CONFIG_SPE
  case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
  -   if (vcpu-arch.shared-msr  MSR_SPE)
  -   kvmppc_vcpu_enable_spe(vcpu);
  -   else
  -   kvmppc_booke_queue_irqprio(vcpu,
  -
 BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
  +   if (kvmppc_supports_spe()) {
  +   bool enabled = false;
  +
  +#ifndef CONFIG_KVM_BOOKE_HV
  +   if (vcpu-arch.shared-msr  MSR_SPE) {
  +   kvmppc_vcpu_enable_spe(vcpu);
  +   enabled = true;
  +   }
  +#endif
 
 Why the #ifdef? On HV capable systems kvmppc_supports_spe() will just
 always return false. 

AltiVec and SPE unavailable exceptions follow the same path. While
kvmppc_supports_spe() will always return false, kvmppc_supports_altivec()
may not.

 And I don't really understand why HV would be special in the first place
 here. Is it because we're accessing shared-msr?

You are right, in the HV case MSR[SPV] should always be zero when an unavailable
exception takes place. The distinction was made because on non-HV the guest
doesn't have direct access to MSR[SPE]. The name of the bit (not the position)
was changed on HV cores.

 
  +   if (!enabled)
  +   kvmppc_booke_queue_irqprio(vcpu,
  +   BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
  +   } else {
  +   /*
  +* Guest wants SPE, but host kernel doesn't support it.
 
 host kernel or hardware

Ok.

-Mike



RE: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Caraman Mihai Claudiu-B02008
 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Wednesday, July 03, 2013 4:45 PM
 To: Caraman Mihai Claudiu-B02008
 Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
 d...@lists.ozlabs.org
 Subject: Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness
 
 
 On 03.07.2013, at 14:42, Mihai Caraman wrote:
 
  Increase FPU laziness by calling kvmppc_load_guest_fp() just before
  returning to guest instead of each sched in. Without this improvement
  an interrupt may also claim floting point corrupting guest state.
 
 Not sure I follow. Could you please describe exactly what's happening?

This was already discussed on the list, I will forward you the thread.

-Mike



Re: [PATCH] of: Fix address decoding on Bimini and js2x machines

2013-07-03 Thread Rob Herring
On 07/03/2013 01:01 AM, Benjamin Herrenschmidt wrote:
  Commit:
 
   e38c0a1fbc5803cbacdaac0557c70ac8ca5152e7
  of/address: Handle #address-cells > 2 specially
 
 broke real time clock access on Bimini, js2x, and similar powerpc
 machines using the maple platform. That code was indirectly relying
 on the old (broken) behaviour of the translation for the hypertransport
 to ISA bridge.
 
 This fixes it by treating hypertransport as a PCI bus
 
 Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
 CC: sta...@vger.kernel.org [v3.6+]
 ---
 
 Rob, if you have no objection I will put that in powerpc -next

NP.

Acked-by: Rob Herring rob.herr...@calxeda.com

Rob

 
 diff --git a/drivers/of/address.c b/drivers/of/address.c
 index 04da786..7c8221d 100644
 --- a/drivers/of/address.c
 +++ b/drivers/of/address.c
 @@ -106,8 +106,12 @@ static unsigned int of_bus_default_get_flags(const 
 __be32 *
  
  static int of_bus_pci_match(struct device_node *np)
  {
 -   /* "vci" is for the /chaos bridge on 1st-gen PCI powermacs */
 -   return !strcmp(np->type, "pci") || !strcmp(np->type, "vci");
 +   /*
 +* "vci" is for the /chaos bridge on 1st-gen PCI powermacs
 +* "ht" is hypertransport
 +*/
 +   return !strcmp(np->type, "pci") || !strcmp(np->type, "vci") ||
 +   !strcmp(np->type, "ht");
  }
  
  static void of_bus_pci_count_cells(struct device_node *np,
 
 



[PATCH][v3] powerpc/85xx: Move ePAPR paravirt initialization earlier

2013-07-03 Thread Laurentiu Tudor
At console init, when the kernel tries to flush the log buffer
the ePAPR byte-channel based console write fails silently,
losing the buffered messages.
This happens because the ePAPR para-virtualization init isn't
done early enough for the hcall instructions to be patched in,
so the byte-channel write hcall is still a nop.
To fix, change the ePAPR para-virt init to use early device
tree functions and move it into early init.

Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
---
v3:
 - removed compatible check because qemu/kvm doesn't set it as per epapr
 - rearranged code
v2:
 - moved epapr init even earlier (in early init stage). context here:
 http://lists.ozlabs.org/pipermail/linuxppc-dev/2013-June/108116.html
 - reworded commit msg
 - re-based on git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git 
merge

 arch/powerpc/include/asm/epapr_hcalls.h |6 ++
 arch/powerpc/kernel/epapr_paravirt.c|   28 
 arch/powerpc/kernel/setup_32.c  |4 +++-
 arch/powerpc/kernel/setup_64.c  |3 +++
 4 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/epapr_hcalls.h 
b/arch/powerpc/include/asm/epapr_hcalls.h
index d3d6342..86b0ac7 100644
--- a/arch/powerpc/include/asm/epapr_hcalls.h
+++ b/arch/powerpc/include/asm/epapr_hcalls.h
@@ -105,6 +105,12 @@
 extern bool epapr_paravirt_enabled;
 extern u32 epapr_hypercall_start[];
 
+#ifdef CONFIG_EPAPR_PARAVIRT
+int __init epapr_paravirt_early_init(void);
+#else
+static inline int epapr_paravirt_early_init(void) { return 0; }
+#endif
+
 /*
  * We use uintptr_t to define a register because it's guaranteed to be a
  * 32-bit integer on a 32-bit platform, and a 64-bit integer on a 64-bit
diff --git a/arch/powerpc/kernel/epapr_paravirt.c 
b/arch/powerpc/kernel/epapr_paravirt.c
index d44a571..6300c13 100644
--- a/arch/powerpc/kernel/epapr_paravirt.c
+++ b/arch/powerpc/kernel/epapr_paravirt.c
@@ -30,22 +30,20 @@ extern u32 epapr_ev_idle_start[];
 
 bool epapr_paravirt_enabled;
 
-static int __init epapr_paravirt_init(void)
+static int __init early_init_dt_scan_epapr(unsigned long node,
+  const char *uname,
+  int depth, void *data)
 {
-   struct device_node *hyper_node;
const u32 *insts;
-   int len, i;
+   unsigned long len;
+   int i;
 
-   hyper_node = of_find_node_by_path("/hypervisor");
-   if (!hyper_node)
-   return -ENODEV;
-
-   insts = of_get_property(hyper_node, "hcall-instructions", &len);
+   insts = of_get_flat_dt_prop(node, "hcall-instructions", &len);
if (!insts)
-   return -ENODEV;
+   return 0;
 
if (len % 4 || len > (4 * 4))
-   return -ENODEV;
+   return -1;
 
for (i = 0; i  (len / 4); i++) {
patch_instruction(epapr_hypercall_start + i, insts[i]);
@@ -55,13 +53,19 @@ static int __init epapr_paravirt_init(void)
}
 
 #if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
-   if (of_get_property(hyper_node, "has-idle", NULL))
+   if (of_get_flat_dt_prop(node, "has-idle", NULL))
ppc_md.power_save = epapr_ev_idle;
 #endif
 
epapr_paravirt_enabled = true;
 
+   return 1;
+}
+
+int __init epapr_paravirt_early_init(void)
+{
+   of_scan_flat_dt(early_init_dt_scan_epapr, NULL);
+
return 0;
 }
 
-early_initcall(epapr_paravirt_init);
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index a8f54ec..a4bbcae 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -38,6 +38,7 @@
 #include asm/serial.h
 #include asm/udbg.h
 #include asm/mmu_context.h
+#include asm/epapr_hcalls.h
 
 #include setup.h
 
@@ -128,6 +129,8 @@ notrace void __init machine_init(u64 dt_ptr)
/* Do some early initialization based on the flat device tree */
early_init_devtree(__va(dt_ptr));
 
+   epapr_paravirt_early_init();
+
early_init_mmu();
 
probe_machine();
@@ -326,5 +329,4 @@ void __init setup_arch(char **cmdline_p)
 
/* Initialize the MMU context management stuff */
mmu_context_init();
-
 }
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index e379d3f..fd9941a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -66,6 +66,7 @@
 #include asm/code-patching.h
 #include asm/kvm_ppc.h
 #include asm/hugetlb.h
+#include asm/epapr_hcalls.h
 
 #include setup.h
 
@@ -215,6 +216,8 @@ void __init early_setup(unsigned long dt_ptr)
 */
early_init_devtree(__va(dt_ptr));
 
+   epapr_paravirt_early_init();
+
/* Now we know the logical id of our boot cpu, setup the paca. */
setup_paca(paca[boot_cpuid]);
fixup_boot_paca();
-- 
1.7.6.5



Re: [PATCH] of: Fix address decoding on Bimini and js2x machines

2013-07-03 Thread Grant Likely
On Wed, Jul 3, 2013 at 3:10 PM, Rob Herring robherri...@gmail.com wrote:
 On 07/03/2013 01:01 AM, Benjamin Herrenschmidt wrote:
  Commit:

   e38c0a1fbc5803cbacdaac0557c70ac8ca5152e7
   of/address: Handle #address-cells > 2 specially

 broke real time clock access on Bimini, js2x, and similar powerpc
 machines using the maple platform. That code was indirectly relying
 on the old (broken) behaviour of the translation for the hypertransport
 to ISA bridge.

 This fixes it by treating hypertransport as a PCI bus

 Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
 CC: sta...@vger.kernel.org [v3.6+]
 ---

 Rob, if you have no objection I will put that in powerpc -next

 NP.

 Acked-by: Rob Herring rob.herr...@calxeda.com

I'll include this in my 3.11 pull request for Linus

g.


Re: [PATCH][v2] powerpc/85xx: Move ePAPR paravirt initialization earlier

2013-07-03 Thread Scott Wood

On 07/03/2013 07:29:43 AM, Tudor Laurentiu wrote:

On 07/02/2013 08:55 PM, Scott Wood wrote:

On 07/02/2013 07:46:29 AM, Laurentiu Tudor wrote:

- insts = of_get_property(hyper_node, "hcall-instructions", &len);
- if (!insts)
- return -ENODEV;
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
+ if (of_get_flat_dt_prop(node, "has-idle", NULL))
+ ppc_md.power_save = epapr_ev_idle;
+#endif


Why are you doing this before processing hcall-instructions?



Nothing of importance. The code seemed more clear to me.


It seems wrong to expose epapr_ev_idle to ppc_md before the hcall has  
been patched in, even if you don't expect to actually go idle at this  
point.


-Scott


Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 15:55, Caraman Mihai Claudiu-B02008 wrote:

 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Wednesday, July 03, 2013 4:45 PM
 To: Caraman Mihai Claudiu-B02008
 Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
 d...@lists.ozlabs.org
 Subject: Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness
 
 
 On 03.07.2013, at 14:42, Mihai Caraman wrote:
 
 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of each sched in. Without this improvement
 an interrupt may also claim floting point corrupting guest state.
 
 Not sure I follow. Could you please describe exactly what's happening?
 
 This was already discussed on the list, I will forward you the thread.

The only thing I've seen in that thread was some pathetic theoretical case 
where an interrupt handler would enable fp and clobber state carelessly. That's 
not something I'm worried about.

I really don't see where this patch improves anything tbh. It certainly makes 
the code flow more awkward.


Alex



Re: [PATCH][v2] powerpc/85xx: Move ePAPR paravirt initialization earlier

2013-07-03 Thread Tudor Laurentiu

On 07/03/2013 05:52 PM, Scott Wood wrote:

On 07/03/2013 07:29:43 AM, Tudor Laurentiu wrote:

On 07/02/2013 08:55 PM, Scott Wood wrote:

On 07/02/2013 07:46:29 AM, Laurentiu Tudor wrote:

- insts = of_get_property(hyper_node, "hcall-instructions", &len);
- if (!insts)
- return -ENODEV;
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
+ if (of_get_flat_dt_prop(node, "has-idle", NULL))
+ ppc_md.power_save = epapr_ev_idle;
+#endif


Why are you doing this before processing hcall-instructions?



Nothing of importance. The code seemed more clear to me.


It seems wrong to expose epapr_ev_idle to ppc_md before the hcall has
been patched in, even if you don't expect to actually go idle at this
point.



Ah, now I understand your concerns. I've submitted a [v3] restoring the 
original ordering.


---
Best Regards, Laurentiu



Re: [PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 15:53, Caraman Mihai Claudiu-B02008 wrote:

 -#ifdef CONFIG_SPE
 case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
 -   if (vcpu-arch.shared-msr  MSR_SPE)
 -   kvmppc_vcpu_enable_spe(vcpu);
 -   else
 -   kvmppc_booke_queue_irqprio(vcpu,
 -
 BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
 +   if (kvmppc_supports_spe()) {
 +   bool enabled = false;
 +
 +#ifndef CONFIG_KVM_BOOKE_HV
 +   if (vcpu-arch.shared-msr  MSR_SPE) {
 +   kvmppc_vcpu_enable_spe(vcpu);
 +   enabled = true;
 +   }
 +#endif
 
 Why the #ifdef? On HV capable systems kvmppc_supports_spe() will just
 always return false. 
 
 AltiVec and SPE unavailable exceptions follow the same path. While
 kvmppc_supports_spe() will always return false, kvmppc_supports_altivec()
 may not.

There is no chip that supports SPE and HV at the same time. So we'll never hit 
this anyway, since kvmppc_supports_spe() always returns false on HV capable 
systems.

Just add a comment saying so and remove the ifdef :).


Alex

 
 And I don't really understand why HV would be special in the first place
 here. Is it because we're accessing shared-msr?
 
 You are right on HV case MSP[SPV] should be always zero when an unavailabe
 exception take place. The distrinction was made because on non HV the guest
 doesn't have direct access to MSR[SPE]. The name of the bit (not the position)
 was changed on HV cores.
 
 
 +   if (!enabled)
 +   kvmppc_booke_queue_irqprio(vcpu,
 +   BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
 +   } else {
 +   /*
 +* Guest wants SPE, but host kernel doesn't support it.
 
 host kernel or hardware
 
 Ok.
 
 -Mike
 



Re: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 14:42, Mihai Caraman wrote:

 Add KVM Book3E AltiVec support. KVM Book3E FPU support gracefully reuse host
 infrastructure so follow the same approach for AltiVec.
 
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/booke.c |   72 -
 1 files changed, 70 insertions(+), 2 deletions(-)
 
 diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
 index 3cae2e3..c3c3af6 100644
 --- a/arch/powerpc/kvm/booke.c
 +++ b/arch/powerpc/kvm/booke.c
 @@ -98,6 +98,19 @@ static inline bool kvmppc_supports_spe(void)
   return false;
 }
 
 +/*
 + * Always returns true if the AltiVec unit is present, see
 + * kvmppc_core_check_processor_compat().
 + */
 +static inline bool kvmppc_supports_altivec(void)
 +{
 +#ifdef CONFIG_ALTIVEC
 + if (cpu_has_feature(CPU_FTR_ALTIVEC))
 + return true;
 +#endif
 + return false;
 +}
 +
 #ifdef CONFIG_SPE
 void kvmppc_vcpu_disable_spe(struct kvm_vcpu *vcpu)
 {
 @@ -151,6 +164,21 @@ static void kvmppc_vcpu_sync_fpu(struct kvm_vcpu *vcpu)
 }
 
 /*
 + * Simulate AltiVec unavailable fault to load guest state
 + * from thread to AltiVec unit.
 + * It must be called with preemption disabled.
 + */
 +static inline void kvmppc_load_guest_altivec(struct kvm_vcpu *vcpu)
 +{
 + if (kvmppc_supports_altivec()) {
 + if (!(current->thread.regs->msr & MSR_VEC)) {
 + load_up_altivec(NULL);
 + current->thread.regs->msr |= MSR_VEC;

Does this ensure that the kernel saves / restores all altivec state on task 
switch? Does it load it again when it schedules us in? Would definitely be 
worth a comment.

 + }
 + }
 +}
 +
 +/*
  * Helper function for full MSR writes.  No need to call this if only
  * EE/CE/ME/DE/RI are changing.
  */
 @@ -678,6 +706,12 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct 
 kvm_vcpu *vcpu)
   u64 fpr[32];
 #endif
 
 +#ifdef CONFIG_ALTIVEC
 + vector128 vr[32];
 + vector128 vscr;
 + int used_vr = 0;

bool

 +#endif
 +
   if (!vcpu-arch.sane) {
   kvm_run-exit_reason = KVM_EXIT_INTERNAL_ERROR;
   return -EINVAL;
 @@ -716,6 +750,22 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct 
 kvm_vcpu *vcpu)
   kvmppc_load_guest_fp(vcpu);
 #endif
 
 +#ifdef CONFIG_ALTIVEC

/* Switch from user space Altivec to guest Altivec state */

 + if (cpu_has_feature(CPU_FTR_ALTIVEC)) {

Why not use your kvmppc_supports_altivec() helper here?

 + /* Save userspace VEC state in stack */
 + enable_kernel_altivec();

Can't you hide this in the load function? We only want to enable kernel altivec 
for a short time while we shuffle the registers around.

 + memcpy(vr, current->thread.vr, sizeof(current->thread.vr));

vr = current->thread.vr;

 + vscr = current->thread.vscr;
 + used_vr = current->thread.used_vr;
 +
 + /* Restore guest VEC state to thread */
 + memcpy(current->thread.vr, vcpu->arch.vr, 
 sizeof(vcpu->arch.vr));

current->thread.vr = vcpu->arch.vr;

 + current->thread.vscr = vcpu->arch.vscr;
 +
 + kvmppc_load_guest_altivec(vcpu);
 + }

Do we need to do this even when the guest doesn't use Altivec? Can't we just 
load it on demand then once we fault? This code path really should only be a 
prefetch enable when MSR_VEC is already set in guest context.

 +#endif
 +
   ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
   /* No need for kvm_guest_exit. It's done in handle_exit.
 @@ -736,6 +786,23 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct 
 kvm_vcpu *vcpu)
   current-thread.fpexc_mode = fpexc_mode;
 #endif
 
 +#ifdef CONFIG_ALTIVEC

/* Switch from guest Altivec to user space Altivec state */

 + if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
 + /* Save AltiVec state to thread */
 + if (current->thread.regs->msr & MSR_VEC)
 + giveup_altivec(current);
 +
 + /* Save guest state */
 + memcpy(vcpu->arch.vr, current->thread.vr, 
 sizeof(vcpu->arch.vr));
 + vcpu->arch.vscr = current->thread.vscr;
 +
 + /* Restore userspace state */
 + memcpy(current->thread.vr, vr, sizeof(current->thread.vr));
 + current->thread.vscr = vscr;
 + current->thread.used_vr = used_vr;
 + }

Same comments here. If the guest never touched Altivec state, don't bother 
restoring it, as it's still good.


Alex

 +#endif
 +
 out:
  vcpu->mode = OUTSIDE_GUEST_MODE;
   return ret;
 @@ -961,7 +1028,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct 
 kvm_vcpu *vcpu,
   break;
 
   case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
 - if (kvmppc_supports_spe()) {
 + if (kvmppc_supports_altivec() || kvmppc_supports_spe()) {
   bool enabled = false;
 
 #ifndef CONFIG_KVM_BOOKE_HV
 @@ 

RE: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Caraman Mihai Claudiu-B02008
  Increase FPU laziness by calling kvmppc_load_guest_fp() just before
  returning to guest instead of each sched in. Without this improvement
 an interrupt may also claim floating point, corrupting guest state.
 
  Not sure I follow. Could you please describe exactly what's happening?
 
  This was already discussed on the list, I will forward you the thread.
 
 The only thing I've seen in that thread was some pathetic theoretical
 case where an interrupt handler would enable fp and clobber state
 carelessly. That's not something I'm worried about.

Neither me though I don't find it pathetic. Please refer it to Scott.

 
 I really don't see where this patch improves anything tbh. It certainly
 makes the code flow more awkward.

I was pointing you to this: The idea of FPU/AltiVec laziness that the kernel
is struggling to achieve is to reduce the number of store/restore operations.
Without this improvement we restore the unit each time we are sched in. If
another process takes ownership of the unit (on SMP it's even worse but don't
bother with this) the kernel saves the unit state to the qemu task. This can happen
multiple times during handle_exit().

Do you see it now? 
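The ownership accounting described above can be sketched as a toy userspace model (all names hypothetical, not the kernel's actual lazy-FPU code): a reload is only needed when some other context has claimed the unit since the guest last owned it, which is exactly what loading once just before guest entry, rather than at every sched-in, minimizes:

```c
#include <assert.h>

/* Toy model of lazy unit ownership (hypothetical, not kernel code). */
static int unit_owner;		/* 0 = nobody, otherwise a context id */
static int restore_count;	/* how many expensive reloads happened */

/* Guest entry path: reload guest state only if someone stole the unit. */
static void load_guest_fp_model(int guest_id)
{
	if (unit_owner != guest_id) {
		unit_owner = guest_id;
		restore_count++;
	}
}

/* An interrupt or another task claiming the unit, saving its state to the
 * current (qemu) task: the event that forces the next reload. */
static void claim_unit_model(void)
{
	unit_owner = 0;
}
```

With this model, repeated sched-ins while the guest still owns the unit cost nothing; only an intervening claim forces another restore.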

-Mike


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


RE: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Caraman Mihai Claudiu-B02008
  + * Simulate AltiVec unavailable fault to load guest state
  + * from thread to AltiVec unit.
  + * It requires to be called with preemption disabled.
  + */
  +static inline void kvmppc_load_guest_altivec(struct kvm_vcpu *vcpu)
  +{
  +   if (kvmppc_supports_altivec()) {
  +   if (!(current->thread.regs->msr & MSR_VEC)) {
  +   load_up_altivec(NULL);
  +   current->thread.regs->msr |= MSR_VEC;
 
 Does this ensure that the kernel saves / restores all altivec state on
 task switch? Does it load it again when it schedules us in? Would
 definitely be worth a comment.

These units are _LAZY_ !!! Only the SMP kernel saves the state when it schedules out.

 
  +   }
  +   }
  +}
  +
  +/*
   * Helper function for full MSR writes.  No need to call this if only
   * EE/CE/ME/DE/RI are changing.
   */
  @@ -678,6 +706,12 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
  u64 fpr[32];
  #endif
 
  +#ifdef CONFIG_ALTIVEC
  +   vector128 vr[32];
  +   vector128 vscr;
  +   int used_vr = 0;
 
 bool

Why don't you ask first to change the type of used_vr member in
/include/asm/processor.h?

 
  +#endif
  +
  if (!vcpu->arch.sane) {
  kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
  return -EINVAL;
  @@ -716,6 +750,22 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
  kvmppc_load_guest_fp(vcpu);
  #endif
 
  +#ifdef CONFIG_ALTIVEC
 
 /* Switch from user space Altivec to guest Altivec state */
 
  +   if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
 
 Why not use your kvmppc_supports_altivec() helper here?

Give it a try ... because Linus guarded these members with CONFIG_ALTIVEC :)

 
  +   /* Save userspace VEC state in stack */
  +   enable_kernel_altivec();
 
 Can't you hide this in the load function? We only want to enable kernel
 altivec for a short time while we shuffle the registers around.
 
  +   memcpy(vr, current->thread.vr, sizeof(current->thread.vr));
 
 vr = current->thread.vr;

Are you kidding, be more careful with arrays :) 
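The point about arrays is a plain C constraint: `thread.vr` is an array member, so it cannot be copied with `=` the way the `vscr` struct member can; `memcpy()` is required. A minimal userspace sketch (the `vec128` type and struct are stand-ins for illustration, not the kernel definitions):

```c
#include <assert.h>
#include <string.h>

/* Stand-ins for vector128 and the thread register file (not kernel code). */
typedef struct { unsigned int u[4]; } vec128;

struct thread_vec {
	vec128 vr[32];	/* array member: '=' does not compile, memcpy() does */
	vec128 vscr;	/* struct member: plain assignment works */
};

/* Save the AltiVec portion of a thread, the way the patch's memcpy does. */
static void save_vec(const struct thread_vec *t, vec128 *vr_out, vec128 *vscr_out)
{
	/* "vr_out = t->vr" would be a constraint violation: arrays are
	 * not assignable lvalues in C, hence the memcpy(). */
	memcpy(vr_out, t->vr, sizeof(t->vr));
	*vscr_out = t->vscr;	/* struct assignment is fine here */
}
```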

 
  +   vscr = current->thread.vscr;
  +   used_vr = current->thread.used_vr;
  +
  +   /* Restore guest VEC state to thread */
  +   memcpy(current->thread.vr, vcpu->arch.vr, sizeof(vcpu->arch.vr));
 
 current->thread.vr = vcpu->arch.vr;
  +
  +   kvmppc_load_guest_altivec(vcpu);
  +   }
 
 Do we need to do this even when the guest doesn't use Altivec? Can't we
 just load it on demand then once we fault? This code path really should
 only be a prefetch enable when MSR_VEC is already set in guest context.

No we can't, read 6/6. 

 
  +#endif
  +
  ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
  /* No need for kvm_guest_exit. It's done in handle_exit.
  @@ -736,6 +786,23 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
  current-thread.fpexc_mode = fpexc_mode;
  #endif
 
  +#ifdef CONFIG_ALTIVEC
 
 /* Switch from guest Altivec to user space Altivec state */
 
  +   if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
  +   /* Save AltiVec state to thread */
  +   if (current->thread.regs->msr & MSR_VEC)
  +   giveup_altivec(current);
  +
  +   /* Save guest state */
  +   memcpy(vcpu->arch.vr, current->thread.vr, sizeof(vcpu->arch.vr));
  +   vcpu->arch.vscr = current->thread.vscr;
  +
  +   /* Restore userspace state */
  +   memcpy(current->thread.vr, vr, sizeof(current->thread.vr));
  +   current->thread.vscr = vscr;
  +   current->thread.used_vr = used_vr;
  +   }
 
 Same comments here. If the guest never touched Altivec state, don't
 bother restoring it, as it's still good.

LOL, the mighty guest kernel does whatever it wants with AltiVec and
doesn't inform us.

-Mike



Re: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 18:09, Caraman Mihai Claudiu-B02008 wrote:

 + * Simulate AltiVec unavailable fault to load guest state
 + * from thread to AltiVec unit.
 + * It requires to be called with preemption disabled.
 + */
 +static inline void kvmppc_load_guest_altivec(struct kvm_vcpu *vcpu)
 +{
 +   if (kvmppc_supports_altivec()) {
  +   if (!(current->thread.regs->msr & MSR_VEC)) {
  +   load_up_altivec(NULL);
  +   current->thread.regs->msr |= MSR_VEC;
 
 Does this ensure that the kernel saves / restores all altivec state on
 task switch? Does it load it again when it schedules us in? Would
 definitely be worth a comment.
 
  These units are _LAZY_ !!! Only the SMP kernel saves the state when it schedules
  out.

Then how do you ensure that altivec state is still consistent after another 
altivec user got scheduled? Have I missed a vcpu_load hook anywhere?

 
 
 +   }
 +   }
 +}
 +
 +/*
 * Helper function for full MSR writes.  No need to call this if only
 * EE/CE/ME/DE/RI are changing.
 */
 @@ -678,6 +706,12 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
 u64 fpr[32];
 #endif
 
 +#ifdef CONFIG_ALTIVEC
 +   vector128 vr[32];
 +   vector128 vscr;
 +   int used_vr = 0;
 
 bool
 
 Why don't you ask first to change the type of used_vr member in
 /include/asm/processor.h?

Ah, it's a copy from thread. Fair enough.

 
 
 +#endif
 +
  if (!vcpu->arch.sane) {
  kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 return -EINVAL;
 @@ -716,6 +750,22 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
 kvmppc_load_guest_fp(vcpu);
 #endif
 
 +#ifdef CONFIG_ALTIVEC
 
 /* Switch from user space Altivec to guest Altivec state */
 
 +   if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
 
 Why not use your kvmppc_supports_altivec() helper here?
 
  Give it a try ... because Linus guarded these members with CONFIG_ALTIVEC :)

Huh? You already are in an #ifdef CONFIG_ALTIVEC here. I think it's a good idea 
to be consistent in helper usage. And the name you gave to the helper 
(kvmppc_supports_altivec) is actually quite nice and tells us exactly what 
we're asking for.

 
 
 +   /* Save userspace VEC state in stack */
 +   enable_kernel_altivec();
 
 Can't you hide this in the load function? We only want to enable kernel
 altivec for a short time while we shuffle the registers around.
 
  +   memcpy(vr, current->thread.vr, sizeof(current->thread.vr));
  
  vr = current->thread.vr;
 
 Are you kidding, be more careful with arrays :) 

Bleks :).

 
 
  +   vscr = current->thread.vscr;
  +   used_vr = current->thread.used_vr;
  +
  +   /* Restore guest VEC state to thread */
  +   memcpy(current->thread.vr, vcpu->arch.vr, sizeof(vcpu->arch.vr));
  
  current->thread.vr = vcpu->arch.vr;
  
  +   current->thread.vscr = vcpu->arch.vscr;
 +
 +   kvmppc_load_guest_altivec(vcpu);
 +   }
 
 Do we need to do this even when the guest doesn't use Altivec? Can't we
 just load it on demand then once we fault? This code path really should
 only be a prefetch enable when MSR_VEC is already set in guest context.
 
 No we can't, read 6/6. 

So we have to make sure we're completely unlazy when it comes to a KVM guest. 
Are we?


Alex

 
 
 +#endif
 +
 ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
 /* No need for kvm_guest_exit. It's done in handle_exit.
 @@ -736,6 +786,23 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
 current-thread.fpexc_mode = fpexc_mode;
 #endif
 
 +#ifdef CONFIG_ALTIVEC
 
 /* Switch from guest Altivec to user space Altivec state */
 
 +   if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
 +   /* Save AltiVec state to thread */
  +   if (current->thread.regs->msr & MSR_VEC)
  +   giveup_altivec(current);
  +
  +   /* Save guest state */
  +   memcpy(vcpu->arch.vr, current->thread.vr, sizeof(vcpu->arch.vr));
  +   vcpu->arch.vscr = current->thread.vscr;
  +
  +   /* Restore userspace state */
  +   memcpy(current->thread.vr, vr, sizeof(current->thread.vr));
  +   current->thread.vscr = vscr;
  +   current->thread.used_vr = used_vr;
 +   }
 
 Same comments here. If the guest never touched Altivec state, don't
 bother restoring it, as it's still good.
 
 LOL, the mighty guest kernel does whatever he wants with AltiVec and
 doesn't inform us.
 
 -Mike
 



RE: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Caraman Mihai Claudiu-B02008
  +
if (!vcpu->arch.sane) {
kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
return -EINVAL;
  @@ -716,6 +750,22 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
  struct kvm_vcpu *vcpu)
kvmppc_load_guest_fp(vcpu);
  #endif
 
  +#ifdef CONFIG_ALTIVEC
 
  /* Switch from user space Altivec to guest Altivec state */
 
  + if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
 
  Why not use your kvmppc_supports_altivec() helper here?
 
  Give it a try ... because Linus guarded these members with
 CONFIG_ALTIVEC :)
 
 Huh? You already are in an #ifdef CONFIG_ALTIVEC here. I think it's a
 good idea to be consistent in helper usage. And the name you gave to the
 helper (kvmppc_supports_altivec) is actually quite nice and tells us
 exactly what we're asking for.

I thought you were asking to replace #ifdef CONFIG_ALTIVEC.

  Do we need to do this even when the guest doesn't use Altivec? Can't
 we
  just load it on demand then once we fault? This code path really
 should
  only be a prefetch enable when MSR_VEC is already set in guest
 context.
 
  No we can't, read 6/6.
 
 So we have to make sure we're completely unlazy when it comes to a KVM
 guest. Are we?

Yes, because MSR[SPV] is under its control.

-Mike




[PATCH 2/2] KVM: PPC: Book3E: Add LRAT error exception handler

2013-07-03 Thread Mihai Caraman
With the LRAT (Logical to Real Address Translation) error exception handler in the
kernel, KVM needs to add the counterpart, otherwise the build will break.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/bookehv_interrupts.S |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/bookehv_interrupts.S 
b/arch/powerpc/kvm/bookehv_interrupts.S
index e8ed7d6..a0d6929 100644
--- a/arch/powerpc/kvm/bookehv_interrupts.S
+++ b/arch/powerpc/kvm/bookehv_interrupts.S
@@ -319,6 +319,8 @@ kvm_handler BOOKE_INTERRUPT_DEBUG, EX_PARAMS(DBG), \
SPRN_DSRR0, SPRN_DSRR1, 0
 kvm_handler BOOKE_INTERRUPT_DEBUG, EX_PARAMS(CRIT), \
SPRN_CSRR0, SPRN_CSRR1, 0
+kvm_handler BOOKE_INTERRUPT_LRAT_ERROR, EX_PARAMS(GEN), \
+   SPRN_SRR0, SPRN_SRR1, (NEED_EMU | NEED_DEAR | NEED_ESR)
 #else
 /*
  * For input register values, see arch/powerpc/include/asm/kvm_booke_hv_asm.h
-- 
1.7.3.4




[PATCH 1/2] powerpc/booke64: Add LRAT error exception handler

2013-07-03 Thread Mihai Caraman
Add LRAT (Logical to Real Address Translation) error exception handler to
Booke3E 64-bit kernel. LRAT support in KVM will follow afterwards.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/include/asm/kvm_asm.h   |1 +
 arch/powerpc/include/asm/reg_booke.h |1 +
 arch/powerpc/kernel/exceptions-64e.S |   14 ++
 3 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_asm.h 
b/arch/powerpc/include/asm/kvm_asm.h
index 851bac7..83b91e5 100644
--- a/arch/powerpc/include/asm/kvm_asm.h
+++ b/arch/powerpc/include/asm/kvm_asm.h
@@ -74,6 +74,7 @@
 #define BOOKE_INTERRUPT_GUEST_DBELL_CRIT 39
 #define BOOKE_INTERRUPT_HV_SYSCALL 40
 #define BOOKE_INTERRUPT_HV_PRIV 41
+#define BOOKE_INTERRUPT_LRAT_ERROR 42
 
 /* book3s */
 
diff --git a/arch/powerpc/include/asm/reg_booke.h 
b/arch/powerpc/include/asm/reg_booke.h
index b417de3..6b113e1 100644
--- a/arch/powerpc/include/asm/reg_booke.h
+++ b/arch/powerpc/include/asm/reg_booke.h
@@ -101,6 +101,7 @@
 #define SPRN_IVOR39	0x1B1	/* Interrupt Vector Offset Register 39 */
 #define SPRN_IVOR40	0x1B2	/* Interrupt Vector Offset Register 40 */
 #define SPRN_IVOR41	0x1B3	/* Interrupt Vector Offset Register 41 */
+#define SPRN_IVOR42	0x1B4	/* Interrupt Vector Offset Register 42 */
 #define SPRN_GIVOR2	0x1B8	/* Guest IVOR2 */
 #define SPRN_GIVOR3	0x1B9	/* Guest IVOR3 */
 #define SPRN_GIVOR4	0x1BA	/* Guest IVOR4 */
diff --git a/arch/powerpc/kernel/exceptions-64e.S 
b/arch/powerpc/kernel/exceptions-64e.S
index 0c379e9..e08b469 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -308,6 +308,7 @@ interrupt_base_book3e:  
/* fake trap */
EXCEPTION_STUB(0x2e0, guest_doorbell_crit)
EXCEPTION_STUB(0x300, hypercall)
EXCEPTION_STUB(0x320, ehpriv)
+   EXCEPTION_STUB(0x340, lrat_error)
 
.globl interrupt_end_book3e
 interrupt_end_book3e:
@@ -676,6 +677,17 @@ kernel_dbg_exc:
bl  .unknown_exception
b   .ret_from_except
 
+/* LRAT Error interrupt */
+   START_EXCEPTION(lrat_error);
+   NORMAL_EXCEPTION_PROLOG(0x340, BOOKE_INTERRUPT_LRAT_ERROR,
+   PROLOG_ADDITION_NONE)
+   EXCEPTION_COMMON(0x340, PACA_EXGEN, INTS_KEEP)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+   bl  .save_nvgprs
+   INTS_RESTORE_HARD
+   bl  .unknown_exception
+   b   .ret_from_except
+
 /*
 * An interrupt came in while soft-disabled; We mark paca->irq_happened
  * accordingly and if the interrupt is level sensitive, we hard disable
@@ -858,6 +870,7 @@ BAD_STACK_TRAMPOLINE(0x2e0)
 BAD_STACK_TRAMPOLINE(0x300)
 BAD_STACK_TRAMPOLINE(0x310)
 BAD_STACK_TRAMPOLINE(0x320)
+BAD_STACK_TRAMPOLINE(0x340)
 BAD_STACK_TRAMPOLINE(0x400)
 BAD_STACK_TRAMPOLINE(0x500)
 BAD_STACK_TRAMPOLINE(0x600)
@@ -1410,6 +1423,7 @@ _GLOBAL(setup_doorbell_ivors)
 _GLOBAL(setup_ehv_ivors)
SET_IVOR(40, 0x300) /* Embedded Hypervisor System Call */
SET_IVOR(41, 0x320) /* Embedded Hypervisor Privilege */
+   SET_IVOR(42, 0x340) /* LRAT Error */
SET_IVOR(38, 0x2c0) /* Guest Processor Doorbell */
SET_IVOR(39, 0x2e0) /* Guest Processor Doorbell Crit/MC */
blr
-- 
1.7.3.4




Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 17:41, Caraman Mihai Claudiu-B02008 wrote:

 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of each sched in. Without this improvement
 an interrupt may also claim floating point, corrupting guest state.
 
 Not sure I follow. Could you please describe exactly what's happening?
 
 This was already discussed on the list, I will forward you the thread.
 
 The only thing I've seen in that thread was some pathetic theoretical
 case where an interrupt handler would enable fp and clobber state
 carelessly. That's not something I'm worried about.
 
 Neither me though I don't find it pathetic. Please refer it to Scott.

If from Linux's point of view we look like a user space program with active 
floating point registers, we don't have to worry about this case. Kernel code 
that would clobber that fp state would clobber random user space's fp state too.

 
 
 I really don't see where this patch improves anything tbh. It certainly
 makes the code flow more awkward.
 
 I was pointing you to this: The idea of FPU/AltiVec laziness that the kernel
 is struggling to achieve is to reduce the number of store/restore operations.
 Without this improvement we restore the unit each time we are sched in. If
 another process takes ownership of the unit (on SMP it's even worse but don't
 bother with this) the kernel saves the unit state to the qemu task. This can
 happen multiple times during handle_exit().
 
 Do you see it now? 

Yup. Looks good. The code flow is very hard to follow though - there are a lot 
of implicit assumptions that don't get documented anywhere. For example the 
fact that we rely on giveup_fpu() to remove MSR_FP from our thread.


Alex



Re: [PATCH 2/2] DMA: Freescale: update driver to support 8-channel DMA engine

2013-07-03 Thread Scott Wood

On 07/02/2013 10:47:44 PM, Hongbo Zhang wrote:

On 07/03/2013 07:13 AM, Scott Wood wrote:
Wait a second -- how are we even getting into this code on these new  
DMA controllers?  All 85xx-family DMA controllers use  
fsldma_chan_irq directly.



Right, we are using fsldma_chan_irq, this code never run.
I just see there is such code for elo/eloplus DMA controllers, so I  
update it for the new 8-channel DMA.


That code is used for elo (e.g. mpc83xx DMA), but not eloplus.

-Scott


Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Scott Wood

On 07/03/2013 10:11:50 AM, Alexander Graf wrote:


On 03.07.2013, at 15:55, Caraman Mihai Claudiu-B02008 wrote:

 -Original Message-
 From: Alexander Graf [mailto:ag...@suse.de]
 Sent: Wednesday, July 03, 2013 4:45 PM
 To: Caraman Mihai Claudiu-B02008
 Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
 d...@lists.ozlabs.org
 Subject: Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness


 On 03.07.2013, at 14:42, Mihai Caraman wrote:

 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of each sched in. Without this improvement
 an interrupt may also claim floating point, corrupting guest state.

 Not sure I follow. Could you please describe exactly what's  
happening?


 This was already discussed on the list, I will forward you the  
thread.


The only thing I've seen in that thread was some pathetic theoretical  
case where an interrupt handler would enable fp and clobber state  
carelessly. That's not something I'm worried about.


On x86 floating point registers can be used for memcpy(), which can be  
used in interrupt handlers.  Just because it doesn't happen on PPC  
today doesn't make it a pathetic theoretical case that we should  
ignore and leave a landmine buried in the KVM code.  Even power7 is  
using something similar for copyuser (which isn't called from interrupt  
context, but it's not a huge leap from that to doing it in memcpy).


It also doesn't seem *that* farfetched that some driver for unusual  
hardware could decide it needs FP in its interrupt handler, and call  
the function that is specifically meant to ensure that.  It's frowned  
upon, but that doesn't mean nobody will ever do it.


-Scott


Re: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 18:49, Caraman Mihai Claudiu-B02008 wrote:

 +
   if (!vcpu->arch.sane) {
   kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
   return -EINVAL;
 @@ -716,6 +750,22 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
 struct kvm_vcpu *vcpu)
   kvmppc_load_guest_fp(vcpu);
 #endif
 
 +#ifdef CONFIG_ALTIVEC
 
 /* Switch from user space Altivec to guest Altivec state */
 
 + if (cpu_has_feature(CPU_FTR_ALTIVEC)) {
 
 Why not use your kvmppc_supports_altivec() helper here?
 
  Give it a try ... because Linus guarded these members with
 CONFIG_ALTIVEC :)
 
 Huh? You already are in an #ifdef CONFIG_ALTIVEC here. I think it's a
 good idea to be consistent in helper usage. And the name you gave to the
 helper (kvmppc_supports_altivec) is actually quite nice and tells us
 exactly what we're asking for.
 
  I thought you were asking to replace #ifdef CONFIG_ALTIVEC.
 
 Do we need to do this even when the guest doesn't use Altivec? Can't
 we
 just load it on demand then once we fault? This code path really
 should
 only be a prefetch enable when MSR_VEC is already set in guest
 context.
 
 No we can't, read 6/6.
 
 So we have to make sure we're completely unlazy when it comes to a KVM
 guest. Are we?
 
 Yes, because MSR[SPV] is under its control.

Oh, sure, KVM wants it unlazy. That part is obvious. But does the kernel always 
give us unlazyness? The way I read the code, process.c goes lazy when 
!CONFIG_SMP.

So the big question is why we're manually enforcing FPU giveup, but not Altivec 
giveup? One of the two is probably wrong :).


Alex



Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 19:07, Scott Wood wrote:

 On 07/03/2013 10:11:50 AM, Alexander Graf wrote:
 On 03.07.2013, at 15:55, Caraman Mihai Claudiu-B02008 wrote:
  -Original Message-
  From: Alexander Graf [mailto:ag...@suse.de]
  Sent: Wednesday, July 03, 2013 4:45 PM
  To: Caraman Mihai Claudiu-B02008
  Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
  d...@lists.ozlabs.org
  Subject: Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness
 
 
  On 03.07.2013, at 14:42, Mihai Caraman wrote:
 
  Increase FPU laziness by calling kvmppc_load_guest_fp() just before
  returning to guest instead of each sched in. Without this improvement
  an interrupt may also claim floating point, corrupting guest state.
 
  Not sure I follow. Could you please describe exactly what's happening?
 
  This was already discussed on the list, I will forward you the thread.
 The only thing I've seen in that thread was some pathetic theoretical case 
 where an interrupt handler would enable fp and clobber state carelessly. 
 That's not something I'm worried about.
 
 On x86 floating point registers can be used for memcpy(), which can be used 
 in interrupt handlers.  Just because it doesn't happen on PPC today doesn't 
 make it a pathetic theoretical case that we should ignore and leave a 
 landmine buried in the KVM code.  Even power7 is using something similar for 
 copyuser (which isn't called from interrupt context, but it's not a huge leap 
 from that to doing it in memcpy).
 
 It also doesn't seem *that* farfetched that some driver for unusual hardware 
 could decide it needs FP in its interrupt handler, and call the function that 
 is specifically meant to ensure that.  It's frowned upon, but that doesn't 
 mean nobody will ever do it.

Oh, sure. But in that case I would strongly hope that the driver first saves 
off the current FPU state to the thread struct before it goes off and uses them 
for itself. Otherwise it would break user space, no?


Alex



Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Scott Wood

On 07/03/2013 11:59:45 AM, Alexander Graf wrote:


On 03.07.2013, at 17:41, Caraman Mihai Claudiu-B02008 wrote:

 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of each sched in. Without this improvement
 an interrupt may also claim floating point, corrupting guest state.


 Not sure I follow. Could you please describe exactly what's happening?


 This was already discussed on the list, I will forward you the thread.


 The only thing I've seen in that thread was some pathetic theoretical
 case where an interrupt handler would enable fp and clobber state
 carelessly. That's not something I'm worried about.

 Neither me though I don't find it pathetic. Please refer it to Scott.


If from Linux's point of view we look like a user space program with  
active floating point registers, we don't have to worry about this  
case. Kernel code that would clobber that fp state would clobber  
random user space's fp state too.


This patch makes it closer to how it works with a user space program.   
Or rather, it reduces the time window when we don't (and can't) act  
like a normal userspace program -- and ensures that we have interrupts  
disabled during that window.  An interrupt can't randomly clobber FP  
state; it has to call enable_kernel_fp() just like KVM does.   
enable_kernel_fp() clears the userspace MSR_FP to ensure that the state  
it saves gets restored before userspace uses it again, but that won't  
have any effect on guest execution (especially in HV-mode).  Thus  
kvmppc_load_guest_fp() needs to be atomic with guest entry.   
Conceptually it's like taking an automatic FP unavailable trap when we  
enter the guest, since we can't be lazy in HV-mode.
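The enable_kernel_fp() contract described above can be modelled in a few lines (a hypothetical userspace sketch; the MSR bit value, struct, and function names are illustrative, not the kernel's): claiming the unit for the kernel clears the task's MSR_FP so the next userspace FP use re-faults and restores state, while a guest entered in HV-mode sees no such fault and must be reloaded explicitly before entry.

```c
#include <assert.h>

#define MSR_FP_MODEL 0x2000ul	/* illustrative bit, not the real MSR layout */

struct task_model {
	unsigned long msr;
	int fp_regs_valid;	/* 1 if the live registers hold this task's state */
};

/* Kernel claims the FPU: task state is saved, registers clobbered,
 * and MSR_FP is cleared so the task re-faults before using FP again. */
static void enable_kernel_fp_model(struct task_model *t)
{
	t->fp_regs_valid = 0;
	t->msr &= ~MSR_FP_MODEL;
}

/* FP-unavailable trap: lazily restore state on the task's next FP use. */
static void fp_unavailable_model(struct task_model *t)
{
	t->fp_regs_valid = 1;
	t->msr |= MSR_FP_MODEL;
}
```

A guest has no equivalent of the fp-unavailable path here, which is why the load must be done unconditionally, atomically with guest entry.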


 I really don't see where this patch improves anything tbh. It  
certainly

 makes the code flow more awkward.

 I was pointing you to this: The idea of FPU/AltiVec laziness that the kernel
 is struggling to achieve is to reduce the number of store/restore operations.
 Without this improvement we restore the unit each time we are sched in. If
 another process takes ownership of the unit (on SMP it's even worse but don't
 bother with this) the kernel saves the unit state to the qemu task. This can
 happen multiple times during handle_exit().

 Do you see it now?

Yup. Looks good. The code flow is very hard to follow though - there  
are a lot of implicit assumptions that don't get documented anywhere.  
For example the fact that we rely on giveup_fpu() to remove MSR_FP  
from our thread.


That's not new to this patch...

-Scott


Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Scott Wood

On 07/03/2013 07:42:36 AM, Mihai Caraman wrote:

Increase FPU laziness by calling kvmppc_load_guest_fp() just before
returning to guest instead of each sched in. Without this improvement
an interrupt may also claim floating point, corrupting guest state.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 113961f..3cae2e3 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);

} else {
kvmppc_lazy_ee_enable();
+   kvmppc_load_guest_fp(vcpu);
}


This should go before the kvmppc_lazy_ee_enable().

-Scott


Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 19:17, Scott Wood wrote:

 On 07/03/2013 11:59:45 AM, Alexander Graf wrote:
 On 03.07.2013, at 17:41, Caraman Mihai Claudiu-B02008 wrote:
  Increase FPU laziness by calling kvmppc_load_guest_fp() just before
  returning to guest instead of each sched in. Without this improvement
   an interrupt may also claim floating point, corrupting guest state.
 
  Not sure I follow. Could you please describe exactly what's happening?
 
  This was already discussed on the list, I will forward you the thread.
 
  The only thing I've seen in that thread was some pathetic theoretical
  case where an interrupt handler would enable fp and clobber state
  carelessly. That's not something I'm worried about.
 
  Neither me though I don't find it pathetic. Please refer it to Scott.
 If from Linux's point of view we look like a user space program with active 
 floating point registers, we don't have to worry about this case. Kernel 
 code that would clobber that fp state would clobber random user space's fp 
 state too.
 
 This patch makes it closer to how it works with a user space program.  Or 
 rather, it reduces the time window when we don't (and can't) act like a 
 normal userspace program -- and ensures that we have interrupts disabled 
 during that window.  An interrupt can't randomly clobber FP state; it has to 
 call enable_kernel_fp() just like KVM does.  enable_kernel_fp() clears the 
 userspace MSR_FP to ensure that the state it saves gets restored before 
 userspace uses it again, but that won't have any effect on guest execution 
 (especially in HV-mode).  Thus kvmppc_load_guest_fp() needs to be atomic with 
 guest entry.  Conceptually it's like taking an automatic FP unavailable trap 
 when we enter the guest, since we can't be lazy in HV-mode.

Yep. Once I understood that point things became clear to me :).

 
  I really don't see where this patch improves anything tbh. It certainly
  makes the code flow more awkward.
 
  I was pointing you to this: The idea of FPU/AltiVec laziness that the 
  kernel
  is struggling to achieve is to reduce the number of store/restore 
  operations.
  Without this improvement we restore the unit each time we are sched in. If
  another process takes ownership of the unit (on SMP it's even worse but
  don't bother with this) the kernel saves the unit state to the qemu task.
  This can happen multiple times during handle_exit().
 
  Do you see it now?
 Yup. Looks good. The code flow is very hard to follow though - there are a 
 lot of implicit assumptions that don't get documented anywhere. For example 
 the fact that we rely on giveup_fpu() to remove MSR_FP from our thread.
 
 That's not new to this patch...

Would be nice to fix nevertheless. I'm probably not going to be the last person 
forgetting how this works.


Alex



Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 19:18, Scott Wood wrote:

 On 07/03/2013 07:42:36 AM, Mihai Caraman wrote:
 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of each sched in. Without this improvement
  an interrupt may also claim floating point, corrupting guest state.
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)
 diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
 index 113961f..3cae2e3 100644
 --- a/arch/powerpc/kvm/booke.c
 +++ b/arch/powerpc/kvm/booke.c
 @@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
  	r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
  } else {
  kvmppc_lazy_ee_enable();
 +kvmppc_load_guest_fp(vcpu);
  }
 
 This should go before the kvmppc_lazy_ee_enable().

Why? What difference does that make? We're running with interrupts disabled 
here, right?


Alex



Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Scott Wood

On 07/03/2013 12:23:16 PM, Alexander Graf wrote:


On 03.07.2013, at 19:18, Scott Wood wrote:

 On 07/03/2013 07:42:36 AM, Mihai Caraman wrote:
 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of on each sched-in. Without this improvement
 an interrupt may also claim floating point, corrupting guest state.
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)
 diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
 index 113961f..3cae2e3 100644
 --- a/arch/powerpc/kvm/booke.c
 +++ b/arch/powerpc/kvm/booke.c
  @@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
  			r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);

} else {
kvmppc_lazy_ee_enable();
 +  kvmppc_load_guest_fp(vcpu);
}

 This should go before the kvmppc_lazy_ee_enable().

Why? What difference does that make? We're running with interrupts  
disabled here, right?


Yes, and we want to minimize the code we run where we have interrupts  
disabled but the lazy ee state says they're enabled.  So  
kvmppc_lazy_ee_enable() should be the last thing we do before entering  
asm code.


See http://patchwork.ozlabs.org/patch/249565/

-Scott


Re: [PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Scott Wood

 On 07/03/2013 10:13:57 AM, Alexander Graf wrote:


On 03.07.2013, at 15:53, Caraman Mihai Claudiu-B02008 wrote:

 -#ifdef CONFIG_SPE
 	case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
 -		if (vcpu->arch.shared->msr & MSR_SPE)
 -			kvmppc_vcpu_enable_spe(vcpu);
 -		else
 -			kvmppc_booke_queue_irqprio(vcpu,
 -				BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
 +		if (kvmppc_supports_spe()) {
 +			bool enabled = false;
 +
 +#ifndef CONFIG_KVM_BOOKE_HV
 +			if (vcpu->arch.shared->msr & MSR_SPE) {
 +				kvmppc_vcpu_enable_spe(vcpu);
 +				enabled = true;
 +			}
 +#endif

 Why the #ifdef? On HV capable systems kvmppc_supports_spe() will  
just

 always return false.

 AltiVec and SPE unavailable exceptions follows the same path. While
 kvmppc_supports_spe() will always return false  
kvmppc_supports_altivec()

 may not.

There is no chip that supports SPE and HV at the same time. So we'll  
never hit this anyway, since kvmppc_supports_spe() always returns  
false on HV capable systems.


Just add a comment saying so and remove the ifdef :).


kvmppc_vcpu_enable_spe isn't defined unless CONFIG_SPE is defined.   
More seriously, MSR_SPE is the same as MSR_VEC, so we shouldn't  
interpret it as SPE unless CONFIG_SPE is defined.  And we can't rely on  
the if (kvmppc_supports_spe()) here because a later patch changes it  
to if (kvmppc_supports_altivec() || kvmppc_supports_spe()).  So I  
think we still need the ifdef CONFIG_SPE here.


As for the HV ifndef, we should try not to confuse HV/PR with  
e500mc/e500v2, even if we happen to only run HV on e500mc and PR on  
e500v2.  We would not want to call kvmppc_vcpu_enable_spe() here on a  
hypothetical HV target with SPE.  And we *would* want to call  
kvmppc_vcpu_enable_fp() here on a hypothetical PR target with normal  
FP.  It's one thing to leave out the latter, since it would involve  
writing actual code that we have no way to test at this point, but  
quite another to leave out the proper conditions for when we want to  
run code that we do have.


-Scott


Re: [RFC PATCH 5/6] KVM: PPC: Book3E: Add ONE_REG AltiVec support

2013-07-03 Thread Scott Wood

On 07/03/2013 07:11:52 AM, Caraman Mihai Claudiu-B02008 wrote:

 -Original Message-
 From: Wood Scott-B07421
 Sent: Wednesday, June 05, 2013 1:40 AM
 To: Caraman Mihai Claudiu-B02008
 Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
 d...@lists.ozlabs.org; Caraman Mihai Claudiu-B02008
 Subject: Re: [RFC PATCH 5/6] KVM: PPC: Book3E: Add ONE_REG AltiVec
 support

 On 06/03/2013 03:54:27 PM, Mihai Caraman wrote:
  Add ONE_REG support for AltiVec on Book3E.
 
  Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
  ---
   arch/powerpc/kvm/booke.c |   32 
   1 files changed, 32 insertions(+), 0 deletions(-)
 
  diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
  index 01eb635..019496d 100644
  --- a/arch/powerpc/kvm/booke.c
  +++ b/arch/powerpc/kvm/booke.c
  @@ -1570,6 +1570,22 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
case KVM_REG_PPC_DEBUG_INST:
		val = get_reg_val(reg->id, KVMPPC_INST_EHPRIV);
break;
  +#ifdef CONFIG_ALTIVEC
  + case KVM_REG_PPC_VR0 ... KVM_REG_PPC_VR31:
  + if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
  + r = -ENXIO;
  + break;
  + }
  +		val.vval = vcpu->arch.vr[reg->id - KVM_REG_PPC_VR0];
  + break;
  + case KVM_REG_PPC_VSCR:
  + if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
  + r = -ENXIO;
  + break;
  + }
  +		val = get_reg_val(reg->id, vcpu->arch.vscr.u[3]);
  + break;

 Why u[3]?

The AltiVec PEM manual says: "The VSCR has two defined bits, the AltiVec
non-Java mode (NJ) bit (VSCR[15]) and the AltiVec saturation (SAT) bit
(VSCR[31]); the remaining bits are reserved."

I think this is the reason Paul M. exposed KVM_REG_PPC_VSCR width as 32-bit.


Ugh.  It's documented as a 32-bit register in the ISA, but it can only  
be accessed via a vector register (seems like an odd design choice, but  
whatever).  And the kernel chose to represent it as a 128-bit vector,  
while KVM chose to represent it as the register (not the access  
thereto) is architected.  It would have been nice to be consistent...   
At least put in a comment explaining this.


-Scott


Re: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Scott Wood

On 07/03/2013 12:07:30 PM, Alexander Graf wrote:


On 03.07.2013, at 18:49, Caraman Mihai Claudiu-B02008 wrote:

 Do we need to do this even when the guest doesn't use Altivec?  
Can't

 we
 just load it on demand then once we fault? This code path really
 should
 only be a prefetch enable when MSR_VEC is already set in guest
 context.

 No we can't, read 6/6.

 So we have to make sure we're completely unlazy when it comes to a  
KVM

 guest. Are we?

 Yes, because MSR[SPV] is under its control.

Oh, sure, KVM wants it unlazy. That part is obvious. But does the  
kernel always give us unlazyness? The way I read the code, process.c  
goes lazy when !CONFIG_SMP.


So the big question is why we're manually enforcing FPU giveup, but  
not Altivec giveup? One of the 2 probably is wrong :).


Why do you think we're not enforcing it for Altivec?  Is there some  
specific piece of code you're referring to that is different in this  
regard?


-Scott


Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Scott Wood

On 07/03/2013 07:42:36 AM, Mihai Caraman wrote:

Increase FPU laziness by calling kvmppc_load_guest_fp() just before
returning to guest instead of on each sched-in. Without this improvement
an interrupt may also claim floating point, corrupting guest state.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 113961f..3cae2e3 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);

} else {
kvmppc_lazy_ee_enable();
+   kvmppc_load_guest_fp(vcpu);
}
}

diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
index 19c8379..09da1ac 100644
--- a/arch/powerpc/kvm/e500mc.c
+++ b/arch/powerpc/kvm/e500mc.c
@@ -143,8 +143,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)

kvmppc_e500_tlbil_all(vcpu_e500);
__get_cpu_var(last_vcpu_on_cpu) = vcpu;
}
-
-   kvmppc_load_guest_fp(vcpu);
 }

 void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)


Can we now remove vcpu->fpu_active, and the comment that says "Kernel usage
of FP (via enable_kernel_fp()) in this thread must not occur while
vcpu->fpu_active is set."?


-Scott


Re: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Scott Wood

On 07/03/2013 07:42:37 AM, Mihai Caraman wrote:
Add KVM Book3E AltiVec support. KVM Book3E FPU support gracefully reuses
host infrastructure, so follow the same approach for AltiVec.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/booke.c |   72  
-

 1 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 3cae2e3..c3c3af6 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -98,6 +98,19 @@ static inline bool kvmppc_supports_spe(void)
return false;
 }

+/*
+ * Always returns true if the AltiVec unit is present, see
+ * kvmppc_core_check_processor_compat().
+ */
+static inline bool kvmppc_supports_altivec(void)
+{
+#ifdef CONFIG_ALTIVEC
+   if (cpu_has_feature(CPU_FTR_ALTIVEC))
+   return true;
+#endif
+   return false;
+}


Whitespace.

-Scott


Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 19:44, Scott Wood wrote:

 On 07/03/2013 12:23:16 PM, Alexander Graf wrote:
 On 03.07.2013, at 19:18, Scott Wood wrote:
  On 07/03/2013 07:42:36 AM, Mihai Caraman wrote:
  Increase FPU laziness by calling kvmppc_load_guest_fp() just before
  returning to guest instead of on each sched-in. Without this improvement
  an interrupt may also claim floating point, corrupting guest state.
  Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
  ---
  arch/powerpc/kvm/booke.c  |1 +
  arch/powerpc/kvm/e500mc.c |2 --
  2 files changed, 1 insertions(+), 2 deletions(-)
  diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
  index 113961f..3cae2e3 100644
  --- a/arch/powerpc/kvm/booke.c
  +++ b/arch/powerpc/kvm/booke.c
  @@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
  	r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
} else {
kvmppc_lazy_ee_enable();
  + kvmppc_load_guest_fp(vcpu);
}
 
  This should go before the kvmppc_lazy_ee_enable().
 Why? What difference does that make? We're running with interrupts disabled 
 here, right?
 
 Yes, and we want to minimize the code we run where we have interrupts 
 disabled but the lazy ee state says they're enabled.  So 
 kvmppc_lazy_ee_enable() should be the last thing we do before entering asm 
 code.
 
 See http://patchwork.ozlabs.org/patch/249565/

Ah, cool. So we should add a comment saying that this should be the last thing 
before entering asm code then :). That way we make sure nobody else repeats the 
same mistake.


Alex



Re: [PATCH 3/6] KVM: PPC: Book3E: Increase FPU laziness

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 20:37, Scott Wood wrote:

 On 07/03/2013 07:42:36 AM, Mihai Caraman wrote:
 Increase FPU laziness by calling kvmppc_load_guest_fp() just before
 returning to guest instead of on each sched-in. Without this improvement
 an interrupt may also claim floating point, corrupting guest state.
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/booke.c  |1 +
 arch/powerpc/kvm/e500mc.c |2 --
 2 files changed, 1 insertions(+), 2 deletions(-)
 diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
 index 113961f..3cae2e3 100644
 --- a/arch/powerpc/kvm/booke.c
 +++ b/arch/powerpc/kvm/booke.c
 @@ -1204,6 +1204,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
  	r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
  } else {
  kvmppc_lazy_ee_enable();
 +kvmppc_load_guest_fp(vcpu);
  }
  }
 diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
 index 19c8379..09da1ac 100644
 --- a/arch/powerpc/kvm/e500mc.c
 +++ b/arch/powerpc/kvm/e500mc.c
 @@ -143,8 +143,6 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
  kvmppc_e500_tlbil_all(vcpu_e500);
  __get_cpu_var(last_vcpu_on_cpu) = vcpu;
  }
 -
 -kvmppc_load_guest_fp(vcpu);
 }
 void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
 
 Can we now remove vcpu->fpu_active, and the comment that says "Kernel usage
 of FP (via enable_kernel_fp()) in this thread must not occur while
 vcpu->fpu_active is set."?

I think so, yes.


Alex



Re: [PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 20:28, Scott Wood wrote:

 On 07/03/2013 10:13:57 AM, Alexander Graf wrote:
 On 03.07.2013, at 15:53, Caraman Mihai Claudiu-B02008 wrote:
  -#ifdef CONFIG_SPE
  	case BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL: {
  -		if (vcpu->arch.shared->msr & MSR_SPE)
  -			kvmppc_vcpu_enable_spe(vcpu);
  -		else
  -			kvmppc_booke_queue_irqprio(vcpu,
  -				BOOKE_IRQPRIO_SPE_ALTIVEC_UNAVAIL);
  +		if (kvmppc_supports_spe()) {
  +			bool enabled = false;
  +
  +#ifndef CONFIG_KVM_BOOKE_HV
  +			if (vcpu->arch.shared->msr & MSR_SPE) {
  +				kvmppc_vcpu_enable_spe(vcpu);
  +				enabled = true;
  +			}
  +#endif
 
  Why the #ifdef? On HV capable systems kvmppc_supports_spe() will just
  always return false.
 
  AltiVec and SPE unavailable exceptions follows the same path. While
  kvmppc_supports_spe() will always return false kvmppc_supports_altivec()
  may not.
 There is no chip that supports SPE and HV at the same time. So we'll never 
 hit this anyway, since kvmppc_supports_spe() always returns false on HV 
 capable systems.
 Just add a comment saying so and remove the ifdef :).
 
 kvmppc_vcpu_enable_spe isn't defined unless CONFIG_SPE is defined.  More 
 seriously, MSR_SPE is the same as MSR_VEC, so we shouldn't interpret it as 
 SPE unless CONFIG_SPE is defined.  And we can't rely on the if 
 (kvmppc_supports_spe()) here because a later patch changes it to if 
 (kvmppc_supports_altivec() || kvmppc_supports_spe()).  So I think we still 
 need the ifdef CONFIG_SPE here.
 
 As for the HV ifndef, we should try not to confuse HV/PR with e500mc/e500v2, 
 even if we happen to only run HV on e500mc and PR on e500v2.  We would not 
 want to call kvmppc_vcpu_enable_spe() here on a hypothetical HV target with 
 SPE.  And we *would* want to call kvmppc_vcpu_enable_fp() here on a 
 hypothetical PR target with normal FP.  It's one thing to leave out the 
 latter, since it would involve writing actual code that we have no way to 
 test at this point, but quite another to leave out the proper conditions for 
 when we want to run code that we do have.

So we should make this an #ifdef CONFIG_SPE rather than #ifndef 
CONFIG_KVM_BOOKE_HV?


Alex



Re: [PATCH 2/6] KVM: PPC: Book3E: Refactor SPE/FP exit handling

2013-07-03 Thread Scott Wood

On 07/03/2013 01:42:12 PM, Alexander Graf wrote:


On 03.07.2013, at 20:28, Scott Wood wrote:

 On 07/03/2013 10:13:57 AM, Alexander Graf wrote:
 There is no chip that supports SPE and HV at the same time. So  
we'll never hit this anyway, since kvmppc_supports_spe() always  
returns false on HV capable systems.

 Just add a comment saying so and remove the ifdef :).

 kvmppc_vcpu_enable_spe isn't defined unless CONFIG_SPE is defined.   
More seriously, MSR_SPE is the same as MSR_VEC, so we shouldn't  
interpret it as SPE unless CONFIG_SPE is defined.  And we can't rely  
on the if (kvmppc_supports_spe()) here because a later patch  
changes it to if (kvmppc_supports_altivec() ||  
kvmppc_supports_spe()).  So I think we still need the ifdef  
CONFIG_SPE here.


 As for the HV ifndef, we should try not to confuse HV/PR with  
e500mc/e500v2, even if we happen to only run HV on e500mc and PR on  
e500v2.  We would not want to call kvmppc_vcpu_enable_spe() here on a  
hypothetical HV target with SPE.  And we *would* want to call  
kvmppc_vcpu_enable_fp() here on a hypothetical PR target with normal  
FP.  It's one thing to leave out the latter, since it would involve  
writing actual code that we have no way to test at this point, but  
quite another to leave out the proper conditions for when we want to  
run code that we do have.


So we should make this an #ifdef CONFIG_SPE rather than #ifndef  
CONFIG_KVM_BOOKE_HV?


I think it should be #if !defined(CONFIG_KVM_BOOKE_HV) && defined(CONFIG_SPE)
for the reasons I described in my second paragraph.


-Scott


Re: [PATCH 4/6] KVM: PPC: Book3E: Add AltiVec support

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 20:36, Scott Wood wrote:

 On 07/03/2013 12:07:30 PM, Alexander Graf wrote:
 On 03.07.2013, at 18:49, Caraman Mihai Claudiu-B02008 wrote:
  Do we need to do this even when the guest doesn't use Altivec? Can't
  we
  just load it on demand then once we fault? This code path really
  should
  only be a prefetch enable when MSR_VEC is already set in guest
  context.
 
  No we can't, read 6/6.
 
  So we have to make sure we're completely unlazy when it comes to a KVM
  guest. Are we?
 
  Yes, because MSR[SPV] is under its control.
 Oh, sure, KVM wants it unlazy. That part is obvious. But does the kernel 
 always give us unlazyness? The way I read the code, process.c goes lazy when 
 !CONFIG_SMP.
 So the big question is why we're manually enforcing FPU giveup, but not 
 Altivec giveup? One of the 2 probably is wrong :).
 
 Why do you think we're not enforcing it for Altivec?  Is there some specific 
 piece of code you're referring to that is different in this regard?

Well, apparently because I misread the code :). All is well.


Alex



Re: [PATCH 1/2] DMA: Freescale: Add new 8-channel DMA engine device tree nodes

2013-07-03 Thread Scott Wood

On 07/03/2013 02:48:59 AM, Hongbo Zhang wrote:

On 07/03/2013 11:53 AM, Hongbo Zhang wrote:

hmm...add the devicetree-disc...@lists.ozlabs.org into list.

Note that we are discussing a better naming for this new compatible  
property in the corresponding [PATCH 2/2], so I will resend a v2 of  
this patch.



On 07/01/2013 11:46 AM, hongbo.zh...@freescale.com wrote:

From: Hongbo Zhang hongbo.zh...@freescale.com

Freescale QorIQ T4 and B4 introduce new 8-channel DMA engines; this patch
adds the device tree nodes for them.

Signed-off-by: Hongbo Zhang hongbo.zh...@freescale.com
---
  arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi |   90 +++
  arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi |   90 +++

  arch/powerpc/boot/dts/fsl/t4240si-post.dtsi |4 +-
  3 files changed, 182 insertions(+), 2 deletions(-)
  create mode 100644 arch/powerpc/boot/dts/fsl/qoriq-dma2-0.dtsi
  create mode 100644 arch/powerpc/boot/dts/fsl/qoriq-dma2-1.dtsi

Scott, any comment of these two file names?


There's dma2 again...

How about elo3-dma-n.dtsi?

-Scott


Re: [PATCH 2/2] powerpc/85xx: add the P1020RDB-PD DTS support

2013-07-03 Thread Scott Wood

On 06/30/2013 11:12:23 PM, Haijun Zhang wrote:

From: Haijun.Zhang haijun.zh...@freescale.com

Overview of P1020RDB-PD device:
- DDR3 2GB
- NOR flash 64MB
- NAND flash 128MB
- SPI flash 16MB
- I2C EEPROM 256Kb
- eTSEC1 (RGMII PHY) connected to VSC7385 L2 switch
- eTSEC2 (SGMII PHY)
- eTSEC3 (RGMII PHY)
- SDHC
- 2 USB ports
- 4 TDM ports
- PCIe

Signed-off-by: Haijun Zhang haijun.zh...@freescale.com
Signed-off-by: Jerry Huang chang-ming.hu...@freescale.com
CC: Scott Wood scottw...@freescale.com
---
 arch/powerpc/boot/dts/p1020rdb-pd.dtsi    | 257 ++
 arch/powerpc/boot/dts/p1020rdb-pd_32b.dts |  90 +++
 2 files changed, 347 insertions(+)
 create mode 100644 arch/powerpc/boot/dts/p1020rdb-pd.dtsi
 create mode 100644 arch/powerpc/boot/dts/p1020rdb-pd_32b.dts


What about 36b?


+	cpld@2,0 {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		compatible = "cpld";
+		reg = <0x2 0x0 0x2>;
+		read-only;
+	};


Where does "cpld" as a compatible come from (it's way too vague)?  What is
"read-only" supposed to mean here?


Why do you have #address-cells/#size-cells if there are no child nodes?

-Scott


Re: [PATCH 2/2] KVM: PPC: Book3E: Add LRAT error exception handler

2013-07-03 Thread Scott Wood

On 07/03/2013 11:56:06 AM, Mihai Caraman wrote:
With the LRAT (Logical to Real Address Translation) error exception
handler in the kernel, KVM needs to add the counterpart, otherwise the
build will break.

Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
---
 arch/powerpc/kvm/bookehv_interrupts.S |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)


Please combine these two patches to avoid breaking bisectability.

-Scott


[PATCH] rapidio: change endpoint device name format

2013-07-03 Thread Alexandre Bounine
Change endpoint device name format to use a component tag value instead of
device destination ID.

RapidIO specification defines a component tag to be a unique identifier
for devices in a network. RapidIO switches already use component tag as
part of their device name and also use it for device identification when
processing error management event notifications.

Forming an endpoint's device name from its component tag instead of its
destination ID allows the sysfs device directories to stay unchanged if a
routing process dynamically changes the endpoint's destination ID as a
result of route optimization.

This change should not affect any existing users because a valid device
destination ID always should be obtained by reading destid attribute and
not by parsing device name.

This patch also removes switchid member from struct rio_switch because it
simply duplicates the component tag and does not have other use than in
device name generation.

Signed-off-by: Alexandre Bounine alexandre.boun...@idt.com
Cc: Matt Porter mpor...@kernel.crashing.org
Cc: Li Yang le...@freescale.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Andre van Herk andre.van.h...@prodrive.nl
Cc: Micha Nelissen micha.nelis...@prodrive.nl
Cc: Stef van Os stef.van...@prodrive.nl
---
 drivers/rapidio/rio-scan.c |5 ++---
 include/linux/rio.h|2 --
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
index 0e86569..c744800 100644
--- a/drivers/rapidio/rio-scan.c
+++ b/drivers/rapidio/rio-scan.c
@@ -433,7 +433,6 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
/* If a PE has both switch and other functions, show it as a switch */
if (rio_is_switch(rdev)) {
	rswitch = rdev->rswitch;
-	rswitch->switchid = rdev->comp_tag & RIO_CTAG_UDEVID;
	rswitch->port_ok = 0;
	spin_lock_init(&rswitch->lock);
	rswitch->route_table = kzalloc(sizeof(u8)*
@@ -446,7 +445,7 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
rdid++)
		rswitch->route_table[rdid] = RIO_INVALID_ROUTE;
	dev_set_name(&rdev->dev, "%02x:s:%04x", rdev->net->id,
-		     rswitch->switchid);
+		     rdev->comp_tag & RIO_CTAG_UDEVID);
 
if (do_enum)
rio_route_clr_table(rdev, RIO_GLOBAL_TABLE, 0);
@@ -459,7 +458,7 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
rio_enable_rx_tx_port(port, 0, destid, hopcount, 0);
 
	dev_set_name(&rdev->dev, "%02x:e:%04x", rdev->net->id,
-		     rdev->destid);
+		     rdev->comp_tag & RIO_CTAG_UDEVID);
}
 
rio_attach_device(rdev);
diff --git a/include/linux/rio.h b/include/linux/rio.h
index e2faf7b..b71d573 100644
--- a/include/linux/rio.h
+++ b/include/linux/rio.h
@@ -92,7 +92,6 @@ union rio_pw_msg;
 /**
  * struct rio_switch - RIO switch info
  * @node: Node in global list of switches
- * @switchid: Switch ID that is unique across a network
  * @route_table: Copy of switch routing table
  * @port_ok: Status of each port (one bit per port) - OK=1 or UNINIT=0
  * @ops: pointer to switch-specific operations
@@ -101,7 +100,6 @@ union rio_pw_msg;
  */
 struct rio_switch {
struct list_head node;
-   u16 switchid;
u8 *route_table;
u32 port_ok;
struct rio_switch_ops *ops;
-- 
1.7.8.4



Re: [PATCH 1/2] powerpc/booke64: Add LRAT error exception handler

2013-07-03 Thread Scott Wood

On 07/03/2013 11:56:05 AM, Mihai Caraman wrote:

@@ -1410,6 +1423,7 @@ _GLOBAL(setup_doorbell_ivors)
 _GLOBAL(setup_ehv_ivors)
SET_IVOR(40, 0x300) /* Embedded Hypervisor System Call */
SET_IVOR(41, 0x320) /* Embedded Hypervisor Privilege */
+   SET_IVOR(42, 0x340) /* LRAT Error */


What happens if we write to IVOR42 on e5500?  If the answer is no-op,  
is that behavior guaranteed on any CPU with E.HV but not LRAT?


-Scott


Re: [PATCH 1/2] powerpc: enable the relocatable support for the fsl booke 32bit kernel

2013-07-03 Thread Scott Wood

On 07/02/2013 10:00:44 PM, Kevin Hao wrote:

On Tue, Jul 02, 2013 at 05:39:18PM -0500, Scott Wood wrote:
 How much overhead (space and time) is this really?

The following are the additional sections when relocatable is enabled for
a p2020rdb board.
   section       size
  .dynsym   07f0
  .dynstr   0926
  .dynamic  0080
  .hash 0388
  .interp   0011
  .rela.dyn 00215250

The time for the relocation is about 32ms on a p2020rdb board.


Hmm... more relocations than I expected.  What percentage is this of  
the total image size?


-Scott


[PATCH] Emulate sync instruction variants

2013-07-03 Thread James.Yang
From: James Yang james.y...@freescale.com

Reserved fields of the sync instruction have been used for other
instructions (e.g. lwsync).  On processors that do not support variants
of the sync instruction, emulate it by executing a sync to subsume the
effect of the intended instruction.

Signed-off-by: James Yang james.y...@freescale.com
---
 arch/powerpc/include/asm/ppc-opcode.h |2 ++
 arch/powerpc/kernel/traps.c   |7 +++
 2 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h 
b/arch/powerpc/include/asm/ppc-opcode.h
index eccfc16..0142eb2 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -96,6 +96,8 @@
 #define PPC_INST_LSWX  0x7c00042a
 #define PPC_INST_LWARX			0x7c000028
 #define PPC_INST_LWSYNC0x7c2004ac
+#define PPC_INST_SYNC  0x7c0004ac
+#define PPC_INST_SYNC_MASK 0xfc0007fe
 #define PPC_INST_LXVD2X0x7c000698
 #define PPC_INST_MCRXR 0x7c000400
 #define PPC_INST_MCRXR_MASK0xfc0007fe
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index a98adc7..c3ceaa2 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1018,6 +1018,13 @@ static int emulate_instruction(struct pt_regs *regs)
return emulate_isel(regs, instword);
}
 
+	/* Emulate sync instruction variants */
+	if ((instword & PPC_INST_SYNC_MASK) == PPC_INST_SYNC) {
+		PPC_WARN_EMULATED(sync, regs);
+		asm volatile("sync");
+		return 0;
+	}
+
 #ifdef CONFIG_PPC64
	/* Emulate the mfspr rD, DSCR. */
	if ((instword & PPC_INST_MFSPR_DSCR_USER_MASK) ==
-- 
1.7.0.4




Re: [PATCH 2/2] KVM: PPC: Book3E: Add LRAT error exception handler

2013-07-03 Thread Alexander Graf

On 03.07.2013, at 22:16, Scott Wood wrote:

 On 07/03/2013 11:56:06 AM, Mihai Caraman wrote:
 With the LRAT (Logical to Real Address Translation) error exception handler
 in the kernel, KVM needs to add the counterpart, otherwise the build will
 break.
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/bookehv_interrupts.S |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)
 
 Please combine these two patches to avoid breaking bisectability.

Why does the split break bisectability?


Alex



Re: [PATCH 2/2] KVM: PPC: Book3E: Add LRAT error exception handler

2013-07-03 Thread Scott Wood

On 07/03/2013 04:42:40 PM, Alexander Graf wrote:


On 03.07.2013, at 22:16, Scott Wood wrote:

 On 07/03/2013 11:56:06 AM, Mihai Caraman wrote:
 With the LRAT (Logical to Real Address Translation) error exception handler
 in the kernel, KVM needs to add the counterpart, otherwise the build will
 break.
 Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
 ---
 arch/powerpc/kvm/bookehv_interrupts.S |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

 Please combine these two patches to avoid breaking bisectability.

Why does the split break bisectability?


Same reason as the altivec build breakage from earlier.  If we add a  
new exception type but not a KVM handler for it, the kernel won't link.


-Scott


[git pull] Please pull powerpc.git next branch

2013-07-03 Thread Benjamin Herrenschmidt
Hi Linus !

This is the powerpc changes for the 3.11 merge window. In addition to
the usual bug fixes and small updates, the main highlights are:

 - Support for transparent huge pages by Aneesh Kumar for 64-bit server
processors. This allows the use of 16M pages as transparent huge pages
on kernels compiled with a 64K base page size.

 - Base VFIO support for KVM on power by Alexey Kardashevskiy

 - Wiring up of our nvram to the pstore infrastructure, including
putting compressed oopses in there by Aruna Balakrishnaiah

 - Move, rework and improve our EEH (basically PCI error handling
and recovery) infrastructure. It is no longer specific to pseries but is
now usable by the new powernv platform as well (no hypervisor) by
Gavin Shan.

 - I fixed some bugs in our math-emu instruction decoding and made it
usable to emulate some optional FP instructions on processors with hard
FP that lack them (such as fsqrt on Freescale embedded processors).

 - Support for Power8 Event Based Branch facility by Michael Ellerman.
This facility allows what is basically userspace interrupts for
performance monitor events.

 - A bunch of Transactional Memory vs. Signals bug fixes and HW
breakpoint/watchpoint fixes by Michael Neuling.

And more ... I apologize in advance if I've failed to highlight
something that somebody deemed worth it.

Cheers,
Ben.

The following changes since commit 8bb495e3f02401ee6f76d1b1d77f3ac9f079e376:

  Linux 3.10 (2013-06-30 15:13:29 -0700)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git next

for you to fetch changes up to 1d8b368ab4aacfc3f864655baad4d31a3028ec1a:

  pstore: Add hsize argument in write_buf call of pstore_ftrace_call 
(2013-07-02 18:39:37 +1000)


Aaro Koskinen (1):
  powerpc/windfarm: Fix overtemperature clearing

Alexey Kardashevskiy (3):
  powerpc/vfio: Enable on PowerNV platform
  powerpc/vfio: Implement IOMMU driver for VFIO
  powerpc/vfio: Enable on pSeries platform

Alistair Popple (3):
  powerpc: Add a configuration option for early BootX/OpenFirmware debug
  powerpc: Update default configurations
  powerpc: Update currituck pci/usb fixup for new board revision

Anatolij Gustschin (1):
  powerpc/mpc512x: enable USB support in defconfig

Aneesh Kumar K.V (20):
  mm/thp: use the correct function when updating access flags
  mm/THP: add pmd args to pgtable deposit and withdraw APIs
  mm/THP: withdraw the pgtable after pmdp related operations
  mm/THP: don't use HPAGE_SHIFT in transparent hugepage code
  mm/THP: deposit the transpare huge pgtable before set_pmd
  powerpc/mm: handle hugepage size correctly when invalidating hpte entries
  powerpc/THP: Double the PMD table size for THP
  powerpc/THP: Implement transparent hugepages for ppc64
  powerpc: move find_linux_pte_or_hugepte and gup_hugepte to common code
  powerpc: Update find_linux_pte_or_hugepte to handle transparent hugepages
  powerpc: Replace find_linux_pte with find_linux_pte_or_hugepte
  powerpc/kvm: Handle transparent hugepage in KVM
  powerpc: Update gup_pmd_range to handle transparent hugepages
  powerpc/THP: Add code to handle HPTE faults for hugepages
  powerpc: Make linux pagetable walk safe with THP enabled
  powerpc: Prevent gcc to re-read the pagetables
  powerpc: disable assert_pte_locked for collapse_huge_page
  powerpc: split hugepage when using subpage protection
  powerpc/THP: Enable THP on PPC64
  powerpc: Optimize hugepage invalidate

Anton Blanchard (1):
  powerpc: Align thread->fpr to 16 bytes

Aruna Balakrishnaiah (13):
  powerpc/pseries: Remove syslog prefix in uncompressed oops text
  powerpc/pseries: Add version and timestamp to oops header
  powerpc/pseries: Introduce generic read function to read nvram-partitions
  powerpc/pseries: Read/Write oops nvram partition via pstore
  powerpc/pseries: Read rtas partition via pstore
  powerpc/pseries: Distinguish between a os-partition and non-os partition
  powerpc/pseries: Read of-config partition via pstore
  powerpc/pseries: Read common partition via pstore
  powerpc/pseries: Enable PSTORE in pseries_defconfig
  pstore: Pass header size in the pstore write callback
  powerpc/pseries: Re-organise the oops compression code
  powerpc/pseries: Support compression of oops text via pstore
  pstore: Add hsize argument in write_buf call of pstore_ftrace_call

Benjamin Herrenschmidt (8):
  powerpc/math-emu: Fix decoding of some instructions
  powerpc/math-emu: Allow math-emu to be used for HW FPU
  powerpc/8xx: Remove 8xx specific minimal FPU emulation
  powerpc/powernv: Fix iommu initialization again
  powerpc: Handle both new style and old style reserve maps

Bharat Bhushan (2):
  powerpc: Debug control and status registers are 32bit
  

[PATCH v3 04/25] powerpc: Change how dentry's d_lock field is accessed

2013-07-03 Thread Waiman Long
Because of the changes made in dcache.h header file, files that
use the d_lock field of the dentry structure need to be changed
accordingly. All the d_lock's spin_lock() and spin_unlock() calls
are replaced by the corresponding d_lock() and d_unlock() calls.
There is no change in logic and everything should just work.

Signed-off-by: Waiman Long waiman.l...@hp.com
---
 arch/powerpc/platforms/cell/spufs/inode.c |6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/cell/spufs/inode.c 
b/arch/powerpc/platforms/cell/spufs/inode.c
index 35f77a4..3597c4b 100644
--- a/arch/powerpc/platforms/cell/spufs/inode.c
+++ b/arch/powerpc/platforms/cell/spufs/inode.c
@@ -165,18 +165,18 @@ static void spufs_prune_dir(struct dentry *dir)
 
 	mutex_lock(&dir->d_inode->i_mutex);
 	list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_u.d_child) {
-		spin_lock(&dentry->d_lock);
+		d_lock(dentry);
 		if (!(d_unhashed(dentry)) && dentry->d_inode) {
 			dget_dlock(dentry);
 			__d_drop(dentry);
-			spin_unlock(&dentry->d_lock);
+			d_unlock(dentry);
 			simple_unlink(dir->d_inode, dentry);
 			/* XXX: what was dcache_lock protecting here? Other
 			 * filesystems (IB, configfs) release dcache_lock
 			 * before unlink */
 			dput(dentry);
 		} else {
-			spin_unlock(&dentry->d_lock);
+			d_unlock(dentry);
 		}
 	}
 	shrink_dcache_parent(dir);
-- 
1.7.1



Re: [PATCH 1/2] powerpc: enable the relocatable support for the fsl booke 32bit kernel

2013-07-03 Thread Kevin Hao
On Wed, Jul 03, 2013 at 03:38:27PM -0500, Scott Wood wrote:
> On 07/02/2013 10:00:44 PM, Kevin Hao wrote:
> > On Tue, Jul 02, 2013 at 05:39:18PM -0500, Scott Wood wrote:
> > > How much overhead (space and time) is this really?
> >
> > The following are the additional sections when relocatable is
> > enabled for a p2020rdb board.
> >       section         size
> >       .dynsym         07f0
> >       .dynstr         0926
> >       .dynamic        0080
> >       .hash           0388
> >       .interp         0011
> >       .rela.dyn       00215250
> >
> > The time for the relocation is about 32ms on a p2020rdb board.
>
> Hmm... more relocations than I expected.  What percentage is this of
> the total image size?

The size of vmlinux.bin is about 10M, so the relocation sections are about
20% of the total. But look on the bright side of things: all the relocation
stuff is in init sections and is discarded at runtime. :-)

Thanks,
Kevin

 
 -Scott



Re: [PATCH 2/2] powerpc/85xx: add the P1020RDB-PD DTS support

2013-07-03 Thread Zhang Haijun-B42677


Regards & Thanks

Haijun.


From: Wood Scott-B07421
Sent: July 3, 2013 19:09
To: Zhang Haijun-B42677
Cc: ga...@kernel.crashing.org; linuxppc-dev@lists.ozlabs.org; Zhang Haijun-B42677; Huang Changming-R66093
Subject: Re: [PATCH 2/2] powerpc/85xx: add the P1020RDB-PD DTS support

On 06/30/2013 11:12:23 PM, Haijun Zhang wrote:
 From: Haijun.Zhang haijun.zh...@freescale.com

 Overview of P1020RDB-PD device:
 - DDR3 2GB
 - NOR flash 64MB
 - NAND flash 128MB
 - SPI flash 16MB
 - I2C EEPROM 256Kb
 - eTSEC1 (RGMII PHY) connected to VSC7385 L2 switch
 - eTSEC2 (SGMII PHY)
 - eTSEC3 (RGMII PHY)
 - SDHC
 - 2 USB ports
 - 4 TDM ports
 - PCIe

 Signed-off-by: Haijun Zhang haijun.zh...@freescale.com
 Signed-off-by: Jerry Huang chang-ming.hu...@freescale.com
 CC: Scott Wood scottw...@freescale.com
 ---
  arch/powerpc/boot/dts/p1020rdb-pd.dtsi| 257
 ++
  arch/powerpc/boot/dts/p1020rdb-pd_32b.dts |  90 +++
  2 files changed, 347 insertions(+)
  create mode 100644 arch/powerpc/boot/dts/p1020rdb-pd.dtsi
  create mode 100644 arch/powerpc/boot/dts/p1020rdb-pd_32b.dts

What about 36b?

Haijun: The 2GB DDR is fixed on the P1020RDB-PD board, so there is no need
for 36-bit support. There is also no 36-bit u-boot support.


 + cpld@2,0 {
 +	#address-cells = <1>;
 +	#size-cells = <1>;
 +	compatible = "cpld";
 +	reg = <0x2 0x0 0x2>;
 +	read-only;
 + };

Where does "cpld" as a compatible come from (it's way too vague)?  What
is "read-only" supposed to mean here?

Haijun: In fact almost every board has its own special CPLD. This node was just
copied from the p1020rdb-pc board.

So, change it to :

cpld@2,0 {
	compatible = "fsl,p1020rdb-cpld";
	reg = <0x2 0x0 0x2>;
	read-only;
};

The CPLD is just like an EEPROM: it contains configuration information about the
TDM ports, LEDs, power, watchdog, FXO, FXS and so on. We change it under u-boot;
when the kernel is up we only need to read it, and it is not supposed to be
changed.

Why do you have #address-cells/#size-cells if there are no child nodes?

I'll remove this.


-Scott

[PATCH] powerpc: Use ibm,chip-id property to compute cpu_core_mask if available

2013-07-03 Thread Paul Mackerras
Some systems have an ibm,chip-id property in the cpu nodes in the
device tree.  On these systems, we now use that to compute the
cpu_core_mask (i.e. the set of core siblings) rather than looking
at cache properties.

Signed-off-by: Paul Mackerras pau...@samba.org
---
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index b72d8c9..3b7a118 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -587,6 +587,32 @@ int cpu_first_thread_of_core(int core)
 }
 EXPORT_SYMBOL_GPL(cpu_first_thread_of_core);
 
+static void traverse_siblings_chip_id(int cpu, int add, int chipid)
+{
+   const struct cpumask *mask;
+   struct device_node *np;
+   int i, plen;
+   const int *prop;
+
+   mask = add ? cpu_online_mask : cpu_present_mask;
+   for_each_cpu(i, mask) {
+   np = of_get_cpu_node(i, NULL);
+   if (!np)
+   continue;
+   prop = of_get_property(np, "ibm,chip-id", &plen);
+   if (prop && plen == sizeof(int) && *prop == chipid) {
+   if (add) {
+   cpumask_set_cpu(cpu, cpu_core_mask(i));
+   cpumask_set_cpu(i, cpu_core_mask(cpu));
+   } else {
+   cpumask_clear_cpu(cpu, cpu_core_mask(i));
+   cpumask_clear_cpu(i, cpu_core_mask(cpu));
+   }
+   }
+   of_node_put(np);
+   }
+}
+
 /* Must be called when no change can occur to cpu_present_mask,
  * i.e. during cpu online or offline.
  */
@@ -611,14 +637,29 @@ static struct device_node *cpu_to_l2cache(int cpu)
 
 static void traverse_core_siblings(int cpu, int add)
 {
-   struct device_node *l2_cache;
+   struct device_node *l2_cache, *np;
const struct cpumask *mask;
-   int i;
+   int i, chip, plen;
+   const int *prop;
+
+   /* First see if we have ibm,chip-id properties in cpu nodes */
+   np = of_get_cpu_node(cpu, NULL);
+   if (np) {
+   chip = -1;
+   prop = of_get_property(np, "ibm,chip-id", &plen);
+   if (prop && plen == sizeof(int))
+   chip = *(int *)prop;
+   of_node_put(np);
+   if (chip >= 0) {
+   traverse_siblings_chip_id(cpu, add, chip);
+   return;
+   }
+   }
 
l2_cache = cpu_to_l2cache(cpu);
mask = add ? cpu_online_mask : cpu_present_mask;
for_each_cpu(i, mask) {
-   struct device_node *np = cpu_to_l2cache(i);
+   np = cpu_to_l2cache(i);
if (!np)
continue;
if (np == l2_cache) {


compile custom kernel for custom board with mpc5200b

2013-07-03 Thread neorf3k
Hello, I'm developing an embedded Linux system on a custom mpc5200b board at
university.

We have a problem with custom kernel versions 2.6.23 and 2.6.33. We can't use
a newer version at the moment.

We are able to compile and load the kernel on the Freescale Lite5200b and on
the custom board.

So, we have tested ethernet on both boards.
On the Freescale board the ethernet connection is OK, as tested with ping.

But on the custom board we are having problems:
with kernel 2.6.23 we have 98% packet loss.
We have tried to disable the XLB pipeline in
arch/powerpc/platforms/52xx/mpc52xx_common.c
(by just commenting out these lines):

/* Disable XLB pipelining */
 /* (cfr errate 292. We could do this only just before ATA PIO
 transaction and re-enable it afterwards ...) */
 //out_be32(&xlb->config, in_be32(&xlb->config) | MPC52xx_XLB_CFG_PLDIS);

And with this, we have 10% packet loss.

With kernel 2.6.33 we have tried the same, but whether or not we disable the
pipeline, we have 55% packet loss.

Where could I find some solutions?

Thank you

neorf

[PATCH -V2] powerpc/mm: Use the correct SLB(LLP) encoding in tlbie instruction

2013-07-03 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

The sllp value is stored in mmu_psize_defs in such a way that we can easily OR
the value to get the operand for the slbmte instruction, i.e. the L and LP bits
are not contiguous. Decode the bits and use them correctly in tlbie. This
regression was introduced by commit 1f6aaaccb1b3af8613fe45781c1aefee2ae8c6b3
("powerpc: Update tlbie/tlbiel as per ISA doc").

Reported-by: Paul Mackerras pau...@samba.org
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
Changes from V1:
* use #define constants instead of opencoded values

 arch/powerpc/mm/hash_native_64.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 0530ff7..c33d939 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -43,6 +43,7 @@ static inline void __tlbie(unsigned long vpn, int psize, int 
apsize, int ssize)
 {
unsigned long va;
unsigned int penc;
+   unsigned long sllp;
 
	/*
	 * We need 14 to 65 bits of va for a tlbie of 4K page
@@ -64,7 +65,9 @@ static inline void __tlbie(unsigned long vpn, int psize, int 
apsize, int ssize)
	/* clear out bits after (52) [0..52..63] */
	va &= ~((1ul << (64 - 52)) - 1);
	va |= ssize << 8;
-	va |= mmu_psize_defs[apsize].sllp << 6;
+	sllp = ((mmu_psize_defs[apsize].sllp & SLB_VSID_L) >> 6) |
+		((mmu_psize_defs[apsize].sllp & SLB_VSID_LP) >> 4);
+	va |= sllp << 5;
	asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
		     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
		     : "memory");
@@ -98,6 +101,7 @@ static inline void __tlbiel(unsigned long vpn, int psize, 
int apsize, int ssize)
 {
unsigned long va;
unsigned int penc;
+   unsigned long sllp;
 
	/* VPN_SHIFT can be atmost 12 */
	va = vpn << VPN_SHIFT;
@@ -113,7 +117,9 @@ static inline void __tlbiel(unsigned long vpn, int psize, 
int apsize, int ssize)
	/* clear out bits after (52) [0..52..63] */
	va &= ~((1ul << (64 - 52)) - 1);
	va |= ssize << 8;
-	va |= mmu_psize_defs[apsize].sllp << 6;
+	sllp = ((mmu_psize_defs[apsize].sllp & SLB_VSID_L) >> 6) |
+		((mmu_psize_defs[apsize].sllp & SLB_VSID_LP) >> 4);
+	va |= sllp << 5;
	asm volatile(".long 0x7c000224 | (%0 << 11) | (0 << 21)"
		     : : "r"(va) : "memory");
	break;
-- 
1.8.1.2
