[PATCH] mtd: m25p80: Make the name of mtd_info fixed

2014-01-06 Thread Hou Zhiqiang
To specify the SPI flash layout with mtdparts=... on the kernel command
line, we must give mtd_info a fixed name, because the cmdlinepart parser
matches the name given on the command line against the mtd_info name.

Currently, if an OF node is used, mtd_info's name is taken from the spi
device name. That name contains spi_master->bus_num, and
spi_master->bus_num may be assigned dynamically.
So give mtd_info a new fixed name name.cs, where name is the name of
the spi_device_id and cs is the chip select of the spi_device.
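With a fixed name, the partition layout survives bus renumbering, e.g.
(a purely illustrative layout; the flash name depends on the actual
spi_device_id on the board):

    mtdparts=m25p80.0:512k(u-boot)ro,128k(env),-(rootfs)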

Signed-off-by: Hou Zhiqiang b48...@freescale.com
---
 drivers/mtd/devices/m25p80.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
index eb558e8..d1ed480 100644
--- a/drivers/mtd/devices/m25p80.c
+++ b/drivers/mtd/devices/m25p80.c
@@ -1012,7 +1012,8 @@ static int m25p_probe(struct spi_device *spi)
 	if (data && data->name)
 		flash->mtd.name = data->name;
 	else
-		flash->mtd.name = dev_name(&spi->dev);
+		flash->mtd.name = kasprintf(GFP_KERNEL, "%s.%d",
+					    id->name, spi->chip_select);
 
 	flash->mtd.type = MTD_NORFLASH;
 	flash->mtd.writesize = 1;
-- 
1.8.4.1


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH -V3 2/2] powerpc: thp: Fix crash on mremap

2014-01-06 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This patch fixes the crash below:

NIP [c004cee4] .__hash_page_thp+0x2a4/0x440
LR [c00439ac] .hash_page+0x18c/0x5e0
...
Call Trace:
[c00736103c40] [1b00] 0x1b00(unreliable)
[437908.479693] [c00736103d50] [c00439ac] .hash_page+0x18c/0x5e0
[437908.479699] [c00736103e30] [c000924c] .do_hash_page+0x4c/0x58

On ppc64 we use the pgtable for storing the hpte slot information, and
we store the address of the pgtable at a constant offset (PTRS_PER_PMD)
from the pmd. On mremap, when we switch the pmd, we need to withdraw and
re-deposit the pgtable, so that we find the pgtable at the PTRS_PER_PMD
offset from the new pmd.

We also want to move the withdraw and deposit before the set_pmd so
that, when a page fault finds the pmd as trans huge, we can be sure the
pgtable can be located at that offset.
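
For reference, a minimal sketch of how ppc64 locates the deposited
pgtable relative to the pmd (simplified from the deposit/withdraw
helpers in arch/powerpc/mm/pgtable_64.c; the helper name here is
illustrative, not a kernel function):

static pgtable_t deposited_pgtable(pmd_t *pmdp)
{
	/*
	 * ppc64 keeps the deposited pgtable in the second half of the
	 * PMD page, a fixed PTRS_PER_PMD entries past the pmd pointer.
	 */
	pgtable_t *pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;

	return *pgtable_slot;
}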

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/pgtable-ppc64.h | 14 ++
 include/asm-generic/pgtable.h| 12 
 mm/huge_memory.c | 14 +-
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h 
b/arch/powerpc/include/asm/pgtable-ppc64.h
index 9935e9b79524..ff3afce40f3b 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -12,6 +12,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/spinlock.h>
 /*
  * This is the default implementation of various PTE accessors, it's
  * used in all cases except Book3S with 64K pages where we have a
@@ -459,5 +460,18 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct 
mm_struct *mm, pmd_t *pmdp);
 #define __HAVE_ARCH_PMDP_INVALIDATE
 extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp);
+
+#define pmd_move_must_withdraw pmd_move_must_withdraw
+static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
+spinlock_t *old_pmd_ptl)
+{
+   /*
+* Archs like ppc64 use pgtable to store per pmd
+* specific information. So when we switch the pmd,
+* we should also withdraw and deposit the pgtable
+*/
+   return true;
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_PGTABLE_PPC64_H_ */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index db0923458940..8e4f41d9af4d 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -558,6 +558,18 @@ static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
 }
 #endif
 
+#ifndef pmd_move_must_withdraw
+static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
+spinlock_t *old_pmd_ptl)
+{
+   /*
+* With split pmd lock we also need to move preallocated
+* PTE page table if new_pmd is on different PMD page table.
+*/
+   return new_pmd_ptl != old_pmd_ptl;
+}
+#endif
+
 /*
  * This function is meant to be used by sites walking pagetables with
  * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c0b17295ba0..b77bb5df4db9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1502,19 +1502,15 @@ int move_huge_pmd(struct vm_area_struct *vma, struct 
vm_area_struct *new_vma,
spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
pmd = pmdp_get_and_clear(mm, old_addr, old_pmd);
VM_BUG_ON(!pmd_none(*new_pmd));
-   set_pmd_at(mm, new_addr, new_pmd, pmd_mksoft_dirty(pmd));
-   if (new_ptl != old_ptl) {
-   pgtable_t pgtable;
 
-   /*
-* Move preallocated PTE page table if new_pmd is on
-* different PMD page table.
-*/
+   if (pmd_move_must_withdraw(new_ptl, old_ptl)) {
+   pgtable_t pgtable;
pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
-
-   spin_unlock(new_ptl);
}
+   set_pmd_at(mm, new_addr, new_pmd, pmd_mksoft_dirty(pmd));
+   if (new_ptl != old_ptl)
+   spin_unlock(new_ptl);
spin_unlock(old_ptl);
}
 out:
-- 
1.8.3.2

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH -V3 1/2] powerpc: mm: Move ppc64 page table range definitions to separate header

2014-01-06 Thread Aneesh Kumar K.V
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com

This avoids mmu-hash64.h including pgtable-ppc64.h. That inclusion
causes issues like

  CC  arch/powerpc/kernel/asm-offsets.s
In file included from 
/home/aneesh/linus/arch/powerpc/include/asm/mmu-hash64.h:23:0,
 from /home/aneesh/linus/arch/powerpc/include/asm/mmu.h:196,
 from /home/aneesh/linus/arch/powerpc/include/asm/lppaca.h:36,
 from /home/aneesh/linus/arch/powerpc/include/asm/paca.h:21,
 from /home/aneesh/linus/arch/powerpc/include/asm/hw_irq.h:41,
 from /home/aneesh/linus/arch/powerpc/include/asm/irqflags.h:11,
 from include/linux/irqflags.h:15,
 from include/linux/spinlock.h:53,
 from include/linux/seqlock.h:35,
 from include/linux/time.h:5,
 from include/uapi/linux/timex.h:56,
 from include/linux/timex.h:56,
 from include/linux/sched.h:17,
 from arch/powerpc/kernel/asm-offsets.c:17:
/home/aneesh/linus/arch/powerpc/include/asm/pgtable-ppc64.h:563:42: error: 
unknown type name ‘spinlock_t’
 static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,

Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---

NOTE: We can either do this or stick a typedef struct spinlock spinlock_t; in
pgtable-ppc64.h

 arch/powerpc/include/asm/mmu-hash64.h  |   2 +-
 arch/powerpc/include/asm/pgtable-ppc64-range.h | 101 +
 arch/powerpc/include/asm/pgtable-ppc64.h   | 101 +
 3 files changed, 103 insertions(+), 101 deletions(-)
 create mode 100644 arch/powerpc/include/asm/pgtable-ppc64-range.h

diff --git a/arch/powerpc/include/asm/mmu-hash64.h 
b/arch/powerpc/include/asm/mmu-hash64.h
index 807014dde821..895b4df31fec 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -20,7 +20,7 @@
  * need for various slices related matters. Note that this isn't the
  * complete pgtable.h but only a portion of it.
  */
-#include <asm/pgtable-ppc64.h>
+#include <asm/pgtable-ppc64-range.h>
 #include <asm/bug.h>
 
 /*
diff --git a/arch/powerpc/include/asm/pgtable-ppc64-range.h 
b/arch/powerpc/include/asm/pgtable-ppc64-range.h
new file mode 100644
index ..b48b089fb209
--- /dev/null
+++ b/arch/powerpc/include/asm/pgtable-ppc64-range.h
@@ -0,0 +1,101 @@
+#ifndef _ASM_POWERPC_PGTABLE_PPC64_RANGE_H_
+#define _ASM_POWERPC_PGTABLE_PPC64_RANGE_H_
+/*
+ * This file contains the functions and defines necessary to modify and use
+ * the ppc64 hashed page table.
+ */
+
+#ifdef CONFIG_PPC_64K_PAGES
+#include <asm/pgtable-ppc64-64k.h>
+#else
+#include <asm/pgtable-ppc64-4k.h>
+#endif
+#include <asm/barrier.h>
+
+#define FIRST_USER_ADDRESS 0
+
+/*
+ * Size of EA range mapped by our pagetables.
+ */
+#define PGTABLE_EADDR_SIZE (PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
+			    PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
+#define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define PMD_CACHE_INDEX	(PMD_INDEX_SIZE + 1)
+#else
+#define PMD_CACHE_INDEX	PMD_INDEX_SIZE
+#endif
+/*
+ * Define the address range of the kernel non-linear virtual area
+ */
+
+#ifdef CONFIG_PPC_BOOK3E
+#define KERN_VIRT_START ASM_CONST(0x8000000000000000)
+#else
+#define KERN_VIRT_START ASM_CONST(0xD000000000000000)
+#endif
+#define KERN_VIRT_SIZE	ASM_CONST(0x0000100000000000)
+
+/*
+ * The vmalloc space starts at the beginning of that region, and
+ * occupies half of it on hash CPUs and a quarter of it on Book3E
+ * (we keep a quarter for the virtual memmap)
+ */
+#define VMALLOC_START  KERN_VIRT_START
+#ifdef CONFIG_PPC_BOOK3E
+#define VMALLOC_SIZE	(KERN_VIRT_SIZE >> 2)
+#else
+#define VMALLOC_SIZE	(KERN_VIRT_SIZE >> 1)
+#endif
+#define VMALLOC_END(VMALLOC_START + VMALLOC_SIZE)
+
+/*
+ * The second half of the kernel virtual space is used for IO mappings,
+ * it's itself carved into the PIO region (ISA and PHB IO space) and
+ * the ioremap space
+ *
+ *  ISA_IO_BASE = KERN_IO_START, 64K reserved area
+ *  PHB_IO_BASE = ISA_IO_BASE + 64K to ISA_IO_BASE + 2G, PHB IO spaces
+ * IOREMAP_BASE = ISA_IO_BASE + 2G to VMALLOC_START + PGTABLE_RANGE
+ */
+#define KERN_IO_START	(KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
+#define FULL_IO_SIZE	0x80000000ul
+#define  ISA_IO_BASE	(KERN_IO_START)
+#define  ISA_IO_END	(KERN_IO_START + 0x10000ul)
+#define  PHB_IO_BASE	(ISA_IO_END)
+#define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
+#define IOREMAP_BASE	(PHB_IO_END)
+#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)
+
+
+/*
+ * Region IDs
+ */
+#define REGION_SHIFT	60UL
+#define REGION_MASK	(0xfUL << REGION_SHIFT)
+#define REGION_ID(ea)	(((unsigned long)(ea)) >> REGION_SHIFT)
+
+#define VMALLOC_REGION_ID  (REGION_ID(VMALLOC_START))
+#define KERNEL_REGION_ID   

Re: Build regressions/improvements in v3.13-rc7

2014-01-06 Thread Geert Uytterhoeven
On Mon, Jan 6, 2014 at 10:01 AM, Geert Uytterhoeven
ge...@linux-m68k.org wrote:
 JFYI, when comparing v3.13-rc7[1] to v3.13-rc6[3], the summaries are:
   - build errors: +14/-4

  + /scratch/kisskb/src/arch/sh/mm/cache-sh4.c: error:
'cached_to_uncached' undeclared (first use in this function):  => 99:17
  + /scratch/kisskb/src/arch/sh/mm/cache-sh4.c: error: implicit
declaration of function 'cpu_context'
[-Werror=implicit-function-declaration]:  => 192:2
  + /scratch/kisskb/src/drivers/mtd/maps/vmu-flash.c: error: (near
initialization for 'vmu_flash_driver.drv'):  => 805:3, 803:3, 804:3
  + /scratch/kisskb/src/drivers/mtd/maps/vmu-flash.c: error: expected
declaration specifiers or '...' before string constant:  => 824:20,
822:16, 823:15
  + /scratch/kisskb/src/drivers/mtd/maps/vmu-flash.c: error: field
name not in record or union initializer:  => 805:3, 803:3, 804:3
  + /scratch/kisskb/src/include/linux/maple.h: error: field 'dev' has
incomplete type:  => 80:16
  + /scratch/kisskb/src/include/linux/maple.h: error: field 'drv' has
incomplete type:  => 85:23

sh-randconfig

  + /scratch/kisskb/src/drivers/tty/serial/nwpserial.c: error:
implicit declaration of function 'udelay'
[-Werror=implicit-function-declaration]:  => 53:3
  + error: No rule to make target drivers/scsi/aic7xxx/aicasm/*.[chyl]:  => N/A

powerpc-randconfig

  + error: No rule to make target /etc/sound/msndinit.bin:  => N/A
  + error: No rule to make target /etc/sound/msndperm.bin:  => N/A
  + error: No rule to make target /etc/sound/pndsperm.bin:  => N/A
  + error: No rule to make target /etc/sound/pndspini.bin:  => N/A

i386-randconfig

 [1] http://kisskb.ellerman.id.au/kisskb/head/7037/ (119 out of 120 configs)
 [3] http://kisskb.ellerman.id.au/kisskb/head/7026/ (119 out of 120 configs)

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say programmer or something like that.
-- Linus Torvalds
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


linux-next: build failure after merge of the final tree

2014-01-06 Thread Stephen Rothwell
Hi all,

After merging the final tree, today's linux-next build (powerpc
allyesconfig) failed like this:

arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
arch/powerpc/kernel/exceptions-64s.S:1312: Error: attempt to move .org backwards

The last time I got this error, I needed to apply the patch "powerpc:
Fix 'attempt to move .org backwards' error", but that has been included
in the powerpc tree now, so I guess something else has added code in a
critical place. :-(

I have just left this broken for today.
-- 
Cheers,
Stephen Rothwell s...@canb.auug.org.au


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

Re: [PATCH v2 0/9] cpuidle: rework device state count handling

2014-01-06 Thread Rafael J. Wysocki
On Friday, December 20, 2013 07:47:22 PM Bartlomiej Zolnierkiewicz wrote:
 Hi,
 
 Some cpuidle drivers assume that the cpuidle core will handle cases where
 device->state_count is smaller than driver->state_count. Unfortunately,
 this is currently untrue (device->state_count is used only for handling
 cpuidle state sysfs entries and driver->state_count is used for all
 other cases), and it will not be fixed in the future, as device->state_count
 is planned to be removed [1].
 
 This patchset fixes such drivers (the ARM EXYNOS cpuidle driver and the ACPI
 cpuidle driver), removes superfluous device->state_count initialization
 from drivers for which device->state_count equals driver->state_count
 (the POWERPC pseries cpuidle driver and intel_idle driver) and finally
 removes the state_count field from struct cpuidle_device.
 
 Additionally (while at it) this patchset fixes the C1E promotion disable
 quirk handling (in the intel_idle driver) and converts cpuidle driver code
 to use the common cpuidle_[un]register() routines (in the POWERPC pseries
 cpuidle driver and intel_idle driver).
 
 [1] http://permalink.gmane.org/gmane.linux.power-management.general/36908
 
 Reference to v1:
   http://comments.gmane.org/gmane.linux.power-management.general/37390
 
 Changes since v1:
 - synced patch series with next-20131220
 - added ACKs from Daniel Lezcano

I've queued up the series for 3.14, thanks!

 Best regards,
 --
 Bartlomiej Zolnierkiewicz
 Samsung R&D Institute Poland
 Samsung Electronics
 
 
 Bartlomiej Zolnierkiewicz (9):
   ARM: EXYNOS: cpuidle: fix AFTR mode check
   POWERPC: pseries: cpuidle: remove superfluous dev->state_count
 initialization
   POWERPC: pseries: cpuidle: use the common cpuidle_[un]register()
 routines
   ACPI / cpuidle: fix max idle state handling with hotplug CPU support
   ACPI / cpuidle: remove dev->state_count setting
   intel_idle: do C1E promotion disable quirk for hotplugged CPUs
   intel_idle: remove superfluous dev->state_count initialization
   intel_idle: use the common cpuidle_[un]register() routines
   cpuidle: remove state_count field from struct cpuidle_device
 
  arch/arm/mach-exynos/cpuidle.c  |   8 +-
  arch/powerpc/platforms/pseries/processor_idle.c |  59 +-
  drivers/acpi/processor_idle.c   |  29 +++--
  drivers/cpuidle/cpuidle.c   |   3 -
  drivers/cpuidle/sysfs.c |   5 +-
  drivers/idle/intel_idle.c   | 140 
 +---
  include/linux/cpuidle.h |   1 -
  7 files changed, 51 insertions(+), 194 deletions(-)
 
 

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [RFC] linux/pci: move pci_platform_pm_ops to linux/pci.h

2014-01-06 Thread Rafael J. Wysocki
On Friday, December 20, 2013 09:42:59 AM Bjorn Helgaas wrote:
 On Fri, Dec 20, 2013 at 3:03 AM, Dongsheng Wang
 dongsheng.w...@freescale.com wrote:
  From: Wang Dongsheng dongsheng.w...@freescale.com
 
  make Freescale platform use pci_platform_pm_ops struct.
 
 This changelog doesn't say anything about what the patch does.
 
 I infer that you want to use pci_platform_pm_ops from some Freescale
 code.  This patch should be posted along with the patches that add
 that Freescale code, so we can see how you intend to use it.
 
 The existing use is in drivers/pci/pci-acpi.c, so it's possible that
 your new use should be added in the same way, in drivers/pci, so we
 don't have to make pci_platform_pm_ops part of the public PCI
 interface in include/linux/pci.h.
 
 That said, if Rafael thinks this makes sense, it's OK with me.

Well, I'd like to know why exactly the change is needed in the first place.

Thanks!

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH] ASoC: fsl_ssi: Fix printing return code on clk error

2014-01-06 Thread Mark Brown
On Sun, Jan 05, 2014 at 10:21:16AM +0400, Alexander Shiyan wrote:
 Signed-off-by: Alexander Shiyan shc_w...@mail.ru

Applied, thanks.


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

Re: [PATCH 1/2] powerpc: Fix the setup of CPU-to-Node mappings during CPU online

2014-01-06 Thread Srivatsa S. Bhat
On 12/30/2013 05:05 PM, Srivatsa S. Bhat wrote:
 On POWER platforms, the hypervisor can notify the guest kernel about dynamic
 changes in the cpu-numa associativity (VPHN topology update). Hence the
 cpu-to-node mappings that we got from the firmware during boot, may no longer
 be valid after such updates. This is handled using the 
 arch_update_cpu_topology()
 hook in the scheduler, and the sched-domains are rebuilt according to the new
 mappings.
 
 But unfortunately, at the moment, CPU hotplug ignores these updated mappings
 and instead queries the firmware for the cpu-to-numa relationships and uses
 them during CPU online. So the kernel can end up assigning wrong NUMA nodes
 to CPUs during subsequent CPU hotplug online operations (after booting).
 
 Further, a particularly problematic scenario can result from this bug:
 On POWER platforms, the SMT mode can be switched between 1, 2, 4 (and even 8)
 threads per core. The switch to Single-Threaded (ST) mode is performed by
 offlining all except the first CPU thread in each core. Switching back to
 SMT mode involves onlining those other threads back, in each core.
 
 Now consider this scenario:
 
 1. During boot, the kernel gets the cpu-to-node mappings from the firmware
and assigns the CPUs to NUMA nodes appropriately, during CPU online.
 
 2. Later on, the hypervisor updates the cpu-to-node mappings dynamically and
communicates this update to the kernel. The kernel in turn updates its
cpu-to-node associations and rebuilds its sched domains. Everything is
fine so far.
 
 3. Now, the user switches the machine from SMT to ST mode (say, by running
ppc64_cpu --smt=1). This involves offlining all except 1 thread in each
core.
 
 4. The user then tries to switch back from ST to SMT mode (say, by running
ppc64_cpu --smt=4), and this involves onlining those threads back. Since
CPU hotplug ignores the new mappings, it queries the firmware and tries to
associate the newly onlined sibling threads to the old NUMA nodes. This
results in sibling threads within the same core getting associated with
different NUMA nodes, which is incorrect.
 
The scheduler's build-sched-domains code gets thoroughly confused with this
and enters an infinite loop and causes soft-lockups, as explained in detail
in commit 3be7db6ab (powerpc: VPHN topology change updates all siblings).
 
 
 So to fix this, use the numa_cpu_lookup_table to remember the updated
 cpu-to-node mappings, and use them during CPU hotplug online operations.
 Further, we also need to ensure that all threads in a core are assigned to a
 common NUMA node, irrespective of whether all those threads were online during
 the topology update. To achieve this, we take care not to use 
 cpu_sibling_mask()
 since it is not hotplug invariant. Instead, we use cpu_first_sibling_thread()
 and set up the mappings manually using the 'threads_per_core' value for that
 particular platform. This helps us ensure that we don't hit this bug with any
 combination of CPU hotplug and SMT mode switching.
 
 Cc: sta...@vger.kernel.org
 Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
 ---


Any thoughts about these patches?

Regards,
Srivatsa S. Bhat

 
  arch/powerpc/include/asm/topology.h |   10 +
  arch/powerpc/mm/numa.c  |   70 
 ++-
  2 files changed, 76 insertions(+), 4 deletions(-)
 
 diff --git a/arch/powerpc/include/asm/topology.h 
 b/arch/powerpc/include/asm/topology.h
 index 89e3ef2..d0b5fca 100644
 --- a/arch/powerpc/include/asm/topology.h
 +++ b/arch/powerpc/include/asm/topology.h
 @@ -22,7 +22,15 @@ struct device_node;
 
  static inline int cpu_to_node(int cpu)
  {
 - return numa_cpu_lookup_table[cpu];
 + int nid;
 +
 + nid = numa_cpu_lookup_table[cpu];
 +
 + /*
 +  * During early boot, the numa-cpu lookup table might not have been
 +  * setup for all CPUs yet. In such cases, default to node 0.
 +  */
  + return (nid < 0) ? 0 : nid;
  }
 
  #define parent_node(node)(node)
 diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
 index 078d3e0..6847d50 100644
 --- a/arch/powerpc/mm/numa.c
 +++ b/arch/powerpc/mm/numa.c
 @@ -31,6 +31,8 @@
  #include <asm/sparsemem.h>
  #include <asm/prom.h>
  #include <asm/smp.h>
 +#include <asm/cputhreads.h>
 +#include <asm/topology.h>
  #include <asm/firmware.h>
  #include <asm/paca.h>
  #include <asm/hvcall.h>
 @@ -152,9 +154,22 @@ static void __init get_node_active_region(unsigned long 
 pfn,
   }
  }
 
 -static void map_cpu_to_node(int cpu, int node)
 +static void reset_numa_cpu_lookup_table(void)
 +{
 + unsigned int cpu;
 +
 + for_each_possible_cpu(cpu)
 + numa_cpu_lookup_table[cpu] = -1;
 +}
 +
 +static void update_numa_cpu_lookup_table(unsigned int cpu, int node)
  {
   numa_cpu_lookup_table[cpu] = node;
 +}
 +
 +static void map_cpu_to_node(int cpu, int node)
 +{
 + update_numa_cpu_lookup_table(cpu, node);
 
   

Re: [question] Can the execution of the atomtic operation instruction pair lwarx/stwcx be interrrupted by local HW interruptions?

2014-01-06 Thread Scott Wood
On Mon, 2014-01-06 at 13:27 +0800, wyang wrote:
 
 On 01/06/2014 11:41 AM, Gavin Hu wrote:
 
  Thanks for your response.  :)
  But that means that these atomic operations like atomic_add()
  aren't actually atomic in the PPC architecture, right? Because they
  can be interrupted by local HW interrupts. Theoretically, the ISR
  can also access the atomic global variable.
  
 
 Nope, my understanding is that if you want to sync kernel primitive code
 with an ISR, you have the responsibility to disable local interrupts.
 atomic_add does not guarantee to handle such a case.

atomic_add() and other atomics do handle that case.  Interrupts are not
disabled, but there's a stwcx. in the interrupt return code to make sure
the reservation gets cleared.
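
(For reference, the exception-return path in
arch/powerpc/kernel/entry_64.S contains, roughly, the following on CPUs
whose stcx. does not check the reservation address; the surrounding
code is feature-section dependent:

	stdcx.	r0,0,r1		/* to clear the reservation */

Any store-conditional executed on the way out of the interrupt kills a
reservation held by the interrupted context, so its stwcx. fails and
the lwarx/stwcx. loop retries.)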

-Scott


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: linux-next: build failure after merge of the final tree

2014-01-06 Thread Benjamin Herrenschmidt
On Mon, 2014-01-06 at 20:28 +1100, Stephen Rothwell wrote:
 Hi all,
 
 After merging the final tree, today's linux-next build (powerpc
 allyesconfig) failed like this:
 
 arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
 arch/powerpc/kernel/exceptions-64s.S:1312: Error: attempt to move .org 
 backwards
 
 The last time I got this error, I needed to apply the patch "powerpc: Fix
 'attempt to move .org backwards' error", but that has been included in
 the powerpc tree now, so I guess something else has added code in a
 critical place. :-(
 
 I have just left this broken for today.

I had to modify that patch when applying it, it's possible that the
new version isn't making as much room. Without that change it would
fail the build on some of my configs due to some of the asm for the
maskable exception handling being too far from the conditional branches
that calls it.

I think it's time we do a bit of re-org of that file to figure out
precisely what has to be where and move things out more aggressively.

Cheers,
Ben.


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH -V3 1/2] powerpc: mm: Move ppc64 page table range definitions to separate header

2014-01-06 Thread Benjamin Herrenschmidt
On Mon, 2014-01-06 at 14:33 +0530, Aneesh Kumar K.V wrote:
 From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
 
 This avoids mmu-hash64.h including pgtable-ppc64.h. That inclusion
 causes issues like

I don't like this. We have that stuff split into too many includes
already it's a mess.

Why do we need to include it from mmu*.h ?

Cheers,
Ben.

   CC  arch/powerpc/kernel/asm-offsets.s
 In file included from 
 /home/aneesh/linus/arch/powerpc/include/asm/mmu-hash64.h:23:0,
  from /home/aneesh/linus/arch/powerpc/include/asm/mmu.h:196,
  from /home/aneesh/linus/arch/powerpc/include/asm/lppaca.h:36,
  from /home/aneesh/linus/arch/powerpc/include/asm/paca.h:21,
  from /home/aneesh/linus/arch/powerpc/include/asm/hw_irq.h:41,
  from 
 /home/aneesh/linus/arch/powerpc/include/asm/irqflags.h:11,
  from include/linux/irqflags.h:15,
  from include/linux/spinlock.h:53,
  from include/linux/seqlock.h:35,
  from include/linux/time.h:5,
  from include/uapi/linux/timex.h:56,
  from include/linux/timex.h:56,
  from include/linux/sched.h:17,
  from arch/powerpc/kernel/asm-offsets.c:17:
 /home/aneesh/linus/arch/powerpc/include/asm/pgtable-ppc64.h:563:42: error: 
 unknown type name ‘spinlock_t’
  static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
 
 Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
 ---
 
 NOTE: We can either do this or stick a typedef struct spinlock spinlock_t; in
 pgtable-ppc64.h
 
  arch/powerpc/include/asm/mmu-hash64.h  |   2 +-
  arch/powerpc/include/asm/pgtable-ppc64-range.h | 101 
 +
  arch/powerpc/include/asm/pgtable-ppc64.h   | 101 
 +
  3 files changed, 103 insertions(+), 101 deletions(-)
  create mode 100644 arch/powerpc/include/asm/pgtable-ppc64-range.h
 
 diff --git a/arch/powerpc/include/asm/mmu-hash64.h 
 b/arch/powerpc/include/asm/mmu-hash64.h
 index 807014dde821..895b4df31fec 100644
 --- a/arch/powerpc/include/asm/mmu-hash64.h
 +++ b/arch/powerpc/include/asm/mmu-hash64.h
 @@ -20,7 +20,7 @@
   * need for various slices related matters. Note that this isn't the
   * complete pgtable.h but only a portion of it.
   */
 -#include <asm/pgtable-ppc64.h>
 +#include <asm/pgtable-ppc64-range.h>
  #include <asm/bug.h>
  
  /*
 diff --git a/arch/powerpc/include/asm/pgtable-ppc64-range.h 
 b/arch/powerpc/include/asm/pgtable-ppc64-range.h
 new file mode 100644
 index ..b48b089fb209
 --- /dev/null
 +++ b/arch/powerpc/include/asm/pgtable-ppc64-range.h
 @@ -0,0 +1,101 @@
 +#ifndef _ASM_POWERPC_PGTABLE_PPC64_RANGE_H_
 +#define _ASM_POWERPC_PGTABLE_PPC64_RANGE_H_
 +/*
 + * This file contains the functions and defines necessary to modify and use
 + * the ppc64 hashed page table.
 + */
 +
 +#ifdef CONFIG_PPC_64K_PAGES
 +#include <asm/pgtable-ppc64-64k.h>
 +#else
 +#include <asm/pgtable-ppc64-4k.h>
 +#endif
 +#include <asm/barrier.h>
 +
 +#define FIRST_USER_ADDRESS   0
 +
 +/*
 + * Size of EA range mapped by our pagetables.
 + */
 +#define PGTABLE_EADDR_SIZE (PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
 +			     PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
 +#define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
 +
 +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 +#define PMD_CACHE_INDEX  (PMD_INDEX_SIZE + 1)
 +#else
 +#define PMD_CACHE_INDEX  PMD_INDEX_SIZE
 +#endif
 +/*
 + * Define the address range of the kernel non-linear virtual area
 + */
 +
 +#ifdef CONFIG_PPC_BOOK3E
 +#define KERN_VIRT_START ASM_CONST(0x8000000000000000)
 +#else
 +#define KERN_VIRT_START ASM_CONST(0xD000000000000000)
 +#endif
 +#define KERN_VIRT_SIZE	ASM_CONST(0x0000100000000000)
 +
 +/*
 + * The vmalloc space starts at the beginning of that region, and
 + * occupies half of it on hash CPUs and a quarter of it on Book3E
 + * (we keep a quarter for the virtual memmap)
 + */
 +#define VMALLOC_START	KERN_VIRT_START
 +#ifdef CONFIG_PPC_BOOK3E
 +#define VMALLOC_SIZE	(KERN_VIRT_SIZE >> 2)
 +#else
 +#define VMALLOC_SIZE	(KERN_VIRT_SIZE >> 1)
 +#endif
 +#define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)
 +
 +/*
 + * The second half of the kernel virtual space is used for IO mappings,
 + * it's itself carved into the PIO region (ISA and PHB IO space) and
 + * the ioremap space
 + *
 + *  ISA_IO_BASE = KERN_IO_START, 64K reserved area
 + *  PHB_IO_BASE = ISA_IO_BASE + 64K to ISA_IO_BASE + 2G, PHB IO spaces
 + * IOREMAP_BASE = ISA_IO_BASE + 2G to VMALLOC_START + PGTABLE_RANGE
 + */
 +#define KERN_IO_START	(KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
 +#define FULL_IO_SIZE	0x80000000ul
 +#define  ISA_IO_BASE	(KERN_IO_START)
 +#define  ISA_IO_END	(KERN_IO_START + 0x10000ul)
 +#define  PHB_IO_BASE	(ISA_IO_END)
 +#define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
 +#define IOREMAP_BASE	(PHB_IO_END)
 +#define IOREMAP_END  

Re: [question] Can the execution of the atomtic operation instruction pair lwarx/stwcx be interrrupted by local HW interruptions?

2014-01-06 Thread wyang


On 01/07/2014 06:05 AM, Scott Wood wrote:

On Mon, 2014-01-06 at 13:27 +0800, wyang wrote:

On 01/06/2014 11:41 AM, Gavin Hu wrote:


Thanks for your response.  :)
But that means that these atomic operations like atomic_add()
aren't actually atomic in the PPC architecture, right? Because they
can be interrupted by local HW interrupts. Theoretically, the ISR
can also access the atomic global variable.


Nope, my understanding is that if you want to sync kernel primitive code
with an ISR, you have the responsibility to disable local interrupts.
atomic_add does not guarantee to handle such a case.

atomic_add() and other atomics do handle that case.  Interrupts are not
disabled, but there's a stwcx. in the interrupt return code to make sure
the reservation gets cleared.


Yeah, can you provide more detail about why they can handle that
case? The following is my understanding:


Let us assume that there is an atomic global variable (var_a) and its
initial value is 0.


The kernel attempts to execute atomic_add(1, &var_a); right after the
lwarx an async interrupt happens, and the ISR also accesses var_a and
executes atomic_add(1, &var_a).


static __inline__ void atomic_add(int a, atomic_t *v)
{
	int t;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%3		# atomic_add\n\
	<-- interrupt happens here: the ISR also executes
	    atomic_add(1, &var_a), so var_a becomes 1
	add	%0,%2,%0\n"
	PPC405_ERR77(0,%3)
"	stwcx.	%0,0,%3 \n\
	<-- after the interrupt returns, the reservation has been
	    cleared, so stwcx. fails (CR0.eq is 0) and we branch back
	    to label 1; var_a ends up as 2
	bne-	1b"
	: "=&r" (t), "+m" (v->counter)
	: "r" (a), "r" (&v->counter)
	: "cc");
}

So the value of var_a is 2 rather than 1. That's why I said that
atomic_add does not handle such a case. If I missed something, please
correct me. :-)


Wei


-Scott





___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH -V3 1/2] powerpc: mm: Move ppc64 page table range definitions to separate header

2014-01-06 Thread Aneesh Kumar K.V
Benjamin Herrenschmidt b...@kernel.crashing.org writes:

 On Mon, 2014-01-06 at 14:33 +0530, Aneesh Kumar K.V wrote:
 From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
 
  This avoids mmu-hash64.h including pgtable-ppc64.h. That inclusion
  causes issues like

 I don't like this. We have that stuff split into too many includes
 already it's a mess.

I understand. Let me know, if you have any suggestion on cleaning that
up. I can do that.


 Why do we need to include it from mmu*.h ?

in mmu-hash64.h added by me via 78f1dbde9fd020419313c2a0c3b602ea2427118f

/*
 * This is necessary to get the definition of PGTABLE_RANGE which we
 * need for various slices related matters. Note that this isn't the
 * complete pgtable.h but only a portion of it.
 */
#include <asm/pgtable-ppc64.h>

-aneesh

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH] slub: Don't throw away partial remote slabs if there is no local memory

2014-01-06 Thread Wanpeng Li
On Tue, Jan 07, 2014 at 01:21:00PM +1100, Anton Blanchard wrote:

We noticed a huge amount of slab memory consumed on a large ppc64 box:

Slab:2094336 kB

Almost 2GB. This box is not balanced and some nodes do not have local
memory, causing slub to be very inefficient in its slab usage.

Each time we call kmem_cache_alloc_node slub checks the per cpu slab,
sees it isn't node local, deactivates it and tries to allocate a new
slab. On empty nodes we will allocate a new remote slab and use the
first slot, but as explained above when we get called a second time
we will just deactivate that slab and retry.

As such we end up only using 1 entry in each slab:

slab                   mem  objects
                      used   active

kmalloc-16384      1404 MB    4.90%
task_struct         668 MB    2.90%
kmalloc-128         193 MB    3.61%
kmalloc-192         152 MB    5.23%
kmalloc-8192         72 MB   23.40%
kmalloc-16           64 MB    7.43%
kmalloc-512          33 MB   22.41%

The patch below checks that a node is not empty before deactivating a
slab and trying to allocate it again. With this patch applied we now
use about 352MB:

Slab: 360192 kB

And our efficiency is much better:

slab                   mem  objects
                      used   active

kmalloc-16384       92 MB   74.27%
task_struct         23 MB   83.46%
idr_layer_cache     18 MB  100.00%
pgtable-2^12        17 MB  100.00%
kmalloc-65536       15 MB  100.00%
inode_cache         14 MB  100.00%
kmalloc-256         14 MB   97.81%
kmalloc-8192        14 MB   85.71%

Signed-off-by: Anton Blanchard an...@samba.org

Reviewed-by: Wanpeng Li liw...@linux.vnet.ibm.com

---

Thoughts? It seems like we could hit a similar situation if a machine
is balanced but we run out of memory on a single node.

Index: b/mm/slub.c
===
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2278,10 +2278,17 @@ redo:

 	if (unlikely(!node_match(page, node))) {
 		stat(s, ALLOC_NODE_MISMATCH);
-		deactivate_slab(s, page, c->freelist);
-		c->page = NULL;
-		c->freelist = NULL;
-		goto new_slab;
+
+		/*
+		 * If the node contains no memory there is no point in trying
+		 * to allocate a new node local slab
+		 */
+		if (node_spanned_pages(node)) {

s/node_spanned_pages/node_present_pages 

+			deactivate_slab(s, page, c->freelist);
+			c->page = NULL;
+			c->freelist = NULL;
+			goto new_slab;
+		}
 	}

   /*
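
(For context on the suggestion above: both helpers are simple accessors
of struct pglist_data, roughly as defined in include/linux/mmzone.h.
node_spanned_pages() covers the node's whole physical range including
holes, while node_present_pages() counts pages that actually exist,
which is what matters for a memoryless node:

	#define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
	#define node_spanned_pages(nid)	(NODE_DATA(nid)->node_spanned_pages)
)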

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majord...@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"d...@kvack.org"> em...@kvack.org </a>

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH] powerpc/mpic: supply a .disable callback

2014-01-06 Thread Dongsheng Wang
From: Wang Dongsheng dongsheng.w...@freescale.com

Currently MPIC provides .mask, but not .disable.  This means that
effectively disable_irq() soft-disables the interrupt, and you get
a .mask call if an interrupt actually occurs.

I'm not sure if this was intended as a performance benefit (it seems common
to omit .disable on powerpc interrupt controllers, but nowhere else), but it
interacts badly with threaded/workqueue interrupts (including KVM
reflection).  In such cases, where the real interrupt handler does a
disable_irq_nosync(), schedules deferred handling, and returns, we get two
interrupts for every real interrupt.  The second interrupt does nothing
but see that IRQ_DISABLED is set, and decide that it would be a good
idea to actually call .mask.
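
For context, a simplified sketch of the genirq behavior described above
(modeled on irq_disable() in kernel/irq/chip.c, not verbatim source):
when a chip supplies no .irq_disable callback, disable_irq() only marks
the descriptor disabled, and the mask happens lazily on the next
interrupt.

/* Sketch, simplified from kernel/irq/chip.c */
void irq_disable(struct irq_desc *desc)
{
	irq_state_set_disabled(desc);
	if (desc->irq_data.chip->irq_disable) {
		desc->irq_data.chip->irq_disable(&desc->irq_data);
		irq_state_set_masked(desc);
	}
	/* otherwise: stay unmasked; mask lazily when the next IRQ fires */
}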

Signed-off-by: Scott Wood scottw...@freescale.com
Signed-off-by: Wang Dongsheng dongsheng.w...@freescale.com

diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
index 0e166ed..dd7564b 100644
--- a/arch/powerpc/sysdev/mpic.c
+++ b/arch/powerpc/sysdev/mpic.c
@@ -975,6 +975,7 @@ void mpic_set_destination(unsigned int virq, unsigned int 
cpuid)
 }
 
 static struct irq_chip mpic_irq_chip = {
+	.irq_disable	= mpic_mask_irq,
.irq_mask   = mpic_mask_irq,
.irq_unmask = mpic_unmask_irq,
.irq_eoi= mpic_end_irq,
@@ -984,6 +985,7 @@ static struct irq_chip mpic_irq_chip = {
 
 #ifdef CONFIG_SMP
 static struct irq_chip mpic_ipi_chip = {
+	.irq_disable	= mpic_mask_ipi,
.irq_mask   = mpic_mask_ipi,
.irq_unmask = mpic_unmask_ipi,
.irq_eoi= mpic_end_ipi,
@@ -991,6 +993,7 @@ static struct irq_chip mpic_ipi_chip = {
 #endif /* CONFIG_SMP */
 
 static struct irq_chip mpic_tm_chip = {
+	.irq_disable	= mpic_mask_tm,
.irq_mask   = mpic_mask_tm,
.irq_unmask = mpic_unmask_tm,
.irq_eoi= mpic_end_irq,
@@ -1001,6 +1004,7 @@ static struct irq_chip mpic_tm_chip = {
 static struct irq_chip mpic_irq_ht_chip = {
.irq_startup= mpic_startup_ht_irq,
.irq_shutdown   = mpic_shutdown_ht_irq,
+	.irq_disable	= mpic_mask_irq,
.irq_mask   = mpic_mask_irq,
.irq_unmask = mpic_unmask_ht_irq,
.irq_eoi= mpic_end_ht_irq,
-- 
1.8.5


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH] powerpc/mpic: supply a .disable callback

2014-01-06 Thread Benjamin Herrenschmidt
On Tue, 2014-01-07 at 13:38 +0800, Dongsheng Wang wrote:
 From: Wang Dongsheng dongsheng.w...@freescale.com
 
 Currently MPIC provides .mask, but not .disable.  This means that
 effectively disable_irq() soft-disables the interrupt, and you get
 a .mask call if an interrupt actually occurs.
 
 I'm not sure if this was intended as a performance benefit (it seems common
 to omit .disable on powerpc interrupt controllers, but nowhere else), but it
 interacts badly with threaded/workqueue interrupts (including KVM
 reflection).  In such cases, where the real interrupt handler does a
 disable_irq_nosync(), schedules deferred handling, and returns, we get two
 interrupts for every real interrupt.  The second interrupt does nothing
 but see that IRQ_DISABLED is set, and decide that it would be a good
 idea to actually call .mask.

We probably don't want to do that for edge, only level interrupts.

Cheers,
Ben.

 
 Signed-off-by: Scott Wood scottw...@freescale.com
 Signed-off-by: Wang Dongsheng dongsheng.w...@freescale.com
 
 diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
 index 0e166ed..dd7564b 100644
 --- a/arch/powerpc/sysdev/mpic.c
 +++ b/arch/powerpc/sysdev/mpic.c
 @@ -975,6 +975,7 @@ void mpic_set_destination(unsigned int virq, unsigned int 
 cpuid)
  }
  
  static struct irq_chip mpic_irq_chip = {
 +	.irq_disable	= mpic_mask_irq,
   .irq_mask   = mpic_mask_irq,
   .irq_unmask = mpic_unmask_irq,
   .irq_eoi= mpic_end_irq,
 @@ -984,6 +985,7 @@ static struct irq_chip mpic_irq_chip = {
  
  #ifdef CONFIG_SMP
  static struct irq_chip mpic_ipi_chip = {
 +	.irq_disable	= mpic_mask_ipi,
   .irq_mask   = mpic_mask_ipi,
   .irq_unmask = mpic_unmask_ipi,
   .irq_eoi= mpic_end_ipi,
 @@ -991,6 +993,7 @@ static struct irq_chip mpic_ipi_chip = {
  #endif /* CONFIG_SMP */
  
  static struct irq_chip mpic_tm_chip = {
 +	.irq_disable	= mpic_mask_tm,
   .irq_mask   = mpic_mask_tm,
   .irq_unmask = mpic_unmask_tm,
   .irq_eoi= mpic_end_irq,
 @@ -1001,6 +1004,7 @@ static struct irq_chip mpic_tm_chip = {
  static struct irq_chip mpic_irq_ht_chip = {
   .irq_startup= mpic_startup_ht_irq,
   .irq_shutdown   = mpic_shutdown_ht_irq,
 +	.irq_disable	= mpic_mask_irq,
   .irq_mask   = mpic_mask_irq,
   .irq_unmask = mpic_unmask_ht_irq,
   .irq_eoi= mpic_end_ht_irq,


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH] ASoC: fsl_ssi: Fixed wrong printf format identifier

2014-01-06 Thread Alexander Shiyan
sound/soc/fsl/fsl_ssi.c: In function 'fsl_ssi_probe':
sound/soc/fsl/fsl_ssi.c:1180:6: warning: format '%d' expects argument
of type 'int', but argument 3 has type 'long int' [-Wformat=]

Reported-by: kbuild test robot fengguang...@intel.com
Signed-off-by: Alexander Shiyan shc_w...@mail.ru
---
 sound/soc/fsl/fsl_ssi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 3d74477a..c9d567c 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -1192,7 +1192,7 @@ static int fsl_ssi_probe(struct platform_device *pdev)
 */
 	ssi_private->baudclk = devm_clk_get(&pdev->dev, "baud");
 	if (IS_ERR(ssi_private->baudclk))
-		dev_warn(&pdev->dev, "could not get baud clock: %d\n",
+		dev_warn(&pdev->dev, "could not get baud clock: %ld\n",
 			 PTR_ERR(ssi_private->baudclk));
 	else
 		clk_prepare_enable(ssi_private->baudclk);
-- 
1.8.3.2

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH 1/2] powerpc/dts: fix lbc lack of error interrupt

2014-01-06 Thread Dongsheng Wang
From: Wang Dongsheng dongsheng.w...@freescale.com

On P1020, P1021, P1022 and P1023, when the LBC gets an error, an error
interrupt is triggered. The corresponding interrupt is internal IRQ0,
so the system has to process the LBC IRQ0 interrupt.

The corresponding LBC general interrupt is internal IRQ3.

Signed-off-by: Wang Dongsheng dongsheng.w...@freescale.com

diff --git a/arch/powerpc/boot/dts/fsl/p1020si-post.dtsi 
b/arch/powerpc/boot/dts/fsl/p1020si-post.dtsi
index 68cc5e7..13f209f 100644
--- a/arch/powerpc/boot/dts/fsl/p1020si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/p1020si-post.dtsi
@@ -36,7 +36,8 @@
#address-cells = 2;
#size-cells = 1;
compatible = fsl,p1020-elbc, fsl,elbc, simple-bus;
-	interrupts = <19 2 0 0>;
+	interrupts = <19 2 0 0
+		      16 2 0 0>;
 };
 
 /* controller at 0x9000 */
diff --git a/arch/powerpc/boot/dts/fsl/p1021si-post.dtsi 
b/arch/powerpc/boot/dts/fsl/p1021si-post.dtsi
index adb82fd..cffc93e 100644
--- a/arch/powerpc/boot/dts/fsl/p1021si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/p1021si-post.dtsi
@@ -36,7 +36,8 @@
#address-cells = 2;
#size-cells = 1;
compatible = fsl,p1021-elbc, fsl,elbc, simple-bus;
-	interrupts = <19 2 0 0>;
+	interrupts = <19 2 0 0
+		      16 2 0 0>;
 };
 
 /* controller at 0x9000 */
diff --git a/arch/powerpc/boot/dts/fsl/p1022si-post.dtsi 
b/arch/powerpc/boot/dts/fsl/p1022si-post.dtsi
index e179803..979670d 100644
--- a/arch/powerpc/boot/dts/fsl/p1022si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/p1022si-post.dtsi
@@ -40,7 +40,8 @@
 * pin muxing when the DIU is enabled.
 */
compatible = fsl,p1022-elbc, fsl,elbc;
-	interrupts = <19 2 0 0>;
+	interrupts = <19 2 0 0
+		      16 2 0 0>;
 };
 
 /* controller at 0x9000 */
diff --git a/arch/powerpc/boot/dts/fsl/p1023si-post.dtsi 
b/arch/powerpc/boot/dts/fsl/p1023si-post.dtsi
index f1105bf..f5f5043 100644
--- a/arch/powerpc/boot/dts/fsl/p1023si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/p1023si-post.dtsi
@@ -36,7 +36,8 @@
#address-cells = 2;
#size-cells = 1;
compatible = fsl,p1023-elbc, fsl,elbc, simple-bus;
-	interrupts = <19 2 0 0>;
+	interrupts = <19 2 0 0
+		      16 2 0 0>;
 };
 
 /* controller at 0xa000 */
-- 
1.8.5


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


[PATCH 2/2] powerpc/85xx: handle the eLBC error interrupt if it exist in dts

2014-01-06 Thread Dongsheng Wang
From: Wang Dongsheng dongsheng.w...@freescale.com

On P3041, P1020, P1021, P1022 and P1023, eLBC event interrupts are routed
to Int9 (P3041) & Int3 (P102x), while eLBC error interrupts are routed to
Int0, so we need to call request_irq for each.

Signed-off-by: Shaohui Xie shaohui@freescale.com
Signed-off-by: Wang Dongsheng dongsheng.w...@freescale.com
Signed-off-by: Kumar Gala ga...@kernel.crashing.org

diff --git a/arch/powerpc/include/asm/fsl_lbc.h 
b/arch/powerpc/include/asm/fsl_lbc.h
index 420b453..067fb0d 100644
--- a/arch/powerpc/include/asm/fsl_lbc.h
+++ b/arch/powerpc/include/asm/fsl_lbc.h
@@ -285,7 +285,7 @@ struct fsl_lbc_ctrl {
 	/* device info */
 	struct device		*dev;
 	struct fsl_lbc_regs __iomem *regs;
-	int			irq;
+	int			irq[2];
 	wait_queue_head_t	irq_wait;
 	spinlock_t		lock;
 	void			*nand;
diff --git a/arch/powerpc/sysdev/fsl_lbc.c b/arch/powerpc/sysdev/fsl_lbc.c
index 6bc5a54..d631022 100644
--- a/arch/powerpc/sysdev/fsl_lbc.c
+++ b/arch/powerpc/sysdev/fsl_lbc.c
@@ -214,10 +214,14 @@ static irqreturn_t fsl_lbc_ctrl_irq(int irqno, void *data)
 	struct fsl_lbc_ctrl *ctrl = data;
 	struct fsl_lbc_regs __iomem *lbc = ctrl->regs;
 	u32 status;
+	unsigned long flags;
 
+	spin_lock_irqsave(&fsl_lbc_lock, flags);
 	status = in_be32(&lbc->ltesr);
-	if (!status)
+	if (!status) {
+		spin_unlock_irqrestore(&fsl_lbc_lock, flags);
 		return IRQ_NONE;
+	}
 
 	out_be32(&lbc->ltesr, LTESR_CLEAR);
 	out_be32(&lbc->lteatr, 0);
@@ -260,6 +264,7 @@ static irqreturn_t fsl_lbc_ctrl_irq(int irqno, void *data)
 	if (status & ~LTESR_MASK)
 		dev_err(ctrl->dev, "Unknown error: "
 			"LTESR 0x%08X\n", status);
+	spin_unlock_irqrestore(&fsl_lbc_lock, flags);
 	return IRQ_HANDLED;
 }
 
@@ -298,8 +303,8 @@ static int fsl_lbc_ctrl_probe(struct platform_device *dev)
 		goto err;
 	}
 
-	fsl_lbc_ctrl_dev->irq = irq_of_parse_and_map(dev->dev.of_node, 0);
-	if (fsl_lbc_ctrl_dev->irq == NO_IRQ) {
+	fsl_lbc_ctrl_dev->irq[0] = irq_of_parse_and_map(dev->dev.of_node, 0);
+	if (!fsl_lbc_ctrl_dev->irq[0]) {
 		dev_err(&dev->dev, "failed to get irq resource\n");
 		ret = -ENODEV;
 		goto err;
@@ -311,20 +316,34 @@ static int fsl_lbc_ctrl_probe(struct platform_device *dev)
 	if (ret < 0)
 		goto err;
 
-	ret = request_irq(fsl_lbc_ctrl_dev->irq, fsl_lbc_ctrl_irq, 0,
+	ret = request_irq(fsl_lbc_ctrl_dev->irq[0], fsl_lbc_ctrl_irq, 0,
 			  "fsl-lbc", fsl_lbc_ctrl_dev);
 	if (ret != 0) {
 		dev_err(&dev->dev, "failed to install irq (%d)\n",
-			fsl_lbc_ctrl_dev->irq);
-		ret = fsl_lbc_ctrl_dev->irq;
+			fsl_lbc_ctrl_dev->irq[0]);
+		ret = fsl_lbc_ctrl_dev->irq[0];
 		goto err;
 	}
 
+	fsl_lbc_ctrl_dev->irq[1] = irq_of_parse_and_map(dev->dev.of_node, 1);
+	if (fsl_lbc_ctrl_dev->irq[1]) {
+		ret = request_irq(fsl_lbc_ctrl_dev->irq[1], fsl_lbc_ctrl_irq,
+				  IRQF_SHARED, "fsl-lbc-err", fsl_lbc_ctrl_dev);
+		if (ret) {
+			dev_err(&dev->dev, "failed to install irq (%d)\n",
+				fsl_lbc_ctrl_dev->irq[1]);
+			ret = fsl_lbc_ctrl_dev->irq[1];
+			goto err1;
+		}
+	}
+
 	/* Enable interrupts for any detected events */
 	out_be32(&fsl_lbc_ctrl_dev->regs->lteir, LTEIR_ENABLE);
 
 	return 0;
 
+err1:
+	free_irq(fsl_lbc_ctrl_dev->irq[0], fsl_lbc_ctrl_dev);
 err:
 	iounmap(fsl_lbc_ctrl_dev->regs);
 	kfree(fsl_lbc_ctrl_dev);
-- 
1.8.5


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [question] Can the execution of the atomtic operation instruction pair lwarx/stwcx be interrrupted by local HW interruptions?

2014-01-06 Thread Scott Wood
On Tue, 2014-01-07 at 09:00 +0800, wyang wrote:
 Yeah, can you provide more detail about why they can handle that
 case? The following is my understanding:
 
 Let us assume that there is an atomic global variable (var_a) and its
 initial value is 0.
 
 The kernel attempts to execute atomic_add(1, &var_a); right after the
 lwarx an async interrupt happens, and the ISR also accesses var_a and
 executes atomic_add(1, &var_a).
 
 static __inline__ void atomic_add(int a, atomic_t *v)
 {
 	int t;
 
 	__asm__ __volatile__(
 "1:	lwarx	%0,0,%3		# atomic_add\n\
 	<-- interrupt happens here: the ISR also executes
 	    atomic_add(1, &var_a), so var_a becomes 1
 	add	%0,%2,%0\n"
 	PPC405_ERR77(0,%3)
 "	stwcx.	%0,0,%3 \n\
 	<-- after the interrupt returns, the reservation has been
 	    cleared, so stwcx. fails (CR0.eq is 0) and we branch back
 	    to label 1; var_a ends up as 2
 	bne-	1b"
 	: "=&r" (t), "+m" (v->counter)
 	: "r" (a), "r" (&v->counter)
 	: "cc");
 }
 
 So the value of var_a is 2 rather than 1. That's why I said that
 atomic_add does not handle such a case. If I missed something, please
 correct me. :-)

2 is the correct result, since atomic_add(1, var_a) was called twice
(once in the ISR, once in the interrupted context).
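
(Illustration of what the lwarx/stwcx. loop prevents: with a plain,
non-atomic read-modify-write, the ISR's increment could be lost:

	/* main context */            /* ISR */
	t = var_a;      /* reads 0 */
	                              var_a += 1;  /* var_a = 1 */
	var_a = t + 1;  /* var_a = 1: the ISR's increment is lost */

Because stwcx. fails when the reservation was cleared by the intervening
store, the interrupted atomic_add() retries and both increments land,
giving var_a == 2.)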

-Scott


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH] powerpc/mpic: supply a .disable callback

2014-01-06 Thread Scott Wood
On Tue, 2014-01-07 at 13:38 +0800, Dongsheng Wang wrote:
 From: Wang Dongsheng dongsheng.w...@freescale.com

Why did you change the author field?

-Scott


___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH] slub: Don't throw away partial remote slabs if there is no local memory

2014-01-06 Thread Andi Kleen
Anton Blanchard an...@samba.org writes:

 Thoughts? It seems like we could hit a similar situation if a machine
 is balanced but we run out of memory on a single node.

Yes I agree, but your patch doesn't seem to attempt to handle this?

-Andi

 Index: b/mm/slub.c
 ===
 --- a/mm/slub.c
 +++ b/mm/slub.c
 @@ -2278,10 +2278,17 @@ redo:
  
   if (unlikely(!node_match(page, node))) {
   stat(s, ALLOC_NODE_MISMATCH);
  -	deactivate_slab(s, page, c->freelist);
  -	c->page = NULL;
  -	c->freelist = NULL;
  -	goto new_slab;
  +
  +	/*
  +	 * If the node contains no memory there is no point in trying
  +	 * to allocate a new node local slab
  +	 */
  +	if (node_spanned_pages(node)) {
  +		deactivate_slab(s, page, c->freelist);
  +		c->page = NULL;
  +		c->freelist = NULL;
  +		goto new_slab;
  +	}
   }
  
   /*
-- 
a...@linux.intel.com -- Speaking for myself only
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [question] Can the execution of the atomtic operation instruction pair lwarx/stwcx be interrrupted by local HW interruptions?

2014-01-06 Thread wyang


On 01/07/2014 02:35 PM, Scott Wood wrote:

On Tue, 2014-01-07 at 09:00 +0800, wyang wrote:

Yeah, can you provide more detail about why they can handle that
case? The following is my understanding:

Let us assume that there is an atomic global variable (var_a) and its
initial value is 0.

The kernel attempts to execute atomic_add(1, &var_a); right after the
lwarx an async interrupt happens, and the ISR also accesses var_a and
executes atomic_add(1, &var_a).

static __inline__ void atomic_add(int a, atomic_t *v)
{
	int t;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%3		# atomic_add\n\
	<-- interrupt happens here: the ISR also executes
	    atomic_add(1, &var_a), so var_a becomes 1
	add	%0,%2,%0\n"
	PPC405_ERR77(0,%3)
"	stwcx.	%0,0,%3 \n\
	<-- after the interrupt returns, the reservation has been
	    cleared, so stwcx. fails (CR0.eq is 0) and we branch back
	    to label 1; var_a ends up as 2
	bne-	1b"
	: "=&r" (t), "+m" (v->counter)
	: "r" (a), "r" (&v->counter)
	: "cc");
}

So the value of var_a is 2 rather than 1. That's why I said that
atomic_add does not handle such a case. If I missed something, please
correct me. :-)

2 is the correct result, since atomic_add(1, var_a) was called twice
(once in the ISR, once in the interrupted context).
Scott, thanks for your confirmation. I guess that Gavin thought that 1
is the correct result. That's why I said that if he wants to get 1,
he has the responsibility to disable local interrupts. I mean that
atomic_add is not able to guarantee that 1 is the correct result. :-)


Wei


-Scott





___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev