Re: [PATCH] crypto: fix FTBFS with ARM SHA1-asm and THUMB2_KERNEL

2013-01-21 Thread Jussi Kivilinna

Quoting Jussi Kivilinna :


Quoting Matt Sealey :


This question is to the implementor/committer (Dave McCullough), how
exactly did you measure the benchmark and can we reproduce it on some
other ARM box?

If it's long and laborious and not so important to test the IPsec
tunnel use-case, what would be the simplest possible benchmark to see
if the C vs. assembly version is faster for a particular ARM device? I
can get hold of pretty much any Cortex-A8 or Cortex-A9 that matters, I
have access to a Chromebook for A15, and maybe an i.MX27 or i.MX35 and
a couple Marvell boards (ARMv6) if I set my mind to it... that much
testing implies we find a pretty concise benchmark though with a
fairly common kernel version we can spread around (i.MX, OMAP and the
Chromebook, I can handle, the rest I'm a little wary of bothering to
spend too much time on). I think that could cover a good swath of
not-ARMv5 use cases from lower speeds to quad core monsters.. but I
might stick to i.MX to start with..


There is a 'tcrypt' module in crypto/ for quick benchmarking.
'modprobe tcrypt mode=500 sec=1' tests AES in various cipher modes,
using different buffer sizes, and outputs the results to the kernel log.




Actually mode=200 might be better, as mode=500 is for asynchronous
implementations and might use hardware crypto if such a device/module is
available.


-Jussi




Re: [PATCH 33/33] ASoC: Convert to devm_ioremap_resource()

2013-01-21 Thread Thierry Reding
On Tue, Jan 22, 2013 at 04:48:26PM +0900, Mark Brown wrote:
> On Mon, Jan 21, 2013 at 11:09:26AM +0100, Thierry Reding wrote:
> > Convert all uses of devm_request_and_ioremap() to the newly introduced
> > devm_ioremap_resource() which provides more consistent error handling.
> 
> Applied, thanks.

It's probably too early to apply this, since the first patch in the
series, which introduces the new function, hasn't been merged yet. I
seem to have handled this poorly, as David Miller already pointed out,
by not Cc'ing everyone involved on the first patch.

Thierry




Re: [PATCH] crypto: fix FTBFS with ARM SHA1-asm and THUMB2_KERNEL

2013-01-21 Thread Jussi Kivilinna

Quoting Matt Sealey :


This question is to the implementor/committer (Dave McCullough), how
exactly did you measure the benchmark and can we reproduce it on some
other ARM box?

If it's long and laborious and not so important to test the IPsec
tunnel use-case, what would be the simplest possible benchmark to see
if the C vs. assembly version is faster for a particular ARM device? I
can get hold of pretty much any Cortex-A8 or Cortex-A9 that matters, I
have access to a Chromebook for A15, and maybe an i.MX27 or i.MX35 and
a couple Marvell boards (ARMv6) if I set my mind to it... that much
testing implies we find a pretty concise benchmark though with a
fairly common kernel version we can spread around (i.MX, OMAP and the
Chromebook, I can handle, the rest I'm a little wary of bothering to
spend too much time on). I think that could cover a good swath of
not-ARMv5 use cases from lower speeds to quad core monsters.. but I
might stick to i.MX to start with..


There is a 'tcrypt' module in crypto/ for quick benchmarking. 'modprobe
tcrypt mode=500 sec=1' tests AES in various cipher modes, using
different buffer sizes, and outputs the results to the kernel log.


-Jussi



Re: [PATCH] lib: vsprintf: Add %pa format specifier for phys_addr_t types

2013-01-21 Thread Joe Perches
On Tue, 2013-01-22 at 09:29 +0200, Andy Shevchenko wrote:
> On Mon, 2013-01-21 at 21:47 -0800, Stepan Moskovchenko wrote: 
> > Add the %pa format specifier for printing a phys_addr_t
> > type, since the physical address size on some platforms
> > can vary based on build options, regardless of the native
> > integer type.
[]
> > diff --git a/lib/vsprintf.c b/lib/vsprintf.c
[]
> > @@ -1112,6 +1113,12 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
> > return netdev_feature_string(buf, end, ptr, spec);
> > }
> > break;
> > +   case 'a':
> > +   spec.flags |= SPECIAL | SMALL | ZEROPAD;
> > +   spec.field_width = sizeof(phys_addr_t) * 2;

I believe this should be:

spec.field_width = sizeof(phys_addr_t) * 2 + 2;
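
(Illustration, not from the patch: SPECIAL makes %pa emit the "0x" prefix
inside the field width, so without the extra 2 the zero-padding loses two
digits. With a 32-bit phys_addr_t:)

	phys_addr_t start = 0x1000;
	/* width = 2 * sizeof(phys_addr_t) + 2 = 10  =>  "0x00001000";
	 * with only 2 * sizeof(phys_addr_t) = 8     =>  "0x001000"   */
	printk(KERN_INFO "start: %pa\n", &start);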



[PATCH] serial:ifx6x60: Remove memset for SPI frame

2013-01-21 Thread channing

There is no need to memset the SPI frame memory to 0 before preparing
the transfer frame bits, because the SPI frame header is encoded with
the valid data size, so there is no need to worry about picking up
stale bits. Moreover, zeroing each SPI frame may hurt SPI throughput.

Signed-off-by: Chen Jun 
Signed-off-by: channing 
---
 drivers/tty/serial/ifx6x60.c |1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/drivers/tty/serial/ifx6x60.c b/drivers/tty/serial/ifx6x60.c
index 8cb6d8d..fa4ec7e 100644
--- a/drivers/tty/serial/ifx6x60.c
+++ b/drivers/tty/serial/ifx6x60.c
@@ -481,7 +481,6 @@ static int ifx_spi_prepare_tx_buffer(struct ifx_spi_device 
*ifx_dev)
unsigned char *tx_buffer;
 
tx_buffer = ifx_dev->tx_buffer;
-   memset(tx_buffer, 0, IFX_SPI_TRANSFER_SIZE);
 
/* make room for required SPI header */
tx_buffer += IFX_SPI_HEADER_OVERHEAD;
-- 
1.7.1





[PATCH V3 RESEND RFC 0/2] kvm: Improving undercommit scenarios

2013-01-21 Thread Raghavendra K T
 In some special scenarios like #vcpu <= #pcpu, the PLE handler may
prove very costly, because there is no need to iterate over vcpus
and do unsuccessful yield_to() calls, burning CPU.

 The first patch optimizes yield_to() by bailing out when there is no
 need to continue (i.e., when both the source and target rq have only
 one task).

 The second patch uses that in the PLE handler. Further, when a yield_to()
 fails, we do not immediately leave the PLE handler; instead we retry
 thrice, to better tolerate statistically spurious failures. Otherwise,
 moderate overcommit cases would be hurt.
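
 A sketch of the core idea (reproduced from the patches from memory, so
 treat it as illustrative rather than authoritative):

	/* kernel/sched/core.c: yield_to() -- nothing to gain if both the
	 * source and the target runqueue hold a single task, so bail out. */
	if (rq->nr_running == 1 && p_rq->nr_running == 1) {
		yielded = -ESRCH;
		goto out_irq;
	}

	/* virt/kvm/kvm_main.c: kvm_vcpu_on_spin() -- tolerate a few
	 * failed yields before leaving the PLE handler, so that moderate
	 * overcommit is not penalized by spurious failures. */
	int try = 3;
	...
	yielded = kvm_vcpu_yield_to(vcpu);
	if (yielded > 0)
		break;
	else if (yielded < 0 && !--try)
		break;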
 
 Results on a 3.7.0-rc6 kernel show around 140% improvement for ebizzy 1x
 and around 51% for dbench 1x, on a 32-core PLE machine with a 32-vcpu guest.


base = 3.7.0-rc6 
machine: 32 core mx3850 x5 PLE mc

------+------------+-----------+-------------+------------+------------
          ebizzy (records/sec, higher is better)
------+------------+-----------+-------------+------------+------------
      |    base    |   stdev   |   patched   |   stdev    |  %improve
------+------------+-----------+-------------+------------+------------
  1x  |  2511.3000 |   21.5409 |   6051.8000 |   170.2592 |  140.98276
  2x  |  2679.4000 |  332.4482 |   2692.3000 |   251.4005 |    0.48145
  3x  |  2253.5000 |  266.4243 |   2192.1667 |   178.9753 |   -2.72169
------+------------+-----------+-------------+------------+------------

------+------------+-----------+-------------+------------+------------
          dbench (throughput in MB/sec, higher is better)
------+------------+-----------+-------------+------------+------------
      |    base    |   stdev   |   patched   |   stdev    |  %improve
------+------------+-----------+-------------+------------+------------
  1x  |  6677.4080 |  638.5048 |  10098.0060 |  3449.7026 |   51.22643
  2x  |  2012.6760 |   64.7642 |   2019.0440 |    62.6702 |    0.31639
  3x  |  1302.0783 |   40.8336 |   1292.7517 |    27.0515 |   -0.71629
------+------------+-----------+-------------+------------+------------

For reference, here are the no-PLE results.
 ebizzy-1x_nople 7592.6000 rec/sec
 dbench_1x_nople 7853.6960 MB/sec

The results say we can still improve by around 60% for ebizzy, but overall
we are getting impressive performance with the patches.

 Changes since V2:
 - Dropped the global measures usage patch (Peter Zijlstra)
 - Do not bail out on first failure (Avi Kivity)
 - Retry thrice on yield_to() failure for statistically more correct
   behaviour.

 Changes since V1:
 - Discard the idea of exporting nr_running and optimize in the core
   scheduler (Peter)
 - Use yield() instead of schedule() in overcommit scenarios (Rik)
 - Use loadavg knowledge to detect undercommit/overcommit

 Peter Zijlstra (1):
  Bail out of yield_to when source and target runqueue has one task

 Raghavendra K T (1):
  Handle yield_to failure return for potential undercommit case

 Please let me know your comments and suggestions.

 Link for the discussion of V3 original:
 https://lkml.org/lkml/2012/11/26/166

 Link for V2:
 https://lkml.org/lkml/2012/10/29/287

 Link for V1:
 https://lkml.org/lkml/2012/9/21/168

 kernel/sched/core.c | 25 +++--
 virt/kvm/kvm_main.c | 26 --
 2 files changed, 35 insertions(+), 16 deletions(-)



[PATCH v5 08/45] CPU hotplug: Convert preprocessor macros to static inline functions

2013-01-21 Thread Srivatsa S. Bhat
On 12/05/2012 06:10 AM, Andrew Morton wrote:
"static inline C functions would be preferred if possible.  Feel free to
fix up the wrong crufty surrounding code as well ;-)"

Convert the macros in the CPU hotplug code to static inline C functions.

Signed-off-by: Srivatsa S. Bhat 
---

 include/linux/cpu.h |8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index cf24da1..eb79f47 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -198,10 +198,10 @@ static inline void cpu_hotplug_driver_unlock(void)
 
 #else  /* CONFIG_HOTPLUG_CPU */
 
-#define get_online_cpus()  do { } while (0)
-#define put_online_cpus()  do { } while (0)
-#define get_online_cpus_atomic()   do { } while (0)
-#define put_online_cpus_atomic()   do { } while (0)
+static inline void get_online_cpus(void) {}
+static inline void put_online_cpus(void) {}
+static inline void get_online_cpus_atomic(void) {}
+static inline void put_online_cpus_atomic(void) {}
 #define hotcpu_notifier(fn, pri)   do { (void)(fn); } while (0)
 /* These aren't inline functions due to a GCC bug. */
 #define register_hotcpu_notifier(nb)   ({ (void)(nb); 0; })



[PATCH v5 22/45] infiniband: ehca: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Roland Dreier 
Signed-off-by: Srivatsa S. Bhat 
---

 drivers/infiniband/hw/ehca/ehca_irq.c |8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/infiniband/hw/ehca/ehca_irq.c 
b/drivers/infiniband/hw/ehca/ehca_irq.c
index 8615d7c..d61936c 100644
--- a/drivers/infiniband/hw/ehca/ehca_irq.c
+++ b/drivers/infiniband/hw/ehca/ehca_irq.c
@@ -43,6 +43,7 @@
 
 #include 
 #include 
+#include <linux/cpu.h>
 
 #include "ehca_classes.h"
 #include "ehca_irq.h"
@@ -653,6 +654,9 @@ void ehca_tasklet_eq(unsigned long data)
ehca_process_eq((struct ehca_shca*)data, 1);
 }
 
+/*
+ * Must be called under get_online_cpus_atomic() and put_online_cpus_atomic().
+ */
 static int find_next_online_cpu(struct ehca_comp_pool *pool)
 {
int cpu;
@@ -703,6 +707,7 @@ static void queue_comp_task(struct ehca_cq *__cq)
int cq_jobs;
unsigned long flags;
 
+   get_online_cpus_atomic();
cpu_id = find_next_online_cpu(pool);
BUG_ON(!cpu_online(cpu_id));
 
@@ -720,6 +725,7 @@ static void queue_comp_task(struct ehca_cq *__cq)
BUG_ON(!cct || !thread);
}
__queue_comp_task(__cq, cct, thread);
+   put_online_cpus_atomic();
 }
 
 static void run_comp_task(struct ehca_cpu_comp_task *cct)
@@ -759,6 +765,7 @@ static void comp_task_park(unsigned int cpu)
list_splice_init(&cct->cq_list, &list);
spin_unlock_irq(&cct->task_lock);
 
+   get_online_cpus_atomic();
cpu = find_next_online_cpu(pool);
target = per_cpu_ptr(pool->cpu_comp_tasks, cpu);
thread = *per_cpu_ptr(pool->cpu_comp_threads, cpu);
@@ -768,6 +775,7 @@ static void comp_task_park(unsigned int cpu)
__queue_comp_task(cq, target, thread);
}
spin_unlock_irq(&target->task_lock);
+   put_online_cpus_atomic();
 }
 
 static void comp_task_stop(unsigned int cpu, bool online)



[PATCH v5 07/45] CPU hotplug: Provide APIs to prevent CPU offline from atomic context

2013-01-21 Thread Srivatsa S. Bhat
There are places where preempt_disable() or local_irq_disable() are used
to prevent any CPU from going offline during the critical section. Let us
call them "atomic hotplug readers" ("atomic" because they run in atomic,
non-preemptible contexts).

Today, preempt_disable() or its equivalent works because the hotplug writer
uses stop_machine() to take CPUs offline. But once stop_machine() is gone
from the CPU hotplug offline path, the readers won't be able to prevent
CPUs from going offline using preempt_disable().

So the intent here is to provide synchronization APIs for such atomic hotplug
readers, to prevent (any) CPUs from going offline, without depending on
stop_machine() at the writer-side. The new APIs will look something like
this: get_online_cpus_atomic() and put_online_cpus_atomic().
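
A minimal usage sketch (do_work_on() is just a placeholder):

	get_online_cpus_atomic();
	for_each_online_cpu(cpu)
		do_work_on(cpu);	/* no CPU can go offline here */
	put_online_cpus_atomic();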

Some important design requirements and considerations:
-------------------------------------------------------

1. Scalable synchronization at the reader-side, especially in the fast-path

   Any synchronization at the atomic hotplug reader side must be highly
   scalable - avoid global single-holder locks/counters etc., because these
   paths currently use the extremely fast preempt_disable(). Our replacement
   for preempt_disable() should not become ridiculously costly and should
   not serialize the readers among themselves needlessly.

   At a minimum, the new APIs must be extremely fast at the reader side,
   at least in the fast-path, when no CPU offline writers are active.

2. preempt_disable() was recursive. The replacement should also be recursive.

3. No (new) lock-ordering restrictions

   preempt_disable() was super-flexible. It didn't impose any ordering
   restrictions or rules for nesting. Our replacement should also be equally
   flexible and usable.

4. No deadlock possibilities

   Regular per-cpu locking is not the way to go if we want to have relaxed
   rules for lock-ordering. Because, we can end up in circular-locking
   dependencies as explained in https://lkml.org/lkml/2012/12/6/290

   So, avoid the usual per-cpu locking schemes (per-cpu locks/per-cpu atomic
   counters with spin-on-contention etc) as much as possible, to avoid
   numerous deadlock possibilities from creeping in.


Implementation of the design:


We use per-CPU reader-writer locks for synchronization because:

  a. They are quite fast and scalable in the fast-path (when no writers are
 active), since they use fast per-cpu counters in those paths.

  b. They are recursive at the reader side.

  c. They provide a good amount of safety against deadlocks; they don't
 spring new deadlock possibilities on us from out of nowhere. As a
 result, they have relaxed locking rules and are quite flexible, and
 thus are best suited for replacing usages of preempt_disable() or
 local_irq_disable() at the reader side.

Together, these satisfy all the requirements mentioned above.
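
As a rough illustration of the reader fast-path (a conceptual sketch only,
with made-up names - not the actual percpu_rwlock implementation):

	static DEFINE_PER_CPU(int, reader_refcnt);
	static bool writer_active;		/* set only by the hotplug writer */
	static DEFINE_RWLOCK(slowpath_rwlock);

	static inline void sketch_read_lock(void)
	{
		preempt_disable();
		if (likely(!ACCESS_ONCE(writer_active)))
			this_cpu_inc(reader_refcnt);	/* fast, per-cpu, recursive */
		else
			read_lock(&slowpath_rwlock);	/* writer pending: serialize */
	}

The writer side would set writer_active and then wait for every CPU's
reader_refcnt to drain to zero (which closes the race with the check above),
and the unlock path must mirror whichever branch the lock took.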

I'm indebted to Michael Wang and Xiao Guangrong for their numerous thoughtful
suggestions and ideas, which inspired and influenced many of the decisions in
this as well as previous designs. Thanks a lot Michael and Xiao!

Cc: Russell King 
Cc: Mike Frysinger 
Cc: Tony Luck 
Cc: Ralf Baechle 
Cc: David Howells 
Cc: "James E.J. Bottomley" 
Cc: Benjamin Herrenschmidt 
Cc: Martin Schwidefsky 
Cc: Paul Mundt 
Cc: "David S. Miller" 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: uclinux-dist-de...@blackfin.uclinux.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@linux-mips.org
Cc: linux-am33-l...@redhat.com
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/arm/Kconfig  |1 +
 arch/blackfin/Kconfig |1 +
 arch/ia64/Kconfig |1 +
 arch/mips/Kconfig |1 +
 arch/mn10300/Kconfig  |1 +
 arch/parisc/Kconfig   |1 +
 arch/powerpc/Kconfig  |1 +
 arch/s390/Kconfig |1 +
 arch/sh/Kconfig   |1 +
 arch/sparc/Kconfig|1 +
 arch/x86/Kconfig  |1 +
 include/linux/cpu.h   |4 +++
 kernel/cpu.c  |   57 ++---
 13 files changed, 69 insertions(+), 3 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 67874b8..cb6b94b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1616,6 +1616,7 @@ config NR_CPUS
 config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP && HOTPLUG
+   select PERCPU_RWLOCK
help
  Say Y here to experiment with turning CPUs off and on.  CPUs
  can be controlled through /sys/devices/system/cpu.
diff --git a/arch/blackfin/Kconfig b/arch/blackfin/Kconfig
index b6f3ad5..83d9882 100644
--- a/arch/blackfin/Kconfig
+++ b/arch/blackfin/Kconfig
@@ -261,6 +261,7 @@ config NR_CPUS
 config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
  

Re: [PATCH v2 29/76] ARC: Boot #1: low-level, setup_arch(), /proc/cpuinfo, mem init

2013-01-21 Thread Vineet Gupta
On Friday 18 January 2013 08:15 PM, Arnd Bergmann wrote:
> On Friday 18 January 2013, Vineet Gupta wrote:
>> +   /* setup bootmem allocator */
>> +   bootmap_sz = init_bootmem_node(NODE_DATA(0),
>> +  first_free_pfn,/* bitmap start */
>> +  min_low_pfn,   /* First pg to track */
>> +  max_low_pfn);  /* Last pg to track */
>> +
>> +   /*
>> +* init_bootmem above marks all tracked Page-frames as inuse 
>> "allocated"
>> +* This includes pages occupied by kernel's elf segments.
>> +* Beyond that, excluding bootmem bitmap itself, mark the rest of
>> +* free-mem as "allocatable"
>> +*/
>> +   alloc_start = kernel_img_end + bootmap_sz;
>> +   free_bootmem(alloc_start, end_mem - alloc_start);
>> +
>> +   memset(zones_size, 0, sizeof(zones_size));
>> +   zones_size[ZONE_NORMAL] = num_physpages;
>> +
> IIRC, the bootmem allocator is no longer recommended for new architecture.
> You should use the "memblock" interface instead, as arm64 and tile do.
>
> I just saw that this is still listed as TODO for openrisc, sorry if I
> put you on the wrong track there by recommending to copy from openrisc.
>
>   Arnd

How does the following look? This is RFC only, and I'll squash it into the
Boot #1 patch.

From: Vineet Gupta 
Date: Tue, 22 Jan 2013 13:03:50 +0530
Subject: [PATCH] RFC: Convert ARC port from bootmem to memblock

Signed-off-by: Vineet Gupta 
---
 arch/arc/Kconfig  |2 ++
 arch/arc/kernel/devtree.c |4 ++--
 arch/arc/mm/init.c|   35 +--
 3 files changed, 13 insertions(+), 28 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index e8947c7..76ead56 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -29,11 +29,13 @@ config ARC
 select HAVE_IRQ_WORK
 select HAVE_KPROBES
 select HAVE_KRETPROBES
+select HAVE_MEMBLOCK
 select HAVE_MOD_ARCH_SPECIFIC if ARC_DW2_UNWIND
 select HAVE_OPROFILE
 select HAVE_PERF_EVENTS
 select IRQ_DOMAIN
 select MODULES_USE_ELF_RELA
+select NO_BOOTMEM
 select OF
 select OF_EARLY_FLATTREE
 select PERF_USE_VMALLOC
diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c
index 626fba14..051ec8b 100644
--- a/arch/arc/kernel/devtree.c
+++ b/arch/arc/kernel/devtree.c
@@ -14,7 +14,7 @@
 #include 
 #include 
 #include 
-#include <linux/bootmem.h>
+#include <linux/memblock.h>
 #include 
 #include 
 #include 
@@ -26,7 +26,7 @@
 /* called from unflatten_device_tree() to bootstrap devicetree itself */
 void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
 {
-return alloc_bootmem_align(size, align);
+return __va(memblock_alloc(size, align));
 }
 
 /**
diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
index 8c173f9..80dbdc7 100644
--- a/arch/arc/mm/init.c
+++ b/arch/arc/mm/init.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include <linux/memblock.h>
 #ifdef CONFIG_BLOCK_DEV_RAM
 #include 
 #endif
@@ -42,6 +43,7 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 {
 arc_mem_sz = size & PAGE_MASK;
 pr_info("Memory size set via devicetree %ldM\n", TO_MB(arc_mem_sz));
+memblock_add(CONFIG_LINUX_LINK_BASE, arc_mem_sz);
 }
 
 /*
@@ -52,9 +54,6 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
  */
 void __init setup_arch_memory(void)
 {
-int bootmap_sz;
-unsigned int first_free_pfn;
-unsigned long kernel_img_end, alloc_start;
 unsigned long zones_size[MAX_NR_ZONES] = { 0, 0 };
 unsigned long end_mem = CONFIG_LINUX_LINK_BASE + arc_mem_sz;
 
@@ -63,39 +62,23 @@ void __init setup_arch_memory(void)
 init_mm.end_data = (unsigned long)_edata;
 init_mm.brk = (unsigned long)_end;
 
-/* _end needs to be page aligned */
-kernel_img_end = (unsigned long)_end;
-BUG_ON(kernel_img_end & ~PAGE_MASK);
+/*- externs in mm need setting up ---*/
 
 /* first page of system - kernel .vector starts here */
 min_low_pfn = PFN_DOWN(CONFIG_LINUX_LINK_BASE);
 
-/* First free page beyond kernel image */
-first_free_pfn = PFN_DOWN(kernel_img_end);
-
-/*
- * Last usable page of low mem (no HIGHMEM yet for ARC port)
- * -must be BASE + SIZE
- */
+/* Last usable page of low mem (no HIGHMEM yet for ARC port) */
 max_low_pfn = max_pfn = PFN_DOWN(end_mem);
 
 max_mapnr = num_physpages = max_low_pfn - min_low_pfn;
 
-/* setup bootmem allocator */
-bootmap_sz = init_bootmem_node(NODE_DATA(0),
-   first_free_pfn,/* bitmap start */
-   min_low_pfn,   /* First pg to track */
-   max_low_pfn);  /* Last pg to track */
+/*- reserve kernel image ---*/
+memblock_reserve(CONFIG_LINUX_LINK_BASE,
+ __pa(_end) - CONFIG_LINUX_LINK_BASE);
 
-/*
- * init_bootmem above marks all tracked Page-frames as 

Re: [PATCH v3 09/22] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task

2013-01-21 Thread Alex Shi
On 01/22/2013 02:55 PM, Mike Galbraith wrote:
> On Tue, 2013-01-22 at 11:20 +0800, Alex Shi wrote: 
>>
>> I just looked into the aim9 benchmark. In this case it forks 2000 tasks;
>> after all tasks are ready, aim9 gives a signal, then all tasks burst awake
>> and run until all are finished.
>> Since each task finishes very quickly, an imbalanced empty cpu may go to
>> sleep till a regular balancing pass gives it some new tasks. That causes
>> the performance drop, with more idle time entered.
>
> Sounds like for AIM (and possibly for other really bursty loads), we
> might want to do some load-balancing at wakeup time by *just* looking
> at the number of running tasks, rather than at the load average. Hmm?
>
> The load average is fundamentally always going to run behind a bit,
> and while you want to use it for long-term balancing, a short-term you
> might want to do just a "if we have a huge amount of runnable
> processes, do a load balancing *now*". Where "huge amount" should
> probably be relative to the long-term load balancing (ie comparing the
> number of runnable processes on this CPU right *now* with the load
> average over the last second or so would show a clear spike, and a
> reason for quick action).
>
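
(A hypothetical sketch of the heuristic suggested above -- the
avg_nr_running field and the factor of 2 are made up for illustration:)

	/* on enqueue: a clear spike of runnable tasks over the longer-term
	 * average suggests a burst -- kick the balancer now instead of
	 * waiting for the periodic balance tick. */
	if (this_rq->nr_running > 2 * max(this_rq->avg_nr_running, 1UL))
		raise_softirq(SCHED_SOFTIRQ);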

 Sorry for the late response!

 I just wrote a patch following your suggestion, but there is no clear
 improvement for this case.
 I also tried changing the burst checking interval; that didn't clearly
 help either.

 If I totally give up runnable load in periodic balancing, the performance
 can recover 60% of the loss.

 I will try to optimize wake up balancing in weekend.

>>>
>>> (btw, the time for runnable avg to accumulate to 100%, needs 345ms; to
>>> 50% needs 32 ms)
>>>
>>> I have tried some tuning in both wake up balancing and regular
>>> balancing. Yes, when using instant load weight (without runnable avg
>>> engage), both in waking up, and regular balance, the performance recovered.
>>>
>>> But with per_cpu nr_running tracking, it's hard to find an elegant way to
>>> detect the burst, whether in wake-up or in regular balance.
>>> In waking up, the whole sd_llc domain cpus are candidates, so just
>>> checking this_cpu is not enough.
>>> In regular balance, this_cpu is the migration destination cpu, checking
>>> if the burst on the cpu is not useful. Instead, we need to check whole
>>> domains' increased task number.
>>>
>>> So, guess 2 solutions for this issue.
>>> 1, for quick waking up, we need use instant load(same as current
>>> balancing) to do balance; and for regular balance, we can record both
>>> instant load and runnable load data for whole domain, then decide which
>>> one to use according to task number increasing in the domain after
>>> tracking done the whole domain.
>>>
>>> 2, we can keep current instant load balancing as performance balance
>>> policy, and using runnable load balancing in power friend policy.
>>> Since, none of us find performance benefit with runnable load balancing
>>> on benchmark hackbench/kbuild/aim9/tbench/specjbb etc.
>>> I prefer the 2nd.
>>
>> 3, On the other hand, considering the aim9 testing scenario is rare in
>> real life (prepare thousands of tasks and then wake them up at the same
>> time), and the runnable load avg includes useful running-history info,
>> the 5~7% aim9 performance drop is not unacceptable.
>> (kbuild/hackbench/tbench/specjbb show no clear performance change)
>>
>> So we can let this drop be with a reminder in code. Any comments?
> 
> Hm.  Burst of thousands of tasks may be rare and perhaps even silly, but
> what about few task bursts?   History is useless for bursts, they live
> or die now: modest gaggle of worker threads (NR_CPUS) for say video
> encoding job wake in parallel, each is handed a chunk of data to chew up
> in parallel.  Double scheduler latency of one worker (stack workers
> because individuals don't historically fill a cpu), you double latency
> for the entire job every time.
> 
> I think 2 is mandatory, keep both, and user picks his poison.
> 
> If you want max burst performance, you care about the here and now
> reality the burst is waking into.  If you're running a google freight
> train farm otoh, you may want some hysteresis so trains don't over-rev
> the electric meter on every microscopic spike.  Both policies make
> sense, but you can't have both performance profiles with either metric,
> so choosing one seems doomed to failure.
> 

Thanks for your suggestions and example, Mike!
Sorry, I can't quite understand your last words here: what is your
detailed concern about 'both performance profiles with either metric'?
Could you give your preferred solution?

> Case in point: tick skew.  It was removed because synchronized ticking
> saves power.. and then promptly returned under user control because the
> power saving gain also inflicted serious latency pain.
> 
> -Mike
> 


-- 
Thanks 

[PATCH v5 45/45] Documentation/cpu-hotplug: Remove references to stop_machine()

2013-01-21 Thread Srivatsa S. Bhat
Since stop_machine() is no longer used in the CPU offline path, we cannot
disable CPU hotplug using preempt_disable()/local_irq_disable() etc. We
need to use the newly introduced get/put_online_cpus_atomic() APIs.
Reflect this in the documentation.

Cc: Rob Landley 
Cc: linux-...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 Documentation/cpu-hotplug.txt |   17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
index 9f40135..7f907ec 100644
--- a/Documentation/cpu-hotplug.txt
+++ b/Documentation/cpu-hotplug.txt
@@ -113,13 +113,15 @@ Never use anything other than cpumask_t to represent 
bitmap of CPUs.
#include 
get_online_cpus() and put_online_cpus():
 
-The above calls are used to inhibit cpu hotplug operations. While the
+The above calls are used to inhibit cpu hotplug operations, when invoked from
+non-atomic context (because the above functions can sleep). While the
 cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
-If you merely need to avoid cpus going away, you could also use
-preempt_disable() and preempt_enable() for those sections.
-Just remember the critical section cannot call any
-function that can sleep or schedule this process away. The preempt_disable()
-will work as long as stop_machine_run() is used to take a cpu down.
+
+However, if you are executing in atomic context (ie., you can't afford to
+sleep), and you merely need to avoid cpus going offline, you can use
+get_online_cpus_atomic() and put_online_cpus_atomic() for those sections.
+Just remember the critical section cannot call any function that can sleep or
+schedule this process away.
 
 CPU Hotplug - Frequently Asked Questions.
 
@@ -360,6 +362,9 @@ A: There are two ways.  If your code can be run in 
interrupt context, use
return err;
}
 
+   If my_func_on_cpu() itself cannot block, use get/put_online_cpus_atomic()
+   instead of get/put_online_cpus() to prevent CPUs from going offline.
+
 Q: How do we determine how many CPUs are available for hotplug.
 A: There is no clear spec defined way from ACPI that can give us that
information today. Based on some input from Natalie of Unisys,



Re: [PATCH 33/33] ASoC: Convert to devm_ioremap_resource()

2013-01-21 Thread Mark Brown
On Mon, Jan 21, 2013 at 11:09:26AM +0100, Thierry Reding wrote:
> Convert all uses of devm_request_and_ioremap() to the newly introduced
> devm_ioremap_resource() which provides more consistent error handling.

Applied, thanks.
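
For reference, the more consistent error handling means call sites can do
something like this (devm_ioremap_resource() returns an ERR_PTR() value
instead of NULL on failure):

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);	/* propagate a real error code */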




[PATCH v5 44/45] CPU hotplug, stop_machine: Decouple CPU hotplug from stop_machine() in Kconfig

2013-01-21 Thread Srivatsa S. Bhat
... and also cleanup a comment that refers to CPU hotplug being dependent on
stop_machine().

Cc: David Howells 
Signed-off-by: Srivatsa S. Bhat 
---

 include/linux/stop_machine.h |2 +-
 init/Kconfig |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
index 3b5e910..ce2d3c4 100644
--- a/include/linux/stop_machine.h
+++ b/include/linux/stop_machine.h
@@ -120,7 +120,7 @@ int stop_machine(int (*fn)(void *), void *data, const 
struct cpumask *cpus);
  * @cpus: the cpus to run the @fn() on (NULL = any online cpu)
  *
  * Description: This is a special version of the above, which assumes cpus
- * won't come or go while it's being called.  Used by hotplug cpu.
+ * won't come or go while it's being called.
  */
 int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus);
 
diff --git a/init/Kconfig b/init/Kconfig
index be8b7f5..048a0c5 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1711,7 +1711,7 @@ config INIT_ALL_POSSIBLE
 config STOP_MACHINE
bool
default y
-   depends on (SMP && MODULE_UNLOAD) || HOTPLUG_CPU
+   depends on (SMP && MODULE_UNLOAD)
help
  Need stop_machine() primitive.
 



[PATCH v5 43/45] cpu: No more __stop_machine() in _cpu_down()

2013-01-21 Thread Srivatsa S. Bhat
From: Paul E. McKenney 

The _cpu_down() function invoked as part of the CPU-hotplug offlining
process currently invokes __stop_machine(), which is slow and inflicts
substantial real-time latencies on the entire system.  This patch
substitutes stop_one_cpu() for __stop_machine() in order to improve
both performance and real-time latency.

There were a number of uses of preempt_disable() or local_irq_disable()
that were intended to block CPU-hotplug offlining. These were fixed by
using get/put_online_cpus_atomic(), the new synchronization primitive
that prevents CPU offline while in atomic context.

Signed-off-by: Paul E. McKenney 
Signed-off-by: Paul E. McKenney 
[ srivatsa.b...@linux.vnet.ibm.com: Refer to the new sync primitives for
  readers (in the changelog); s/stop_cpus/stop_one_cpu and fix comment
  referring to stop_machine in the code]
Signed-off-by: Srivatsa S. Bhat 
---

 kernel/cpu.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 1c84138..7a51fb6 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -337,7 +337,7 @@ static int __ref _cpu_down(unsigned int cpu, int 
tasks_frozen)
}
smpboot_park_threads(cpu);
 
-   err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
+   err = stop_one_cpu(cpu, take_cpu_down, &tcd_param);
if (err) {
/* CPU didn't die: tell everyone.  Can't complain. */
smpboot_unpark_threads(cpu);
@@ -349,7 +349,7 @@ static int __ref _cpu_down(unsigned int cpu, int 
tasks_frozen)
/*
 * The migration_call() CPU_DYING callback will have removed all
 * runnable tasks from the cpu, there's only the idle task left now
-* that the migration thread is done doing the stop_machine thing.
+* that the migration thread is done doing the stop_one_cpu() thing.
 *
 * Wait for the stop thread to go away.
 */



[PATCH v5 42/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Chris Metcalf 
Signed-off-by: Srivatsa S. Bhat 
---

 arch/tile/kernel/smp.c |4 
 1 file changed, 4 insertions(+)

diff --git a/arch/tile/kernel/smp.c b/arch/tile/kernel/smp.c
index cbc73a8..fb30624 100644
--- a/arch/tile/kernel/smp.c
+++ b/arch/tile/kernel/smp.c
@@ -15,6 +15,7 @@
  */
 
 #include 
+#include <linux/cpu.h>
 #include 
 #include 
 #include 
@@ -82,9 +83,12 @@ void send_IPI_many(const struct cpumask *mask, int tag)
 void send_IPI_allbutself(int tag)
 {
struct cpumask mask;
+
+   get_online_cpus_atomic();
cpumask_copy(&mask, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &mask);
send_IPI_many(&mask, tag);
+   put_online_cpus_atomic();
 }
 
 /*



[PATCH v5 40/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Paul Mundt 
Cc: linux...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/sh/kernel/smp.c |   12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 2062aa8..232fabe 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
 
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
}
local_flush_tlb_mm(mm);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
struct mm_struct *mm = vma->vm_mm;
 
-   preempt_disable();
+   get_online_cpus_atomic();
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
cpu_context(i, mm) = 0;
}
local_flush_tlb_range(vma, start, end);
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
(current->mm != vma->vm_mm)) {
struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned 
long page)
cpu_context(i, vma->vm_mm) = 0;
}
local_flush_tlb_page(vma, page);
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)



[PATCH v5 41/45] sparc: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: "David S. Miller" 
Cc: Sam Ravnborg 
Cc: sparcli...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/sparc/kernel/leon_smp.c  |2 ++
 arch/sparc/kernel/smp_64.c|9 +
 arch/sparc/kernel/sun4d_smp.c |2 ++
 arch/sparc/kernel/sun4m_smp.c |3 +++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index 0f3fb6d..441d3ac 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -420,6 +420,7 @@ static void leon_cross_call(smpfunc_t func, cpumask_t mask, 
unsigned long arg1,
unsigned long flags;
 
spin_lock_irqsave(&cross_call_lock, flags);
+   get_online_cpus_atomic();
 
{
/* If you make changes here, make sure gcc generates 
proper code... */
@@ -476,6 +477,7 @@ static void leon_cross_call(smpfunc_t func, cpumask_t mask, 
unsigned long arg1,
} while (++i <= high);
}
 
+   put_online_cpus_atomic();
spin_unlock_irqrestore(&cross_call_lock, flags);
}
 }
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 537eb66..e1d7300 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -894,7 +894,8 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
atomic_inc(&dcpage_flushes);
 #endif
 
-   this_cpu = get_cpu();
+   get_online_cpus_atomic();
+   this_cpu = smp_processor_id();
 
if (cpu == this_cpu) {
__local_flush_dcache_page(page);
@@ -920,7 +921,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
}
}
 
-   put_cpu();
+   put_online_cpus_atomic();
 }
 
 void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
@@ -931,7 +932,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct 
page *page)
if (tlb_type == hypervisor)
return;
 
-   preempt_disable();
+   get_online_cpus_atomic();
 
 #ifdef CONFIG_DEBUG_DCFLUSH
atomic_inc(&dcpage_flushes);
@@ -956,7 +957,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct 
page *page)
}
__local_flush_dcache_page(page);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs 
*regs)
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index ddaea31..1fa7ff2 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -300,6 +300,7 @@ static void sun4d_cross_call(smpfunc_t func, cpumask_t 
mask, unsigned long arg1,
unsigned long flags;
 
spin_lock_irqsave(&cross_call_lock, flags);
+   get_online_cpus_atomic();
 
{
/*
@@ -356,6 +357,7 @@ static void sun4d_cross_call(smpfunc_t func, cpumask_t 
mask, unsigned long arg1,
} while (++i <= high);
}
 
+   put_online_cpus_atomic();
spin_unlock_irqrestore(&cross_call_lock, flags);
}
 }
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 128af73..5599548 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -192,6 +192,7 @@ static void sun4m_cross_call(smpfunc_t func, cpumask_t 
mask, unsigned long arg1,
unsigned long flags;
 
spin_lock_irqsave(&cross_call_lock, flags);
+   get_online_cpus_atomic();
 
/* Init function glue. */
ccall_info.func = func;
@@ -238,6 +239,8 @@ static void sun4m_cross_call(smpfunc_t func, cpumask_t 
mask, unsigned long arg1,
barrier();
} while (++i < ncpus);
}
+
+   put_online_cpus_atomic();
spin_unlock_irqrestore(&cross_call_lock, flags);
 }
 



[PATCH v5 39/45] powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: linuxppc-...@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/powerpc/mm/mmu_context_nohash.c |2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/mm/mmu_context_nohash.c 
b/arch/powerpc/mm/mmu_context_nohash.c
index e779642..29f58cf 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -196,6 +196,7 @@ void switch_mmu_context(struct mm_struct *prev, struct 
mm_struct *next)
 
/* No lockless fast path .. yet */
raw_spin_lock(&context_lock);
+   get_online_cpus_atomic();
 
pr_hard("[%d] activating context for mm @%p, active=%d, id=%d",
cpu, next, next->context.active, next->context.id);
@@ -279,6 +280,7 @@ void switch_mmu_context(struct mm_struct *prev, struct 
mm_struct *next)
/* Flick the MMU and release lock */
pr_hardcont(" -> %d\n", id);
set_context(id, next->pgd);
+   put_online_cpus_atomic();
raw_spin_unlock(&context_lock);
 }
 



[PATCH v5 36/45] MIPS: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Ralf Baechle 
Cc: David Daney 
Cc: linux-m...@linux-mips.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/mips/kernel/cevt-smtc.c |8 
 arch/mips/kernel/smp.c   |   16 
 arch/mips/kernel/smtc.c  |3 +++
 arch/mips/mm/c-octeon.c  |4 ++--
 4 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kernel/cevt-smtc.c b/arch/mips/kernel/cevt-smtc.c
index 2e72d30..6fb311b 100644
--- a/arch/mips/kernel/cevt-smtc.c
+++ b/arch/mips/kernel/cevt-smtc.c
@@ -11,6 +11,7 @@
 #include 
 #include 
 #include 
+#include <linux/cpu.h>
 #include 
 
 #include 
@@ -84,6 +85,8 @@ static int mips_next_event(unsigned long delta,
unsigned long nextcomp = 0L;
int vpe = current_cpu_data.vpe_id;
int cpu = smp_processor_id();
+
+   get_online_cpus_atomic();
local_irq_save(flags);
mtflags = dmt();
 
@@ -164,6 +167,7 @@ static int mips_next_event(unsigned long delta,
}
emt(mtflags);
local_irq_restore(flags);
+   put_online_cpus_atomic();
return 0;
 }
 
@@ -180,6 +184,7 @@ void smtc_distribute_timer(int vpe)
 
 repeat:
nextstamp = 0L;
+   get_online_cpus_atomic();
for_each_online_cpu(cpu) {
/*
 * Find virtual CPUs within the current VPE who have
@@ -221,6 +226,9 @@ repeat:
 
}
}
+
+   put_online_cpus_atomic();
+
/* Reprogram for interrupt at next soonest timestamp for VPE */
if (ISVALID(nextstamp)) {
write_c0_compare(nextstamp);
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 66bf4e2..3828afa 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -248,12 +248,12 @@ static inline void smp_on_other_tlbs(void (*func) (void 
*info), void *info)
 
 static inline void smp_on_each_tlb(void (*func) (void *info), void *info)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
 
smp_on_other_tlbs(func, info);
func(info);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 /*
@@ -271,7 +271,7 @@ static inline void smp_on_each_tlb(void (*func) (void 
*info), void *info)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
 
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
smp_on_other_tlbs(flush_tlb_mm_ipi, mm);
@@ -285,7 +285,7 @@ void flush_tlb_mm(struct mm_struct *mm)
}
local_flush_tlb_mm(mm);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -305,7 +305,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned 
long start, unsigned l
 {
struct mm_struct *mm = vma->vm_mm;
 
-   preempt_disable();
+   get_online_cpus_atomic();
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
struct flush_tlb_data fd = {
.vma = vma,
@@ -323,7 +323,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned 
long start, unsigned l
}
}
local_flush_tlb_range(vma, start, end);
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -352,7 +352,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
if ((atomic_read(&vma->vm_mm->mm_users) != 1) || (current->mm != 
vma->vm_mm)) {
struct flush_tlb_data fd = {
.vma = vma,
@@ -369,7 +369,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned 
long page)
}
}
local_flush_tlb_page(vma, page);
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
index 1d47843..caf081e 100644
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include <linux/cpu.h>
 #include 
 #include 
 #include 
@@ -1143,6 +1144,7 @@ static irqreturn_t ipi_interrupt(int irq, void *dev_idm)
 * for the current TC, so we ought not to have to do it explicitly here.
 */
 
+   get_online_cpus_atomic();
for_each_online_cpu(cpu) {
if (cpu_data[cpu].vpe_id != my_vpe)
continue;
@@ -1179,6 +1181,7 @@ static irqreturn_t ipi_interrupt(int irq, void *dev_idm)
}
}
}
+   put_online_cpus_atomic();
 
return IRQ_HANDLED;
 }
diff --git a/arch/mips/mm/c-octeon.c 

[PATCH v5 37/45] mn10300: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: David Howells 
Cc: Koichi Yasutake 
Cc: linux-am33-l...@redhat.com
Signed-off-by: Srivatsa S. Bhat 
---

 arch/mn10300/kernel/smp.c   |2 ++
 arch/mn10300/mm/cache-smp.c |5 +
 arch/mn10300/mm/tlb-smp.c   |   15 +--
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
index 5d7e152..9dfa172 100644
--- a/arch/mn10300/kernel/smp.c
+++ b/arch/mn10300/kernel/smp.c
@@ -349,9 +349,11 @@ void send_IPI_allbutself(int irq)
 {
cpumask_t cpumask;
 
+   get_online_cpus_atomic();
cpumask_copy(&cpumask, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &cpumask);
send_IPI_mask(&cpumask, irq);
+   put_online_cpus_atomic();
 }
 
 void arch_send_call_function_ipi_mask(const struct cpumask *mask)
diff --git a/arch/mn10300/mm/cache-smp.c b/arch/mn10300/mm/cache-smp.c
index 2d23b9e..47ca1c9 100644
--- a/arch/mn10300/mm/cache-smp.c
+++ b/arch/mn10300/mm/cache-smp.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include <linux/cpu.h>
 #include 
 #include 
 #include 
@@ -94,6 +95,8 @@ void smp_cache_call(unsigned long opr_mask,
smp_cache_mask = opr_mask;
smp_cache_start = start;
smp_cache_end = end;
+
+   get_online_cpus_atomic();
cpumask_copy(&smp_cache_ipi_map, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &smp_cache_ipi_map);
 
@@ -102,4 +105,6 @@ void smp_cache_call(unsigned long opr_mask,
while (!cpumask_empty(&smp_cache_ipi_map))
/* nothing. lockup detection does not belong here */
mb();
+
+   put_online_cpus_atomic();
 }
diff --git a/arch/mn10300/mm/tlb-smp.c b/arch/mn10300/mm/tlb-smp.c
index 3e57faf..d47304d 100644
--- a/arch/mn10300/mm/tlb-smp.c
+++ b/arch/mn10300/mm/tlb-smp.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include <linux/cpu.h>
 #include 
 #include 
 #include 
@@ -105,6 +106,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct 
mm_struct *mm,
BUG_ON(cpumask_empty(&cpumask));
BUG_ON(cpumask_test_cpu(smp_processor_id(), &cpumask));
 
+   get_online_cpus_atomic();
cpumask_and(&tmp, &cpumask, cpu_online_mask);
BUG_ON(!cpumask_equal(&cpumask, &tmp));
 
@@ -134,6 +136,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct 
mm_struct *mm,
flush_mm = NULL;
flush_va = 0;
spin_unlock(&tlbstate_lock);
+   put_online_cpus_atomic();
 }
 
 /**
@@ -144,7 +147,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
cpumask_t cpu_mask;
 
-   preempt_disable();
+   get_online_cpus_atomic();
cpumask_copy(&cpu_mask, mm_cpumask(mm));
cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -152,7 +155,7 @@ void flush_tlb_mm(struct mm_struct *mm)
if (!cpumask_empty(&cpu_mask))
flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 /**
@@ -163,7 +166,7 @@ void flush_tlb_current_task(void)
struct mm_struct *mm = current->mm;
cpumask_t cpu_mask;
 
-   preempt_disable();
+   get_online_cpus_atomic();
cpumask_copy(&cpu_mask, mm_cpumask(mm));
cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -171,7 +174,7 @@ void flush_tlb_current_task(void)
if (!cpumask_empty(&cpu_mask))
flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 /**
@@ -184,7 +187,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned 
long va)
struct mm_struct *mm = vma->vm_mm;
cpumask_t cpu_mask;
 
-   preempt_disable();
+   get_online_cpus_atomic();
cpumask_copy(&cpu_mask, mm_cpumask(mm));
cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -192,7 +195,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned 
long va)
if (!cpumask_empty(&cpu_mask))
flush_tlb_others(cpu_mask, mm, va);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 /**



[PATCH v5 35/45] m32r: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Hirokazu Takata 
Cc: linux-m...@ml.linux-m32r.org
Cc: linux-m32r...@ml.linux-m32r.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/m32r/kernel/smp.c |   12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/m32r/kernel/smp.c b/arch/m32r/kernel/smp.c
index ce7aea3..0dad4d7 100644
--- a/arch/m32r/kernel/smp.c
+++ b/arch/m32r/kernel/smp.c
@@ -151,7 +151,7 @@ void smp_flush_cache_all(void)
cpumask_t cpumask;
unsigned long *mask;
 
-   preempt_disable();
+   get_online_cpus_atomic();
cpumask_copy(&cpumask, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &cpumask);
spin_lock(&flushcache_lock);
@@ -162,7 +162,7 @@ void smp_flush_cache_all(void)
while (flushcache_cpumask)
mb();
spin_unlock(&flushcache_lock);
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 void smp_flush_cache_all_interrupt(void)
@@ -250,7 +250,7 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
unsigned long *mmc;
unsigned long flags;
 
-   preempt_disable();
+   get_online_cpus_atomic();
cpu_id = smp_processor_id();
mmc = &mm->context[cpu_id];
cpumask_copy(&cpu_mask, mm_cpumask(mm));
@@ -268,7 +268,7 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
if (!cpumask_empty(&cpu_mask))
flush_tlb_others(cpu_mask, mm, NULL, FLUSH_ALL);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 
 /*==*
@@ -715,10 +715,12 @@ static void send_IPI_allbutself(int ipi_num, int try)
 {
cpumask_t cpumask;
 
+   get_online_cpus_atomic();
cpumask_copy(&cpumask, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &cpumask);
 
send_IPI_mask(&cpumask, ipi_num, try);
+   put_online_cpus_atomic();
 }
 
 /*==*
@@ -750,6 +752,7 @@ static void send_IPI_mask(const struct cpumask *cpumask, 
int ipi_num, int try)
if (num_cpus <= 1)  /* NO MP */
return;
 
+   get_online_cpus_atomic();
cpumask_and(&tmp, cpumask, cpu_online_mask);
BUG_ON(!cpumask_equal(cpumask, &tmp));
 
@@ -760,6 +763,7 @@ static void send_IPI_mask(const struct cpumask *cpumask, 
int ipi_num, int try)
}
 
send_IPI_mask_phys(&physid_mask, ipi_num, try);
+   put_online_cpus_atomic();
 }
 
 /*==*



[PATCH v5 34/45] ia64: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline
while in atomic context.

Cc: Tony Luck 
Cc: Fenghua Yu 
Cc: linux-i...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/ia64/kernel/irq_ia64.c |   13 +
 arch/ia64/kernel/perfmon.c  |6 ++
 arch/ia64/kernel/smp.c  |   23 ---
 arch/ia64/mm/tlb.c  |6 --
 4 files changed, 39 insertions(+), 9 deletions(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 1034884..d0b4478 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -31,6 +31,7 @@
 #include 
 #include 
 #include 
+#include <linux/cpu.h>
 
 #include 
 #include 
@@ -190,9 +191,11 @@ static void clear_irq_vector(int irq)
 {
unsigned long flags;
 
+   get_online_cpus_atomic();
spin_lock_irqsave(_lock, flags);
__clear_irq_vector(irq);
spin_unlock_irqrestore(_lock, flags);
+   put_online_cpus_atomic();
 }
 
 int
@@ -204,6 +207,7 @@ ia64_native_assign_irq_vector (int irq)
 
vector = -ENOSPC;
 
+   get_online_cpus_atomic();
spin_lock_irqsave(&vector_lock, flags);
for_each_online_cpu(cpu) {
domain = vector_allocation_domain(cpu);
@@ -218,6 +222,7 @@ ia64_native_assign_irq_vector (int irq)
BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
spin_unlock_irqrestore(&vector_lock, flags);
+   put_online_cpus_atomic();
return vector;
 }
 
@@ -302,9 +307,11 @@ int irq_prepare_move(int irq, int cpu)
unsigned long flags;
int ret;
 
+   get_online_cpus_atomic();
spin_lock_irqsave(&vector_lock, flags);
ret = __irq_prepare_move(irq, cpu);
spin_unlock_irqrestore(&vector_lock, flags);
+   put_online_cpus_atomic();
return ret;
 }
 
@@ -320,11 +327,13 @@ void irq_complete_move(unsigned irq)
if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
return;
 
+   get_online_cpus_atomic();
cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
cfg->move_cleanup_count = cpus_weight(cleanup_mask);
for_each_cpu_mask(i, cleanup_mask)
platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
cfg->move_in_progress = 0;
+   put_online_cpus_atomic();
 }
 
 static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
@@ -409,6 +418,8 @@ int create_irq(void)
cpumask_t domain = CPU_MASK_NONE;
 
irq = vector = -ENOSPC;
+
+   get_online_cpus_atomic();
	spin_lock_irqsave(&vector_lock, flags);
for_each_online_cpu(cpu) {
domain = vector_allocation_domain(cpu);
@@ -424,6 +435,8 @@ int create_irq(void)
BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
	spin_unlock_irqrestore(&vector_lock, flags);
+   put_online_cpus_atomic();
+
if (irq >= 0)
dynamic_irq_init(irq);
return irq;
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index ea39eba..6c6a029 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -6485,6 +6486,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t 
*hdl)
}
 
/* reserve our session */
+   get_online_cpus_atomic();
for_each_online_cpu(reserve_cpu) {
ret = pfm_reserve_session(NULL, 1, reserve_cpu);
if (ret) goto cleanup_reserve;
@@ -6500,6 +6502,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t 
*hdl)
/* officially change to the alternate interrupt handler */
pfm_alt_intr_handler = hdl;
 
+   put_online_cpus_atomic();
	spin_unlock(&pfm_alt_install_check);
 
return 0;
@@ -6512,6 +6515,7 @@ cleanup_reserve:
pfm_unreserve_session(NULL, 1, i);
}
 
+   put_online_cpus_atomic();
	spin_unlock(&pfm_alt_install_check);
 
return ret;
@@ -6536,6 +6540,7 @@ pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 
pfm_alt_intr_handler = NULL;
 
+   get_online_cpus_atomic();
ret = on_each_cpu(pfm_alt_restore_pmu_state, NULL, 1);
if (ret) {
DPRINT(("on_each_cpu() failed: %d\n", ret));
@@ -6545,6 +6550,7 @@ pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
pfm_unreserve_session(NULL, 1, i);
}
 
+   put_online_cpus_atomic();
	spin_unlock(&pfm_alt_install_check);
 
return 0;
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 9fcd4e6..d9a4636 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -154,12 +155,15 

[PATCH v5 33/45] hexagon/smp: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Richard Kuo 
Cc: linux-hexa...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/hexagon/kernel/smp.c |5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index 8e095df..ec87de9 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -112,6 +112,7 @@ void send_ipi(const struct cpumask *cpumask, enum 
ipi_message_type msg)
unsigned long cpu;
unsigned long retval;
 
+   get_online_cpus_atomic();
local_irq_save(flags);
 
for_each_cpu(cpu, cpumask) {
@@ -128,6 +129,7 @@ void send_ipi(const struct cpumask *cpumask, enum 
ipi_message_type msg)
}
 
local_irq_restore(flags);
+   put_online_cpus_atomic();
 }
 
 static struct irqaction ipi_intdesc = {
@@ -241,9 +243,12 @@ void smp_send_reschedule(int cpu)
 void smp_send_stop(void)
 {
struct cpumask targets;
+
+   get_online_cpus_atomic();
	cpumask_copy(&targets, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &targets);
	send_ipi(&targets, IPI_CPU_STOP);
+   put_online_cpus_atomic();
 }
 
 void arch_send_call_function_single_ipi(int cpu)



[PATCH v5 32/45] cris/smp: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Mikael Starvik 
Cc: Jesper Nilsson 
Cc: linux-cris-ker...@axis.com
Signed-off-by: Srivatsa S. Bhat 
---

 arch/cris/arch-v32/kernel/smp.c |8 
 1 file changed, 8 insertions(+)

diff --git a/arch/cris/arch-v32/kernel/smp.c b/arch/cris/arch-v32/kernel/smp.c
index 04a16ed..644b358 100644
--- a/arch/cris/arch-v32/kernel/smp.c
+++ b/arch/cris/arch-v32/kernel/smp.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -208,9 +209,12 @@ int __cpuinit __cpu_up(unsigned int cpu, struct 
task_struct *tidle)
 void smp_send_reschedule(int cpu)
 {
cpumask_t cpu_mask;
+
+   get_online_cpus_atomic();
	cpumask_clear(&cpu_mask);
	cpumask_set_cpu(cpu, &cpu_mask);
send_ipi(IPI_SCHEDULE, 0, cpu_mask);
+   put_online_cpus_atomic();
 }
 
 /* TLB flushing
@@ -224,6 +228,7 @@ void flush_tlb_common(struct mm_struct* mm, struct 
vm_area_struct* vma, unsigned
unsigned long flags;
cpumask_t cpu_mask;
 
+   get_online_cpus_atomic();
	spin_lock_irqsave(&tlbstate_lock, flags);
cpu_mask = (mm == FLUSH_ALL ? cpu_all_mask : *mm_cpumask(mm));
	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
@@ -232,6 +237,7 @@ void flush_tlb_common(struct mm_struct* mm, struct 
vm_area_struct* vma, unsigned
flush_addr = addr;
send_ipi(IPI_FLUSH_TLB, 1, cpu_mask);
	spin_unlock_irqrestore(&tlbstate_lock, flags);
+   put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
@@ -312,6 +318,7 @@ int smp_call_function(void (*func)(void *info), void *info, 
int wait)
struct call_data_struct data;
int ret;
 
+   get_online_cpus_atomic();
	cpumask_setall(&cpu_mask);
	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -325,6 +332,7 @@ int smp_call_function(void (*func)(void *info), void *info, 
int wait)
	call_data = &data;
	ret = send_ipi(IPI_CALL, wait, cpu_mask);
	spin_unlock(&call_lock);
+   put_online_cpus_atomic();
 
return ret;
 }



[PATCH v5 31/45] blackfin/smp: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Mike Frysinger 
Cc: Bob Liu 
Cc: Steven Miao 
Cc: uclinux-dist-de...@blackfin.uclinux.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/blackfin/mach-common/smp.c |6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/blackfin/mach-common/smp.c b/arch/blackfin/mach-common/smp.c
index bb61ae4..6cc6d7a 100644
--- a/arch/blackfin/mach-common/smp.c
+++ b/arch/blackfin/mach-common/smp.c
@@ -194,6 +194,7 @@ void send_ipi(const struct cpumask *cpumask, enum 
ipi_message_type msg)
struct ipi_data *bfin_ipi_data;
unsigned long flags;
 
+   get_online_cpus_atomic();
local_irq_save(flags);
smp_mb();
for_each_cpu(cpu, cpumask) {
@@ -205,6 +206,7 @@ void send_ipi(const struct cpumask *cpumask, enum 
ipi_message_type msg)
}
 
local_irq_restore(flags);
+   put_online_cpus_atomic();
 }
 
 void arch_send_call_function_single_ipi(int cpu)
@@ -238,13 +240,13 @@ void smp_send_stop(void)
 {
cpumask_t callmap;
 
-   preempt_disable();
+   get_online_cpus_atomic();
	cpumask_copy(&callmap, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &callmap);
	if (!cpumask_empty(&callmap))
		send_ipi(&callmap, BFIN_IPI_CPU_STOP);
 
-   preempt_enable();
+   put_online_cpus_atomic();
 
return;
 }



[PATCH v5 30/45] alpha/smp: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Also, remove the non-ASCII character present in this file!

Cc: linux-al...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/alpha/kernel/smp.c |   19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 9603bc2..9213d5d 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -498,7 +498,6 @@ smp_cpus_done(unsigned int max_cpus)
   ((bogosum + 2500) / (5000/HZ)) % 100);
 }
 
-
 void
 smp_percpu_timer_interrupt(struct pt_regs *regs)
 {
@@ -682,7 +681,7 @@ ipi_flush_tlb_mm(void *x)
 void
 flush_tlb_mm(struct mm_struct *mm)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
 
if (mm == current->active_mm) {
flush_tlb_current(mm);
@@ -694,7 +693,7 @@ flush_tlb_mm(struct mm_struct *mm)
if (mm->context[cpu])
mm->context[cpu] = 0;
}
-   preempt_enable();
+   put_online_cpus_atomic();
return;
}
}
@@ -703,7 +702,7 @@ flush_tlb_mm(struct mm_struct *mm)
printk(KERN_CRIT "flush_tlb_mm: timed out\n");
}
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(flush_tlb_mm);
 
@@ -731,7 +730,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long 
addr)
struct flush_tlb_page_struct data;
struct mm_struct *mm = vma->vm_mm;
 
-   preempt_disable();
+   get_online_cpus_atomic();
 
if (mm == current->active_mm) {
flush_tlb_current_page(mm, vma, addr);
@@ -743,7 +742,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long 
addr)
if (mm->context[cpu])
mm->context[cpu] = 0;
}
-   preempt_enable();
+   put_online_cpus_atomic();
return;
}
}
@@ -756,7 +755,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long 
addr)
printk(KERN_CRIT "flush_tlb_page: timed out\n");
}
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(flush_tlb_page);
 
@@ -787,7 +786,7 @@ flush_icache_user_range(struct vm_area_struct *vma, struct 
page *page,
if ((vma->vm_flags & VM_EXEC) == 0)
return;
 
-   preempt_disable();
+   get_online_cpus_atomic();
 
if (mm == current->active_mm) {
__load_new_mm_context(mm);
@@ -799,7 +798,7 @@ flush_icache_user_range(struct vm_area_struct *vma, struct 
page *page,
if (mm->context[cpu])
mm->context[cpu] = 0;
}
-   preempt_enable();
+   put_online_cpus_atomic();
return;
}
}
@@ -808,5 +807,5 @@ flush_icache_user_range(struct vm_area_struct *vma, struct 
page *page,
printk(KERN_CRIT "flush_icache_page: timed out\n");
}
 
-   preempt_enable();
+   put_online_cpus_atomic();
 }



[PATCH v5 29/45] x86/xen: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Konrad Rzeszutek Wilk 
Cc: Jeremy Fitzhardinge 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: xen-de...@lists.xensource.com
Cc: virtualizat...@lists.linux-foundation.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/x86/xen/mmu.c |   11 +--
 arch/x86/xen/smp.c |9 +
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 01de35c..6a95a15 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -39,6 +39,7 @@
  * Jeremy Fitzhardinge , XenSource Inc, 2007
  */
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1163,9 +1164,13 @@ static void xen_drop_mm_ref(struct mm_struct *mm)
  */
 static void xen_exit_mmap(struct mm_struct *mm)
 {
-   get_cpu();  /* make sure we don't move around */
+   /*
+* Make sure we don't move around, and prevent CPUs from going
+* offline.
+*/
+   get_online_cpus_atomic();
xen_drop_mm_ref(mm);
-   put_cpu();
+   put_online_cpus_atomic();
 
	spin_lock(&mm->page_table_lock);
 
@@ -1371,6 +1376,7 @@ static void xen_flush_tlb_others(const struct cpumask 
*cpus,
args->op.arg2.vcpumask = to_cpumask(args->mask);
 
/* Remove us, and any offline CPUS. */
+   get_online_cpus_atomic();
cpumask_and(to_cpumask(args->mask), cpus, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), to_cpumask(args->mask));
 
@@ -1383,6 +1389,7 @@ static void xen_flush_tlb_others(const struct cpumask 
*cpus,
	MULTI_mmuext_op(mcs.mc, &args->op, 1, NULL, DOMID_SELF);
 
xen_mc_issue(PARAVIRT_LAZY_MMU);
+   put_online_cpus_atomic();
 }
 
 static unsigned long xen_read_cr3(void)
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 4f7d259..7d753ae 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -487,8 +488,10 @@ static void __xen_send_IPI_mask(const struct cpumask *mask,
 {
unsigned cpu;
 
+   get_online_cpus_atomic();
for_each_cpu_and(cpu, mask, cpu_online_mask)
xen_send_IPI_one(cpu, vector);
+   put_online_cpus_atomic();
 }
 
 static void xen_smp_send_call_function_ipi(const struct cpumask *mask)
@@ -551,8 +554,10 @@ void xen_send_IPI_all(int vector)
 {
int xen_vector = xen_map_vector(vector);
 
+   get_online_cpus_atomic();
if (xen_vector >= 0)
__xen_send_IPI_mask(cpu_online_mask, xen_vector);
+   put_online_cpus_atomic();
 }
 
 void xen_send_IPI_self(int vector)
@@ -572,20 +577,24 @@ void xen_send_IPI_mask_allbutself(const struct cpumask 
*mask,
if (!(num_online_cpus() > 1))
return;
 
+   get_online_cpus_atomic();
for_each_cpu_and(cpu, mask, cpu_online_mask) {
if (this_cpu == cpu)
continue;
 
xen_smp_send_call_function_single_ipi(cpu);
}
+   put_online_cpus_atomic();
 }
 
 void xen_send_IPI_allbutself(int vector)
 {
int xen_vector = xen_map_vector(vector);
 
+   get_online_cpus_atomic();
if (xen_vector >= 0)
xen_send_IPI_mask_allbutself(cpu_online_mask, xen_vector);
+   put_online_cpus_atomic();
 }
 
 static irqreturn_t xen_call_function_interrupt(int irq, void *dev_id)



[PATCH v5 26/45] perf/x86: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
The CPU_DYING notifier modifies the per-cpu pointer pmu->box, and this can
race with functions such as uncore_pmu_to_box() and uncore_pci_remove() when
we remove stop_machine() from the CPU offline path. So protect them using
get/put_online_cpus_atomic().
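
Roughly, the window being closed is of this shape (an illustrative
interleaving, not literal code):

	/*
	 * reader (e.g. uncore_pmu_to_box()):   CPU_DYING notifier:
	 *
	 * box = *per_cpu_ptr(pmu->box, cpu);
	 *                                       tears down / repoints
	 *                                       pmu->box for that cpu
	 * use(box);   <-- may be stale
	 */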

Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: Arnaldo Carvalho de Melo 
Signed-off-by: Srivatsa S. Bhat 
---

 arch/x86/kernel/cpu/perf_event_intel_uncore.c |5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c 
b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
index b43200d..6faae53 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_uncore.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
@@ -1,3 +1,4 @@
+#include 
 #include "perf_event_intel_uncore.h"
 
 static struct intel_uncore_type *empty_uncore[] = { NULL, };
@@ -1965,6 +1966,7 @@ uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu)
if (box)
return box;
 
+   get_online_cpus_atomic();
	raw_spin_lock(&uncore_box_lock);
	list_for_each_entry(box, &pmu->box_list, list) {
if (box->phys_id == topology_physical_package_id(cpu)) {
@@ -1974,6 +1976,7 @@ uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu)
}
}
	raw_spin_unlock(&uncore_box_lock);
+   put_online_cpus_atomic();
 
return *per_cpu_ptr(pmu->box, cpu);
 }
@@ -2556,6 +2559,7 @@ static void uncore_pci_remove(struct pci_dev *pdev)
if (WARN_ON_ONCE(phys_id != box->phys_id))
return;
 
+   get_online_cpus_atomic();
	raw_spin_lock(&uncore_box_lock);
	list_del(&box->list);
	raw_spin_unlock(&uncore_box_lock);
@@ -2569,6 +2573,7 @@ static void uncore_pci_remove(struct pci_dev *pdev)
 
	WARN_ON_ONCE(atomic_read(&box->refcnt) != 1);
kfree(box);
+   put_online_cpus_atomic();
 }
 
 static int uncore_pci_probe(struct pci_dev *pdev,



[PATCH v5 28/45] kvm/vmx: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context (here, in vmx_vcpu_load(), to prevent
CPUs from going offline while clearing a loaded vmcs).
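
For context, loaded_vmcs_clear() ends up doing a cross-CPU call to the
CPU on which the vmcs was last loaded, roughly (a sketch of the vmx code
of this era, not something this patch changes):

	smp_call_function_single(loaded_vmcs->cpu,
				 __loaded_vmcs_clear, loaded_vmcs, 1);

If loaded_vmcs->cpu could go offline between reading it and sending the
IPI, the call would be lost; hence the hotplug read-side protection.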

Reported-by: Michael Wang 
Debugged-by: Xiao Guangrong 
Cc: Marcelo Tosatti 
Cc: Gleb Natapov 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/x86/kvm/vmx.c |8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9120ae1..2886ff0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1557,10 +1557,14 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int 
cpu)
struct vcpu_vmx *vmx = to_vmx(vcpu);
u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
 
-   if (!vmm_exclusive)
+   if (!vmm_exclusive) {
kvm_cpu_vmxon(phys_addr);
-   else if (vmx->loaded_vmcs->cpu != cpu)
+   } else if (vmx->loaded_vmcs->cpu != cpu) {
+   /* Prevent any CPU from going offline */
+   get_online_cpus_atomic();
loaded_vmcs_clear(vmx->loaded_vmcs);
+   put_online_cpus_atomic();
+   }
 
if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;



[PATCH v5 27/45] KVM: Use get/put_online_cpus_atomic() to prevent CPU offline from atomic context

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Marcelo Tosatti 
Cc: Gleb Natapov 
Cc: k...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 virt/kvm/kvm_main.c |   10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1cd693a..47f9c30 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -174,7 +174,8 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned 
int req)
 
	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
 
-   me = get_cpu();
+   get_online_cpus_atomic();
+   me = smp_processor_id();
kvm_for_each_vcpu(i, vcpu, kvm) {
kvm_make_request(req, vcpu);
cpu = vcpu->cpu;
@@ -192,7 +193,7 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned 
int req)
smp_call_function_many(cpus, ack_flush, NULL, 1);
else
called = false;
-   put_cpu();
+   put_online_cpus_atomic();
free_cpumask_var(cpus);
return called;
 }
@@ -1621,11 +1622,12 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
++vcpu->stat.halt_wakeup;
}
 
-   me = get_cpu();
+   get_online_cpus_atomic();
+   me = smp_processor_id();
if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
if (kvm_arch_vcpu_should_kick(vcpu))
smp_send_reschedule(cpu);
-   put_cpu();
+   put_online_cpus_atomic();
 }
 #endif /* !CONFIG_S390 */
 



[PATCH V3 RESEND RFC 2/2] kvm: Handle yield_to failure return code for potential undercommit case

2013-01-21 Thread Raghavendra K T
From: Raghavendra K T 

yield_to() returns -ESRCH when the run queue length of both the source
and the target of yield_to() is one. When we see three successive
failures of yield_to(), we assume we are in a potential undercommit case
and abort from the PLE handler.
The assumption is backed by the low probability of a wrong decision
even for worst-case scenarios such as an average runqueue length
between 1 and 2.

More detail on the rationale behind using three tries:
if p is the probability of finding rq length one on a particular cpu,
and if we do n tries, then the probability of exiting the PLE handler is:

 p^(n+1) [ because we would have come across one source with rq length
 1 and n target cpu rqs with length 1 ]

so:

 num tries : probability of aborting ple handler (1.5x overcommit)
     1     : 1/4
     2     : 1/8
     3     : 1/16
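
Spelling the arithmetic out (a worked example; p = 1/2 is the implied
per-cpu probability of finding rq length one at 1.5x overcommit, not a
measured value):

 abort probability = p^(n+1)
 n = 1: (1/2)^2 = 1/4
 n = 2: (1/2)^3 = 1/8
 n = 3: (1/2)^4 = 1/16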

We can increase this probability with more tries, but the problem is
the overhead.
Also, if we have tried three times, that means we would have iterated
over 3 good eligible vcpus along with many non-eligible candidates. In
the worst case, if we iterate over all the vcpus, we hurt 1x performance
and overcommit performance takes a hit.

Note that we do not update the last boosted vcpu in failure cases.
Thanks to Avi for raising the question of aborting after the first
failure of yield_to.

Reviewed-by: Srikar Dronamraju 
Signed-off-by: Raghavendra K T 
Tested-by: Chegu Vinod 
---
 Note: Updated with the rationale for choosing three as the number of
 yield_to tries.

 virt/kvm/kvm_main.c |   26 --
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index be70035..053f494 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1639,6 +1639,7 @@ bool kvm_vcpu_yield_to(struct kvm_vcpu *target)
 {
struct pid *pid;
struct task_struct *task = NULL;
+   bool ret = false;
 
rcu_read_lock();
pid = rcu_dereference(target->pid);
@@ -1646,17 +1647,15 @@ bool kvm_vcpu_yield_to(struct kvm_vcpu *target)
task = get_pid_task(target->pid, PIDTYPE_PID);
rcu_read_unlock();
if (!task)
-   return false;
+   return ret;
if (task->flags & PF_VCPU) {
put_task_struct(task);
-   return false;
-   }
-   if (yield_to(task, 1)) {
-   put_task_struct(task);
-   return true;
+   return ret;
}
+   ret = yield_to(task, 1);
put_task_struct(task);
-   return false;
+
+   return ret;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
 
@@ -1697,12 +1696,14 @@ bool kvm_vcpu_eligible_for_directed_yield(struct 
kvm_vcpu *vcpu)
return eligible;
 }
 #endif
+
 void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
struct kvm *kvm = me->kvm;
struct kvm_vcpu *vcpu;
int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
int yielded = 0;
+   int try = 3;
int pass;
int i;
 
@@ -1714,7 +1715,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 * VCPU is holding the lock that we need and will release it.
 * We approximate round-robin by starting at the last boosted VCPU.
 */
-   for (pass = 0; pass < 2 && !yielded; pass++) {
+   for (pass = 0; pass < 2 && !yielded && try; pass++) {
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!pass && i <= last_boosted_vcpu) {
i = last_boosted_vcpu;
@@ -1727,10 +1728,15 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
continue;
if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
continue;
-   if (kvm_vcpu_yield_to(vcpu)) {
+
+   yielded = kvm_vcpu_yield_to(vcpu);
+   if (yielded > 0) {
kvm->last_boosted_vcpu = i;
-   yielded = 1;
break;
+   } else if (yielded < 0) {
+   try--;
+   if (!try)
+   break;
}
}
}



[PATCH v5 25/45] x86: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: Tony Luck 
Cc: Borislav Petkov 
Cc: Yinghai Lu 
Cc: Daniel J Blueman 
Cc: Steffen Persvold 
Cc: Joerg Roedel 
Cc: linux-e...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 arch/x86/include/asm/ipi.h   |5 +
 arch/x86/kernel/apic/apic_flat_64.c  |   10 ++
 arch/x86/kernel/apic/apic_numachip.c |5 +
 arch/x86/kernel/apic/es7000_32.c |5 +
 arch/x86/kernel/apic/io_apic.c   |7 +--
 arch/x86/kernel/apic/ipi.c   |   10 ++
 arch/x86/kernel/apic/x2apic_cluster.c|4 
 arch/x86/kernel/apic/x2apic_uv_x.c   |4 
 arch/x86/kernel/cpu/mcheck/therm_throt.c |4 ++--
 arch/x86/mm/tlb.c|   14 +++---
 10 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/ipi.h b/arch/x86/include/asm/ipi.h
index 615fa90..112249c 100644
--- a/arch/x86/include/asm/ipi.h
+++ b/arch/x86/include/asm/ipi.h
@@ -20,6 +20,7 @@
  * Subject to the GNU Public License, v.2
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -131,18 +132,22 @@ extern int no_broadcast;
 
 static inline void __default_local_send_IPI_allbutself(int vector)
 {
+   get_online_cpus_atomic();
if (no_broadcast || vector == NMI_VECTOR)
apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
else
__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector, 
apic->dest_logical);
+   put_online_cpus_atomic();
 }
 
 static inline void __default_local_send_IPI_all(int vector)
 {
+   get_online_cpus_atomic();
if (no_broadcast || vector == NMI_VECTOR)
apic->send_IPI_mask(cpu_online_mask, vector);
else
__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector, 
apic->dest_logical);
+   put_online_cpus_atomic();
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/apic/apic_flat_64.c 
b/arch/x86/kernel/apic/apic_flat_64.c
index 00c77cf..8207ade 100644
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -11,6 +11,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -92,6 +93,8 @@ static void flat_send_IPI_allbutself(int vector)
 #else
int hotplug = 0;
 #endif
+
+   get_online_cpus_atomic();
if (hotplug || vector == NMI_VECTOR) {
if (!cpumask_equal(cpu_online_mask, cpumask_of(cpu))) {
unsigned long mask = cpumask_bits(cpu_online_mask)[0];
@@ -105,16 +108,19 @@ static void flat_send_IPI_allbutself(int vector)
__default_send_IPI_shortcut(APIC_DEST_ALLBUT,
vector, apic->dest_logical);
}
+   put_online_cpus_atomic();
 }
 
 static void flat_send_IPI_all(int vector)
 {
+   get_online_cpus_atomic();
if (vector == NMI_VECTOR) {
flat_send_IPI_mask(cpu_online_mask, vector);
} else {
__default_send_IPI_shortcut(APIC_DEST_ALLINC,
vector, apic->dest_logical);
}
+   put_online_cpus_atomic();
 }
 
 static unsigned int flat_get_apic_id(unsigned long x)
@@ -255,12 +261,16 @@ static void physflat_send_IPI_mask_allbutself(const 
struct cpumask *cpumask,
 
 static void physflat_send_IPI_allbutself(int vector)
 {
+   get_online_cpus_atomic();
default_send_IPI_mask_allbutself_phys(cpu_online_mask, vector);
+   put_online_cpus_atomic();
 }
 
 static void physflat_send_IPI_all(int vector)
 {
+   get_online_cpus_atomic();
physflat_send_IPI_mask(cpu_online_mask, vector);
+   put_online_cpus_atomic();
 }
 
 static int physflat_probe(void)
diff --git a/arch/x86/kernel/apic/apic_numachip.c 
b/arch/x86/kernel/apic/apic_numachip.c
index 9c2aa89..7d19c1d 100644
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -131,15 +132,19 @@ static void numachip_send_IPI_allbutself(int vector)
unsigned int this_cpu = smp_processor_id();
unsigned int cpu;
 
+   get_online_cpus_atomic();
for_each_online_cpu(cpu) {
if (cpu != this_cpu)
numachip_send_IPI_one(cpu, vector);
}
+   put_online_cpus_atomic();
 }
 
 static void numachip_send_IPI_all(int vector)
 {
+   get_online_cpus_atomic();
numachip_send_IPI_mask(cpu_online_mask, vector);
+   put_online_cpus_atomic();
 }
 
 static void numachip_send_IPI_self(int vector)
diff --git 

[PATCH V3 RESEND RFC 1/2] sched: Bail out of yield_to when source and target runqueue has one task

2013-01-21 Thread Raghavendra K T
From: Peter Zijlstra 

In undercommitted scenarios, especially in large guests, the yield_to
overhead is significantly high. When the run queue length of both the
source and the target is one, take the opportunity to bail out and
return -ESRCH. This return condition can then be exploited to quickly
come out of the PLE handler.

(History: Raghavendra initially worked on breaking out of the kvm PLE
 handler upon seeing source runqueue length = 1, but it had to export
 the rq length. Peter came up with the elegant idea of returning -ESRCH
 from the scheduler core.)
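
For illustration, a caller can consume the new return contract like this
(a sketch only; the three-try budget mirrors what the follow-up patch in
this series does in the KVM PLE handler, and abort_scan is a
hypothetical label):

	int try = 3;	/* example retry budget */
	int yielded = yield_to(task, 1);

	if (yielded > 0) {
		/* boosted the target task; stop scanning */
	} else if (yielded < 0) {
		/* -ESRCH: both rqs had one task (undercommit hint) */
		if (--try == 0)
			goto abort_scan;	/* bail out early */
	}
	/* yielded == 0: boost failed; move on to the next candidate */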

Signed-off-by: Peter Zijlstra 
Raghavendra: added the check on the rq length of the target vcpu (thanks Avi).
Reviewed-by: Srikar Dronamraju 
Signed-off-by: Raghavendra K T 
Acked-by: Andrew Jones 
Tested-by: Chegu Vinod 
---

 kernel/sched/core.c |   25 +++--
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d8927f..fc219a5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4289,7 +4289,10 @@ EXPORT_SYMBOL(yield);
  * It's the caller's job to ensure that the target task struct
  * can't go away on us before we can do any checks.
  *
- * Returns true if we indeed boosted the target task.
+ * Returns:
+ * true (>0) if we indeed boosted the target task.
+ * false (0) if we failed to boost the target.
+ * -ESRCH if there's no task to yield to.
  */
 bool __sched yield_to(struct task_struct *p, bool preempt)
 {
@@ -4303,6 +4306,15 @@ bool __sched yield_to(struct task_struct *p, bool 
preempt)
 
 again:
p_rq = task_rq(p);
+   /*
+* If we're the only runnable task on the rq and target rq also
+* has only one task, there's absolutely no point in yielding.
+*/
+   if (rq->nr_running == 1 && p_rq->nr_running == 1) {
+   yielded = -ESRCH;
+   goto out_irq;
+   }
+
double_rq_lock(rq, p_rq);
while (task_rq(p) != p_rq) {
double_rq_unlock(rq, p_rq);
@@ -4310,13 +4322,13 @@ again:
}
 
if (!curr->sched_class->yield_to_task)
-   goto out;
+   goto out_unlock;
 
if (curr->sched_class != p->sched_class)
-   goto out;
+   goto out_unlock;
 
if (task_running(p_rq, p) || p->state)
-   goto out;
+   goto out_unlock;
 
yielded = curr->sched_class->yield_to_task(rq, p, preempt);
if (yielded) {
@@ -4329,11 +4341,12 @@ again:
resched_task(p_rq->curr);
}
 
-out:
+out_unlock:
double_rq_unlock(rq, p_rq);
+out_irq:
local_irq_restore(flags);
 
-   if (yielded)
+   if (yielded > 0)
schedule();
 
return yielded;



[PATCH v5 24/45] staging: octeon: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Greg Kroah-Hartman 
Cc: David Daney 
Signed-off-by: Srivatsa S. Bhat 
---

 drivers/staging/octeon/ethernet-rx.c |3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/staging/octeon/ethernet-rx.c 
b/drivers/staging/octeon/ethernet-rx.c
index 34afc16..8588b4d 100644
--- a/drivers/staging/octeon/ethernet-rx.c
+++ b/drivers/staging/octeon/ethernet-rx.c
@@ -36,6 +36,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #ifdef CONFIG_XFRM
@@ -97,6 +98,7 @@ static void cvm_oct_enable_one_cpu(void)
return;
 
/* ... if a CPU is available, Turn on NAPI polling for that CPU.  */
+   get_online_cpus_atomic();
for_each_online_cpu(cpu) {
if (!cpu_test_and_set(cpu, core_state.cpu_state)) {
v = smp_call_function_single(cpu, cvm_oct_enable_napi,
@@ -106,6 +108,7 @@ static void cvm_oct_enable_one_cpu(void)
break;
}
}
+   put_online_cpus_atomic();
 }
 
 static void cvm_oct_no_more_work(void)



[PATCH v5 23/45] [SCSI] fcoe: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Robert Love 
Cc: "James E.J. Bottomley" 
Cc: de...@open-fcoe.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 drivers/scsi/fcoe/fcoe.c |7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 666b7ac..c971a17 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1475,6 +1475,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct 
net_device *netdev,
 * was originated, otherwise select cpu using rx exchange id
 * or fcoe_select_cpu().
 */
+   get_online_cpus_atomic();
if (ntoh24(fh->fh_f_ctl) & FC_FC_EX_CTX)
cpu = ntohs(fh->fh_ox_id) & fc_cpu_mask;
else {
@@ -1484,8 +1485,10 @@ static int fcoe_rcv(struct sk_buff *skb, struct 
net_device *netdev,
cpu = ntohs(fh->fh_rx_id) & fc_cpu_mask;
}
 
-   if (cpu >= nr_cpu_ids)
+   if (cpu >= nr_cpu_ids) {
+   put_online_cpus_atomic();
goto err;
+   }
 
	fps = &per_cpu(fcoe_percpu, cpu);
	spin_lock(&fps->fcoe_rx_list.lock);
@@ -1505,6 +1508,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct 
net_device *netdev,
	spin_lock(&fps->fcoe_rx_list.lock);
if (!fps->thread) {
	spin_unlock(&fps->fcoe_rx_list.lock);
+   put_online_cpus_atomic();
goto err;
}
}
@@ -1526,6 +1530,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct 
net_device *netdev,
if (fps->thread->state == TASK_INTERRUPTIBLE)
wake_up_process(fps->thread);
	spin_unlock(&fps->fcoe_rx_list.lock);
+   put_online_cpus_atomic();
 
return 0;
 err:



[PATCH v5 21/45] crypto: pcrypt - Protect access to cpu_online_mask with get/put_online_cpus()

2013-01-21 Thread Srivatsa S. Bhat
The pcrypt_aead_init_tfm() function accesses the cpu_online_mask without
disabling CPU hotplug. And it looks like it can afford to sleep, so use
the get/put_online_cpus() APIs to protect against CPU hotplug.
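
For reference, the two flavors of the API are used like this (an
illustrative sketch, not code from this patch; do_work() is a made-up
helper):

	/* Sleepable context: get_online_cpus() may block. */
	get_online_cpus();
	for_each_online_cpu(cpu)
		do_work(cpu);
	put_online_cpus();

	/* Atomic context (IRQs off, spinlock held, ...). */
	get_online_cpus_atomic();
	do_work(smp_processor_id());
	put_online_cpus_atomic();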

Cc: Steffen Klassert 
Cc: Herbert Xu 
Cc: "David S. Miller" 
Cc: linux-cry...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 crypto/pcrypt.c |4 
 1 file changed, 4 insertions(+)

diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index b2c99dc..10f64e2 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -280,12 +280,16 @@ static int pcrypt_aead_init_tfm(struct crypto_tfm *tfm)
 
ictx->tfm_count++;
 
+   get_online_cpus();
+
cpu_index = ictx->tfm_count % cpumask_weight(cpu_online_mask);
 
ctx->cb_cpu = cpumask_first(cpu_online_mask);
for (cpu = 0; cpu < cpu_index; cpu++)
ctx->cb_cpu = cpumask_next(ctx->cb_cpu, cpu_online_mask);
 
+   put_online_cpus();
+
cipher = crypto_spawn_aead(crypto_instance_ctx(inst));
 
if (IS_ERR(cipher))



[PATCH v5 20/45] block: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Jens Axboe 
Signed-off-by: Srivatsa S. Bhat 
---

 block/blk-softirq.c |4 
 1 file changed, 4 insertions(+)

diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 467c8de..448f9a9 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -58,6 +58,8 @@ static void trigger_softirq(void *data)
  */
 static int raise_blk_irq(int cpu, struct request *rq)
 {
+   get_online_cpus_atomic();
+
if (cpu_online(cpu)) {
		struct call_single_data *data = &rq->csd;
 
@@ -66,9 +68,11 @@ static int raise_blk_irq(int cpu, struct request *rq)
data->flags = 0;
 
__smp_call_function_single(cpu, data, 0);
+   put_online_cpus_atomic();
return 0;
}
 
+   put_online_cpus_atomic();
return 1;
 }
 #else /* CONFIG_SMP && CONFIG_USE_GENERIC_SMP_HELPERS */



[PATCH v5 19/45] net: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: "David S. Miller" 
Cc: Eric Dumazet 
Cc: net...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat 
---

 net/core/dev.c |9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index f64e439..5421f96 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3089,7 +3089,7 @@ int netif_rx(struct sk_buff *skb)
		struct rps_dev_flow voidflow, *rflow = &voidflow;
int cpu;
 
-   preempt_disable();
+   get_online_cpus_atomic();
rcu_read_lock();
 
		cpu = get_rps_cpu(skb->dev, skb, &rflow);
@@ -3099,7 +3099,7 @@ int netif_rx(struct sk_buff *skb)
		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 
rcu_read_unlock();
-   preempt_enable();
+   put_online_cpus_atomic();
} else
 #endif
{
@@ -3498,6 +3498,7 @@ int netif_receive_skb(struct sk_buff *skb)
		struct rps_dev_flow voidflow, *rflow = &voidflow;
int cpu, ret;
 
+   get_online_cpus_atomic();
rcu_read_lock();
 
		cpu = get_rps_cpu(skb->dev, skb, &rflow);
@@ -3505,9 +3506,11 @@ int netif_receive_skb(struct sk_buff *skb)
if (cpu >= 0) {
			ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
rcu_read_unlock();
+   put_online_cpus_atomic();
return ret;
}
rcu_read_unlock();
+   put_online_cpus_atomic();
}
 #endif
return __netif_receive_skb(skb);
@@ -3887,6 +3890,7 @@ static void net_rps_action_and_irq_enable(struct 
softnet_data *sd)
local_irq_enable();
 
/* Send pending IPI's to kick RPS processing on remote cpus. */
+   get_online_cpus_atomic();
while (remsd) {
struct softnet_data *next = remsd->rps_ipi_next;
 
@@ -3895,6 +3899,7 @@ static void net_rps_action_and_irq_enable(struct 
softnet_data *sd)
				   &remsd->csd, 0);
remsd = next;
}
+   put_online_cpus_atomic();
} else
 #endif
local_irq_enable();



[PATCH v5 18/45] irq: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Signed-off-by: Srivatsa S. Bhat 
---

 kernel/irq/manage.c |7 +++
 1 file changed, 7 insertions(+)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index e49a288..b4240b9 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include "internals.h"
@@ -202,7 +203,9 @@ int irq_set_affinity(unsigned int irq, const struct cpumask 
*mask)
return -EINVAL;
 
	raw_spin_lock_irqsave(&desc->lock, flags);
+   get_online_cpus_atomic();
ret =  __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
+   put_online_cpus_atomic();
	raw_spin_unlock_irqrestore(&desc->lock, flags);
return ret;
 }
@@ -343,7 +346,9 @@ int irq_select_affinity_usr(unsigned int irq, struct 
cpumask *mask)
int ret;
 
	raw_spin_lock_irqsave(&desc->lock, flags);
+   get_online_cpus_atomic();
ret = setup_affinity(irq, desc, mask);
+   put_online_cpus_atomic();
	raw_spin_unlock_irqrestore(&desc->lock, flags);
return ret;
 }
@@ -1126,7 +1131,9 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, 
struct irqaction *new)
}
 
/* Set default affinity mask once everything is setup */
+   get_online_cpus_atomic();
setup_affinity(irq, desc, mask);
+   put_online_cpus_atomic();
 
} else if (new->flags & IRQF_TRIGGER_MASK) {
unsigned int nmsk = new->flags & IRQF_TRIGGER_MASK;



[PATCH v5 17/45] softirq: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: Frederic Weisbecker 
Signed-off-by: Srivatsa S. Bhat 
---

 kernel/softirq.c |3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index ed567ba..98c3e27 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -631,6 +631,7 @@ static void remote_softirq_receive(void *data)
 
 static int __try_remote_softirq(struct call_single_data *cp, int cpu, int 
softirq)
 {
+   get_online_cpus_atomic();
if (cpu_online(cpu)) {
cp->func = remote_softirq_receive;
cp->info = cp;
@@ -638,8 +639,10 @@ static int __try_remote_softirq(struct call_single_data 
*cp, int cpu, int softir
cp->priv = softirq;
 
__smp_call_function_single(cpu, cp, 0);
+   put_online_cpus_atomic();
return 0;
}
+   put_online_cpus_atomic();
return 1;
 }
 #else /* CONFIG_USE_GENERIC_SMP_HELPERS */



[PATCH v5 16/45] time/clocksource: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from going
offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Cc: John Stultz 
Signed-off-by: Srivatsa S. Bhat 
---

 kernel/time/clocksource.c |5 +
 1 file changed, 5 insertions(+)

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index c958338..1c8d735 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -30,6 +30,7 @@
 #include  /* for spin_unlock_irq() using preempt_count() m68k */
 #include 
 #include 
+#include 
 
 void timecounter_init(struct timecounter *tc,
  const struct cyclecounter *cc,
@@ -320,11 +321,13 @@ static void clocksource_watchdog(unsigned long data)
 * Cycle through CPUs to check if the CPUs stay synchronized
 * to each other.
 */
+   get_online_cpus_atomic();
next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
if (next_cpu >= nr_cpu_ids)
next_cpu = cpumask_first(cpu_online_mask);
watchdog_timer.expires += WATCHDOG_INTERVAL;
	add_timer_on(&watchdog_timer, next_cpu);
+   put_online_cpus_atomic();
 out:
	spin_unlock(&watchdog_lock);
 }
@@ -336,7 +339,9 @@ static inline void clocksource_start_watchdog(void)
	init_timer(&watchdog_timer);
watchdog_timer.function = clocksource_watchdog;
watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
+   get_online_cpus_atomic();
	add_timer_on(&watchdog_timer, cpumask_first(cpu_online_mask));
+   put_online_cpus_atomic();
watchdog_running = 1;
 }
 



[PATCH v5 14/45] rcu, CPU hotplug: Fix comment referring to stop_machine()

2013-01-21 Thread Srivatsa S. Bhat
Don't refer to stop_machine() in the CPU hotplug path, since we are going
to get rid of it. Also, move the comment referring to callback adoption
to the CPU_DEAD case, because that's where it happens now.

Signed-off-by: Srivatsa S. Bhat 
---

 kernel/rcutree.c |9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index e441b77..ac94474 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2827,11 +2827,6 @@ static int __cpuinit rcu_cpu_notify(struct 
notifier_block *self,
break;
case CPU_DYING:
case CPU_DYING_FROZEN:
-   /*
-* The whole machine is "stopped" except this CPU, so we can
-* touch any data without introducing corruption. We send the
-* dying CPU's callbacks to an arbitrarily chosen online CPU.
-*/
for_each_rcu_flavor(rsp)
rcu_cleanup_dying_cpu(rsp);
rcu_cleanup_after_idle(cpu);
@@ -2840,6 +2835,10 @@ static int __cpuinit rcu_cpu_notify(struct 
notifier_block *self,
case CPU_DEAD_FROZEN:
case CPU_UP_CANCELED:
case CPU_UP_CANCELED_FROZEN:
+   /*
+* We send the dead CPU's callbacks to an arbitrarily chosen
+* online CPU.
+*/
for_each_rcu_flavor(rsp)
rcu_cleanup_dead_cpu(cpu, rsp);
break;



[PATCH v5 12/45] sched/migration: Use raw_spin_lock/unlock since interrupts are already disabled

2013-01-21 Thread Srivatsa S. Bhat
We need not use the raw_spin_lock_irqsave/restore primitives because
all CPU_DYING notifiers run with interrupts disabled. So just use
raw_spin_lock/unlock.
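
Spelled out (a sketch of the reasoning; rq->lock stands in for any lock
taken inside a CPU_DYING notifier):

	/* CPU_DYING notifiers run on the dying CPU with IRQs already
	 * disabled, so this pair only saves/restores a known-off state: */
	raw_spin_lock_irqsave(&rq->lock, flags);
	raw_spin_unlock_irqrestore(&rq->lock, flags);

	/* ...and can therefore be reduced to the cheaper: */
	raw_spin_lock(&rq->lock);
	raw_spin_unlock(&rq->lock);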

Signed-off-by: Srivatsa S. Bhat 
---

 kernel/sched/core.c |   12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c1596ac..c2cec88 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4869,9 +4869,7 @@ static void calc_load_migrate(struct rq *rq)
  * Migrate all tasks from the rq, sleeping tasks will be migrated by
  * try_to_wake_up()->select_task_rq().
  *
- * Called with rq->lock held even though we'er in stop_machine() and
- * there's no concurrency possible, we hold the required locks anyway
- * because of lock validation efforts.
+ * Called with rq->lock held.
  */
 static void migrate_tasks(unsigned int dead_cpu)
 {
@@ -4883,8 +4881,8 @@ static void migrate_tasks(unsigned int dead_cpu)
 * Fudge the rq selection such that the below task selection loop
 * doesn't get stuck on the currently eligible stop task.
 *
-* We're currently inside stop_machine() and the rq is either stuck
-* in the stop_machine_cpu_stop() loop, or we're executing this code,
+* We're currently inside stop_one_cpu() and the rq is either stuck
+* in the cpu_stopper_thread(), or we're executing this code,
 * either way we should never end up calling schedule() until we're
 * done here.
 */
@@ -5153,14 +5151,14 @@ migration_call(struct notifier_block *nfb, unsigned 
long action, void *hcpu)
case CPU_DYING:
sched_ttwu_pending();
/* Update our root-domain */
-		raw_spin_lock_irqsave(&rq->lock, flags);
+		raw_spin_lock(&rq->lock); /* Interrupts already disabled */
if (rq->rd) {
BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
set_rq_offline(rq);
}
migrate_tasks(cpu);
BUG_ON(rq->nr_running != 1); /* the migration thread */
-		raw_spin_unlock_irqrestore(&rq->lock, flags);
+		raw_spin_unlock(&rq->lock);
break;
 
case CPU_DEAD:



[PATCH v5 11/45] sched/timer: Use get/put_online_cpus_atomic() to prevent CPU offline

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from going
offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Signed-off-by: Srivatsa S. Bhat 
---

 kernel/sched/core.c |   24 +---
 kernel/sched/fair.c |5 -
 kernel/timer.c  |2 ++
 3 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 257002c..c1596ac 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1117,11 +1117,11 @@ void kick_process(struct task_struct *p)
 {
int cpu;
 
-   preempt_disable();
+   get_online_cpus_atomic();
cpu = task_cpu(p);
if ((cpu != smp_processor_id()) && task_curr(p))
smp_send_reschedule(cpu);
-   preempt_enable();
+   put_online_cpus_atomic();
 }
 EXPORT_SYMBOL_GPL(kick_process);
 #endif /* CONFIG_SMP */
@@ -1129,6 +1129,10 @@ EXPORT_SYMBOL_GPL(kick_process);
 #ifdef CONFIG_SMP
 /*
  * ->cpus_allowed is protected by both rq->lock and p->pi_lock
+ *
+ *  Must be called under get/put_online_cpus_atomic() or
+ *  equivalent, to avoid CPUs from going offline from underneath
+ *  us.
  */
 static int select_fallback_rq(int cpu, struct task_struct *p)
 {
@@ -1192,6 +1196,9 @@ out:
 
 /*
  * The caller (fork, wakeup) owns p->pi_lock, ->cpus_allowed is stable.
+ *
+ * Must be called under get/put_online_cpus_atomic(), to prevent
+ * CPUs from going offline from underneath us.
  */
 static inline
 int select_task_rq(struct task_struct *p, int sd_flags, int wake_flags)
@@ -1432,6 +1439,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, 
int wake_flags)
int cpu, success = 0;
 
smp_wmb();
+   get_online_cpus_atomic();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
if (!(p->state & state))
goto out;
@@ -1472,6 +1480,7 @@ stat:
ttwu_stat(p, cpu, wake_flags);
 out:
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+   put_online_cpus_atomic();
 
return success;
 }
@@ -1692,6 +1701,7 @@ void wake_up_new_task(struct task_struct *p)
unsigned long flags;
struct rq *rq;
 
+   get_online_cpus_atomic();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
 #ifdef CONFIG_SMP
/*
@@ -1712,6 +1722,7 @@ void wake_up_new_task(struct task_struct *p)
p->sched_class->task_woken(rq, p);
 #endif
	task_rq_unlock(rq, p, &flags);
+   put_online_cpus_atomic();
 }
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
@@ -2609,6 +2620,7 @@ void sched_exec(void)
unsigned long flags;
int dest_cpu;
 
+   get_online_cpus_atomic();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
dest_cpu = p->sched_class->select_task_rq(p, SD_BALANCE_EXEC, 0);
if (dest_cpu == smp_processor_id())
@@ -2618,11 +2630,13 @@ void sched_exec(void)
struct migration_arg arg = { p, dest_cpu };
 
		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+   put_online_cpus_atomic();
		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
return;
}
 unlock:
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+   put_online_cpus_atomic();
 }
 
 #endif
@@ -4372,6 +4386,7 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
unsigned long flags;
bool yielded = 0;
 
+   get_online_cpus_atomic();
local_irq_save(flags);
rq = this_rq();
 
@@ -4399,13 +4414,14 @@ again:
 * Make p's CPU reschedule; pick_next_entity takes care of
 * fairness.
 */
-   if (preempt && rq != p_rq)
+   if (preempt && rq != p_rq && cpu_online(task_cpu(p)))
resched_task(p_rq->curr);
}
 
 out:
double_rq_unlock(rq, p_rq);
local_irq_restore(flags);
+   put_online_cpus_atomic();
 
if (yielded)
schedule();
@@ -4810,9 +4826,11 @@ static int migration_cpu_stop(void *data)
 * The original target cpu might have gone down and we might
 * be on another cpu but it doesn't matter.
 */
+   get_online_cpus_atomic();
local_irq_disable();
__migrate_task(arg->task, raw_smp_processor_id(), arg->dest_cpu);
local_irq_enable();
+   put_online_cpus_atomic();
return 0;
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5eea870..a846028 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5695,8 +5695,11 @@ void trigger_load_balance(struct rq *rq, int cpu)
likely(!on_null_domain(cpu)))
raise_softirq(SCHED_SOFTIRQ);
 #ifdef CONFIG_NO_HZ
-   if (nohz_kick_needed(rq, cpu) && likely(!on_null_domain(cpu)))
+   if (nohz_kick_needed(rq, cpu) && likely(!on_null_domain(cpu))) {
+   

[PATCH v5 10/45] smp, cpu hotplug: Fix on_each_cpu_*() to prevent CPU offline properly

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() to prevent CPUs from going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Signed-off-by: Srivatsa S. Bhat 
---

 kernel/smp.c |   25 +++--
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index f421bcc..d870bfe 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -688,12 +688,12 @@ int on_each_cpu(void (*func) (void *info), void *info, 
int wait)
unsigned long flags;
int ret = 0;
 
-   preempt_disable();
+   get_online_cpus_atomic();
ret = smp_call_function(func, info, wait);
local_irq_save(flags);
func(info);
local_irq_restore(flags);
-   preempt_enable();
+   put_online_cpus_atomic();
return ret;
 }
 EXPORT_SYMBOL(on_each_cpu);
@@ -715,7 +715,11 @@ EXPORT_SYMBOL(on_each_cpu);
 void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
void *info, bool wait)
 {
-   int cpu = get_cpu();
+   int cpu;
+
+   get_online_cpus_atomic();
+
+   cpu = smp_processor_id();
 
smp_call_function_many(mask, func, info, wait);
if (cpumask_test_cpu(cpu, mask)) {
@@ -723,7 +727,7 @@ void on_each_cpu_mask(const struct cpumask *mask, 
smp_call_func_t func,
func(info);
local_irq_enable();
}
-   put_cpu();
+   put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(on_each_cpu_mask);
 
@@ -748,8 +752,9 @@ EXPORT_SYMBOL(on_each_cpu_mask);
  * The function might sleep if the GFP flags indicates a non
  * atomic allocation is allowed.
  *
- * Preemption is disabled to protect against CPUs going offline but not online.
- * CPUs going online during the call will not be seen or sent an IPI.
+ * We use get/put_online_cpus_atomic() to prevent CPUs from going
+ * offline in-between our operation. CPUs coming online during the
+ * call will not be seen or sent an IPI.
  *
  * You must not call this function with disabled interrupts or
  * from a hardware interrupt handler or from a bottom half handler.
@@ -764,26 +769,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void 
*info),
might_sleep_if(gfp_flags & __GFP_WAIT);
 
	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
-   preempt_disable();
+   get_online_cpus_atomic();
for_each_online_cpu(cpu)
if (cond_func(cpu, info))
cpumask_set_cpu(cpu, cpus);
on_each_cpu_mask(cpus, func, info, wait);
-   preempt_enable();
+   put_online_cpus_atomic();
free_cpumask_var(cpus);
} else {
/*
 * No free cpumask, bother. No matter, we'll
 * just have to IPI them one by one.
 */
-   preempt_disable();
+   get_online_cpus_atomic();
for_each_online_cpu(cpu)
if (cond_func(cpu, info)) {
ret = smp_call_function_single(cpu, func,
info, wait);
WARN_ON_ONCE(!ret);
}
-   preempt_enable();
+   put_online_cpus_atomic();
}
 }
 EXPORT_SYMBOL(on_each_cpu_cond);



[PATCH v5 09/45] smp, cpu hotplug: Fix smp_call_function_*() to prevent CPU offline properly

2013-01-21 Thread Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() to prevent CPUs from going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline,
while invoking from atomic context.

Signed-off-by: Srivatsa S. Bhat 
---

 kernel/smp.c |   40 ++--
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 29dd40a..f421bcc 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -310,7 +310,8 @@ int smp_call_function_single(int cpu, smp_call_func_t func, 
void *info,
 * prevent preemption and reschedule on another processor,
 * as well as CPU removal
 */
-   this_cpu = get_cpu();
+   get_online_cpus_atomic();
+   this_cpu = smp_processor_id();
 
/*
 * Can deadlock when called with interrupts disabled.
@@ -342,7 +343,7 @@ int smp_call_function_single(int cpu, smp_call_func_t func, 
void *info,
}
}
 
-   put_cpu();
+   put_online_cpus_atomic();
 
return err;
 }
@@ -371,8 +372,10 @@ int smp_call_function_any(const struct cpumask *mask,
const struct cpumask *nodemask;
int ret;
 
+   get_online_cpus_atomic();
/* Try for same CPU (cheapest) */
-   cpu = get_cpu();
+   cpu = smp_processor_id();
+
if (cpumask_test_cpu(cpu, mask))
goto call;
 
@@ -388,7 +391,7 @@ int smp_call_function_any(const struct cpumask *mask,
cpu = cpumask_any_and(mask, cpu_online_mask);
 call:
ret = smp_call_function_single(cpu, func, info, wait);
-   put_cpu();
+   put_online_cpus_atomic();
return ret;
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
@@ -409,25 +412,28 @@ void __smp_call_function_single(int cpu, struct 
call_single_data *data,
unsigned int this_cpu;
unsigned long flags;
 
-   this_cpu = get_cpu();
+   get_online_cpus_atomic();
+
+   this_cpu = smp_processor_id();
+
/*
 * Can deadlock when called with interrupts disabled.
 * We allow cpu's that are not yet online though, as no one else can
 * send smp call function interrupt to this cpu and as such deadlocks
 * can't happen.
 */
-   WARN_ON_ONCE(cpu_online(smp_processor_id()) && wait && irqs_disabled()
-&& !oops_in_progress);
+   WARN_ON_ONCE(cpu_online(this_cpu) && wait && irqs_disabled()
+&& !oops_in_progress);
 
if (cpu == this_cpu) {
local_irq_save(flags);
data->func(data->info);
local_irq_restore(flags);
-   } else {
+   } else if ((unsigned)cpu < nr_cpu_ids && cpu_online(cpu)) {
csd_lock(data);
generic_exec_single(cpu, data, wait);
}
-   put_cpu();
+   put_online_cpus_atomic();
 }
 
 /**
@@ -451,6 +457,8 @@ void smp_call_function_many(const struct cpumask *mask,
unsigned long flags;
int refs, cpu, next_cpu, this_cpu = smp_processor_id();
 
+   get_online_cpus_atomic();
+
/*
 * Can deadlock when called with interrupts disabled.
 * We allow cpu's that are not yet online though, as no one else can
@@ -467,17 +475,18 @@ void smp_call_function_many(const struct cpumask *mask,
 
/* No online cpus?  We're done. */
if (cpu >= nr_cpu_ids)
-   return;
+   goto out_unlock;
 
/* Do we have another CPU which isn't us? */
next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
if (next_cpu == this_cpu)
-   next_cpu = cpumask_next_and(next_cpu, mask, cpu_online_mask);
+   next_cpu = cpumask_next_and(next_cpu, mask,
+   cpu_online_mask);
 
/* Fastpath: do that cpu by itself. */
if (next_cpu >= nr_cpu_ids) {
smp_call_function_single(cpu, func, info, wait);
-   return;
+   goto out_unlock;
}
 
data = &__get_cpu_var(cfd_data);
@@ -523,7 +532,7 @@ void smp_call_function_many(const struct cpumask *mask,
/* Some callers race with other cpus changing the passed mask */
if (unlikely(!refs)) {
csd_unlock(&data->csd);
-   return;
+   goto out_unlock;
}
 
raw_spin_lock_irqsave(&call_function.lock, flags);
@@ -554,6 +563,9 @@ void smp_call_function_many(const struct cpumask *mask,
/* Optionally wait for the CPUs to complete */
if (wait)
csd_lock_wait(&data->csd);
+
+out_unlock:
+   put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(smp_call_function_many);
 
@@ -574,9 +586,9 @@ EXPORT_SYMBOL(smp_call_function_many);
  */
 int smp_call_function(smp_call_func_t func, void *info, int wait)
 {
-   preempt_disable();
+   get_online_cpus_atomic();
smp_call_function_many(cpu_online_mask, func, info, wait);
-   preempt_enable();
+   put_online_cpus_atomic();

[PATCH v5 06/45] percpu_rwlock: Allow writers to be readers, and add lockdep annotations

2013-01-21 Thread Srivatsa S. Bhat
CPU hotplug (which will be the first user of per-CPU rwlocks) has a special
requirement with respect to locking: the writer, after acquiring the per-CPU
rwlock for write, must be allowed to take the same lock for read, without
deadlocking and without getting complaints from lockdep. In comparison, this
is similar to what get_online_cpus()/put_online_cpus() does today: it allows
a hotplug writer (who holds the cpu_hotplug.lock mutex) to invoke it without
locking issues, because it silently returns if the caller is the hotplug
writer itself.

This can be easily achieved with per-CPU rwlocks as well (even without a
"is this a writer?" check) by incrementing the per-CPU refcount of the writer
immediately after taking the global rwlock for write, and then decrementing
the per-CPU refcount before releasing the global rwlock.
This ensures that any reader that comes along on that CPU while the writer is
active (on that same CPU), notices the non-zero value of the nested counter
and assumes that it is a nested read-side critical section and proceeds by
just incrementing the refcount. Thus we prevent the reader from taking the
global rwlock for read, which prevents the writer from deadlocking itself.
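
Schematically, a reader that runs on the writer's own CPU now sees the
following (a rough sketch of the logic, not the exact code):

	if (reader_nested_percpu(pcpu_rwlock)) {
		/*
		 * Non-zero refcnt, here bumped by the writer itself, so
		 * take the nested fast path instead of read_lock()ing the
		 * global rwlock that the writer already holds for write.
		 * Hence no self-deadlock.
		 */
		this_cpu_inc(*pcpu_rwlock->reader_refcnt);
	}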

Add that support and teach lockdep about this special locking scheme so
that it knows that this sort of usage is valid. Also add the required lockdep
annotations to enable it to detect common locking problems with per-CPU
rwlocks.

Cc: David Howells 
Signed-off-by: Srivatsa S. Bhat 
---

 lib/percpu-rwlock.c |   21 +
 1 file changed, 21 insertions(+)

diff --git a/lib/percpu-rwlock.c b/lib/percpu-rwlock.c
index a8d177a..054a50a 100644
--- a/lib/percpu-rwlock.c
+++ b/lib/percpu-rwlock.c
@@ -84,6 +84,10 @@ void percpu_read_lock_irqsafe(struct percpu_rwlock 
*pcpu_rwlock)
 
if (likely(!writer_active(pcpu_rwlock))) {
this_cpu_inc(*pcpu_rwlock->reader_refcnt);
+
+   /* Pretend that we take global_rwlock for lockdep */
+   rwlock_acquire_read(&pcpu_rwlock->global_rwlock.dep_map,
+   0, 0, _RET_IP_);
} else {
/* Writer is active, so switch to global rwlock. */
 
@@ -108,6 +112,12 @@ void percpu_read_lock_irqsafe(struct percpu_rwlock 
*pcpu_rwlock)
if (!writer_active(pcpu_rwlock)) {
this_cpu_inc(*pcpu_rwlock->reader_refcnt);
read_unlock(&pcpu_rwlock->global_rwlock);
+
+   /*
+* Pretend that we take global_rwlock for lockdep
+*/
+   rwlock_acquire_read(&pcpu_rwlock->global_rwlock.dep_map,
+   0, 0, _RET_IP_);
}
}
}
@@ -128,6 +138,14 @@ void percpu_read_unlock_irqsafe(struct percpu_rwlock 
*pcpu_rwlock)
if (reader_nested_percpu(pcpu_rwlock)) {
this_cpu_dec(*pcpu_rwlock->reader_refcnt);
smp_wmb(); /* Paired with smp_rmb() in sync_reader() */
+
+   /*
+* If this is the last decrement, then it is time to pretend
+* to lockdep that we are releasing the read lock.
+*/
+   if (!reader_nested_percpu(pcpu_rwlock))
+   rwlock_release(&pcpu_rwlock->global_rwlock.dep_map,
+  1, _RET_IP_);
} else {
read_unlock(&pcpu_rwlock->global_rwlock);
}
@@ -205,11 +223,14 @@ void percpu_write_lock_irqsave(struct percpu_rwlock 
*pcpu_rwlock,
announce_writer_active(pcpu_rwlock);
sync_all_readers(pcpu_rwlock);
write_lock_irqsave(&pcpu_rwlock->global_rwlock, *flags);
+   this_cpu_inc(*pcpu_rwlock->reader_refcnt);
 }
 
 void percpu_write_unlock_irqrestore(struct percpu_rwlock *pcpu_rwlock,
 unsigned long *flags)
 {
+   this_cpu_dec(*pcpu_rwlock->reader_refcnt);
+
/*
 * Inform all readers that we are done, so that they can switch back
 * to their per-cpu refcounts. (We don't need to wait for them to



[PATCH v5 05/45] percpu_rwlock: Make percpu-rwlocks IRQ-safe, optimally

2013-01-21 Thread Srivatsa S. Bhat
If interrupt handlers can also be readers, then one of the ways to make
per-CPU rwlocks safe, is to disable interrupts at the reader side before
trying to acquire the per-CPU rwlock and keep it disabled throughout the
duration of the read-side critical section.

The goal is to avoid cases such as:

  1. writer is active and it holds the global rwlock for write

  2. a regular reader comes in and marks itself as present (by incrementing
 its per-CPU refcount) before checking whether the writer is active.

  3. an interrupt hits the reader;
 [If it had not hit, the reader would have noticed that the writer is
  active and would have decremented its refcount and would have tried
  to acquire the global rwlock for read].
 Since the interrupt handler also happens to be a reader, it notices
 the non-zero refcount (which was due to the reader who got interrupted)
 and thinks that this is a nested read-side critical section and
 proceeds to take the fastpath, which is wrong. The interrupt handler
 should have noticed that the writer is active and taken the rwlock
 for read.

So, disabling interrupts can help avoid this problem (at the cost of keeping
the interrupts disabled for quite long).

But Oleg had a brilliant idea by which we can do much better than that:
we can manage with disabling interrupts _just_ during the updates (writes to
per-CPU refcounts) to safe-guard against races with interrupt handlers.
Beyond that, we can keep the interrupts enabled and still be safe w.r.t
interrupt handlers that can act as readers.

Basically the idea is that we differentiate between the *part* of the
per-CPU refcount that we use for reference counting vs the part that we use
merely to make the writer wait for us to switch over to the right
synchronization scheme.

The scheme involves splitting the per-CPU refcounts into 2 parts:
eg: the lower 16 bits are used to track the nesting depth of the reader
(a "nested-counter"), and the remaining (upper) bits are used to merely mark
the presence of the reader.

As long as the overall reader_refcnt is non-zero, the writer waits for the
reader (assuming that the reader is still actively using per-CPU refcounts for
synchronization).

The reader first sets one of the higher bits to mark its presence, and then
uses the lower 16 bits to manage the nesting depth. So, an interrupt handler
coming in as illustrated above will be able to distinguish between "this is
a nested read-side critical section" vs "we have merely marked our presence
to make the writer wait for us to switch" by looking at the same refcount.
Thus, it makes it unnecessary to keep interrupts disabled throughout the
read-side critical section, despite having the possibility of interrupt
handlers being readers themselves.
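
With that split, the reader-side bookkeeping reduces to something like
this (sketch, matching the macros introduced below):

	#define READER_PRESENT		(1UL << 16)
	#define READER_REFCNT_MASK	(READER_PRESENT - 1)

	/* mark presence first (the update itself is done with IRQs off) */
	this_cpu_add(*pcpu_rwlock->reader_refcnt, READER_PRESENT);

	/*
	 * Only the low 16 bits indicate a genuinely nested read-side
	 * critical section; READER_PRESENT alone merely makes the writer
	 * wait for the switch-over.
	 */
	if (__this_cpu_read(*pcpu_rwlock->reader_refcnt) & READER_REFCNT_MASK)
		/* nested: safe to take the per-CPU fast path */;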


Implement this logic and rename the locking functions appropriately, to
reflect what they do.

Based-on-idea-by: Oleg Nesterov 
Cc: David Howells 
Signed-off-by: Srivatsa S. Bhat 
---

 include/linux/percpu-rwlock.h |   15 ++-
 lib/percpu-rwlock.c   |   41 +++--
 2 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/include/linux/percpu-rwlock.h b/include/linux/percpu-rwlock.h
index 6819bb8..856ba6b 100644
--- a/include/linux/percpu-rwlock.h
+++ b/include/linux/percpu-rwlock.h
@@ -34,11 +34,13 @@ struct percpu_rwlock {
rwlock_tglobal_rwlock;
 };
 
-extern void percpu_read_lock(struct percpu_rwlock *);
-extern void percpu_read_unlock(struct percpu_rwlock *);
+extern void percpu_read_lock_irqsafe(struct percpu_rwlock *);
+extern void percpu_read_unlock_irqsafe(struct percpu_rwlock *);
 
-extern void percpu_write_lock(struct percpu_rwlock *);
-extern void percpu_write_unlock(struct percpu_rwlock *);
+extern void percpu_write_lock_irqsave(struct percpu_rwlock *,
+ unsigned long *flags);
+extern void percpu_write_unlock_irqrestore(struct percpu_rwlock *,
+  unsigned long *flags);
 
 extern int __percpu_init_rwlock(struct percpu_rwlock *,
const char *, struct lock_class_key *);
@@ -68,11 +70,14 @@ extern void percpu_free_rwlock(struct percpu_rwlock *);
__percpu_init_rwlock(pcpu_rwlock, #pcpu_rwlock, &rwlock_key);   \
 })
 
+#define READER_PRESENT (1UL << 16)
+#define READER_REFCNT_MASK (READER_PRESENT - 1)
+
 #define reader_uses_percpu_refcnt(pcpu_rwlock, cpu)\
(ACCESS_ONCE(per_cpu(*((pcpu_rwlock)->reader_refcnt), cpu)))
 
 #define reader_nested_percpu(pcpu_rwlock)  \
-   (__this_cpu_read(*((pcpu_rwlock)->reader_refcnt)) > 1)
+   (__this_cpu_read(*((pcpu_rwlock)->reader_refcnt)) & READER_REFCNT_MASK)
 
 #define writer_active(pcpu_rwlock) \
(__this_cpu_read(*((pcpu_rwlock)->writer_signal)))
diff --git 

[PATCH v5 03/45] percpu_rwlock: Provide a way to define and init percpu-rwlocks at compile time

2013-01-21 Thread Srivatsa S. Bhat
Add the support for defining and initializing percpu-rwlocks at compile time
for those users who would like to use percpu-rwlocks really early in the boot
process (even before dynamic per-CPU allocations can begin).

Cc: David Howells 
Signed-off-by: Srivatsa S. Bhat 
---

 include/linux/percpu-rwlock.h |   18 ++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/percpu-rwlock.h b/include/linux/percpu-rwlock.h
index cd5eab5..8dec8fe 100644
--- a/include/linux/percpu-rwlock.h
+++ b/include/linux/percpu-rwlock.h
@@ -45,6 +45,24 @@ extern int __percpu_init_rwlock(struct percpu_rwlock *,
 
 extern void percpu_free_rwlock(struct percpu_rwlock *);
 
+
+#define __PERCPU_RWLOCK_INIT(name) \
+   {   \
+   .reader_refcnt = &name##_reader_refcnt, \
+   .writer_signal = &name##_writer_signal, \
+   .global_rwlock = __RW_LOCK_UNLOCKED(name.global_rwlock) \
+   }
+
+#define DEFINE_PERCPU_RWLOCK(name) \
+   static DEFINE_PER_CPU(unsigned long, name##_reader_refcnt); \
+   static DEFINE_PER_CPU(bool, name##_writer_signal);  \
+   struct percpu_rwlock (name) = __PERCPU_RWLOCK_INIT(name);
+
+#define DEFINE_STATIC_PERCPU_RWLOCK(name)  \
+   static DEFINE_PER_CPU(unsigned long, name##_reader_refcnt); \
+   static DEFINE_PER_CPU(bool, name##_writer_signal);  \
+   static struct percpu_rwlock(name) = __PERCPU_RWLOCK_INIT(name);
+
 #define percpu_init_rwlock(pcpu_rwlock)
\
 ({ static struct lock_class_key rwlock_key;\
__percpu_init_rwlock(pcpu_rwlock, #pcpu_rwlock, &rwlock_key);   \



[PATCH v5 02/45] percpu_rwlock: Introduce per-CPU variables for the reader and the writer

2013-01-21 Thread Srivatsa S. Bhat
Per-CPU rwlocks ought to give better performance than global rwlocks.
That is where the "per-CPU" component comes in. So introduce the per-CPU
variables needed at the reader and the writer sides, and add support for
dynamically initializing per-CPU rwlocks.
These per-CPU variables will be used subsequently to implement the core
algorithm behind per-CPU rwlocks.

Cc: David Howells 
Signed-off-by: Srivatsa S. Bhat 
---

 include/linux/percpu-rwlock.h |4 
 lib/percpu-rwlock.c   |   21 +
 2 files changed, 25 insertions(+)

diff --git a/include/linux/percpu-rwlock.h b/include/linux/percpu-rwlock.h
index 45620d0..cd5eab5 100644
--- a/include/linux/percpu-rwlock.h
+++ b/include/linux/percpu-rwlock.h
@@ -29,6 +29,8 @@
 #include 
 
 struct percpu_rwlock {
+   unsigned long __percpu  *reader_refcnt;
+   bool __percpu   *writer_signal;
rwlock_tglobal_rwlock;
 };
 
@@ -41,6 +43,8 @@ extern void percpu_write_unlock(struct percpu_rwlock *);
 extern int __percpu_init_rwlock(struct percpu_rwlock *,
const char *, struct lock_class_key *);
 
+extern void percpu_free_rwlock(struct percpu_rwlock *);
+
 #define percpu_init_rwlock(pcpu_rwlock)
\
 ({ static struct lock_class_key rwlock_key;\
__percpu_init_rwlock(pcpu_rwlock, #pcpu_rwlock, &rwlock_key);   \
diff --git a/lib/percpu-rwlock.c b/lib/percpu-rwlock.c
index af0c714..80dad93 100644
--- a/lib/percpu-rwlock.c
+++ b/lib/percpu-rwlock.c
@@ -31,6 +31,17 @@
 int __percpu_init_rwlock(struct percpu_rwlock *pcpu_rwlock,
 const char *name, struct lock_class_key *rwlock_key)
 {
+   pcpu_rwlock->reader_refcnt = alloc_percpu(unsigned long);
+   if (unlikely(!pcpu_rwlock->reader_refcnt))
+   return -ENOMEM;
+
+   pcpu_rwlock->writer_signal = alloc_percpu(bool);
+   if (unlikely(!pcpu_rwlock->writer_signal)) {
+   free_percpu(pcpu_rwlock->reader_refcnt);
+   pcpu_rwlock->reader_refcnt = NULL;
+   return -ENOMEM;
+   }
+
/* ->global_rwlock represents the whole percpu_rwlock for lockdep */
 #ifdef CONFIG_DEBUG_SPINLOCK
__rwlock_init(&pcpu_rwlock->global_rwlock, name, rwlock_key);
@@ -41,6 +52,16 @@ int __percpu_init_rwlock(struct percpu_rwlock *pcpu_rwlock,
return 0;
 }
 
+void percpu_free_rwlock(struct percpu_rwlock *pcpu_rwlock)
+{
+   free_percpu(pcpu_rwlock->reader_refcnt);
+   free_percpu(pcpu_rwlock->writer_signal);
+
+   /* Catch use-after-free bugs */
+   pcpu_rwlock->reader_refcnt = NULL;
+   pcpu_rwlock->writer_signal = NULL;
+}
+
 void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
 {
read_lock(&pcpu_rwlock->global_rwlock);



[PATCH v5 01/45] percpu_rwlock: Introduce the global reader-writer lock backend

2013-01-21 Thread Srivatsa S. Bhat
A straight-forward (and obvious) algorithm to implement Per-CPU Reader-Writer
locks can also lead to too many deadlock possibilities which can make it very
hard/impossible to use. This is explained in the example below, which helps
justify the need for a different algorithm to implement flexible Per-CPU
Reader-Writer locks.

We can use global rwlocks as shown below safely, without fear of deadlocks:

Readers:

 CPU 0CPU 1
 --   --

1.spin_lock(_lock); read_lock(_rwlock);


2.read_lock(_rwlock);   spin_lock(_lock);


Writer:

 CPU 2:
 --

   write_lock(_rwlock);


We can observe that there is no possibility of deadlocks or circular locking
dependencies here. It's perfectly safe.

Now consider a blind/straight-forward conversion of global rwlocks to per-CPU
rwlocks like this:

The reader locks its own per-CPU rwlock for read, and proceeds.

Something like: read_lock(per-cpu rwlock of this cpu);

The writer acquires all per-CPU rwlocks for write and only then proceeds.

Something like:

  for_each_online_cpu(cpu)
write_lock(per-cpu rwlock of 'cpu');


Now let's say that for performance reasons, the above scenario (which was
perfectly safe when using global rwlocks) was converted to use per-CPU rwlocks.


 CPU 0CPU 1
 --   --

1.spin_lock(_lock); read_lock(my_rwlock of CPU 1);


2.read_lock(my_rwlock of CPU 0);   spin_lock(_lock);


Writer:

 CPU 2:
 --

  for_each_online_cpu(cpu)
write_lock(my_rwlock of 'cpu');


Consider what happens if the writer begins his operation in between steps 1
and 2 at the reader side. It becomes evident that we end up in a (previously
non-existent) deadlock due to a circular locking dependency between the 3
entities, like this:


        (holds                        Waiting for
         random_lock)  CPU 0 -------------------> CPU 2  (holds my_rwlock of
                         ^                          |      CPU 0 for write)
                         |                          |
                 Waiting |                          | Waiting
                   for   |                          |   for
                         |                          V
                          --------- CPU 1 <---------

                       (holds my_rwlock of
                        CPU 1 for read)



So obviously this "straight-forward" way of implementing percpu rwlocks is
deadlock-prone. One simple measure for (or characteristic of) safe percpu
rwlock should be that if a user replaces global rwlocks with per-CPU rwlocks
(for performance reasons), he shouldn't suddenly end up in numerous deadlock
possibilities which never existed before. The replacement should continue to
remain safe, and perhaps improve the performance.

Observing the robustness of global rwlocks in providing a fair amount of
deadlock safety, we implement per-CPU rwlocks as nothing but global rwlocks,
as a first step.
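
Concretely, the first step is just a pass-through to the global rwlock
(sketch; the following patches carry the actual code):

	void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
	{
		read_lock(&pcpu_rwlock->global_rwlock);
	}

	void percpu_write_lock(struct percpu_rwlock *pcpu_rwlock)
	{
		write_lock(&pcpu_rwlock->global_rwlock);
	}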


Cc: David Howells 
Signed-off-by: Srivatsa S. Bhat 
---

 include/linux/percpu-rwlock.h |   49 
 lib/Kconfig   |3 ++
 lib/Makefile  |1 +
 lib/percpu-rwlock.c   |   63 +
 4 files changed, 116 insertions(+)
 create mode 100644 include/linux/percpu-rwlock.h
 create mode 100644 lib/percpu-rwlock.c

diff --git a/include/linux/percpu-rwlock.h b/include/linux/percpu-rwlock.h
new file mode 100644
index 000..45620d0
--- /dev/null
+++ b/include/linux/percpu-rwlock.h
@@ -0,0 +1,49 @@
+/*
+ * Flexible Per-CPU Reader-Writer Locks
+ * (with relaxed locking rules and reduced deadlock-possibilities)
+ *
+ * Copyright (C) IBM Corporation, 2012-2013
+ * Author: Srivatsa S. Bhat 
+ *
+ * With lots of invaluable suggestions from:
+ *Oleg Nesterov 
+ *Tejun Heo 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _LINUX_PERCPU_RWLOCK_H
+#define _LINUX_PERCPU_RWLOCK_H
+
+#include 
+#include 
+#include 
+
+struct percpu_rwlock {
+   rwlock_tglobal_rwlock;
+};
+
+extern void percpu_read_lock(struct percpu_rwlock *);
+extern void percpu_read_unlock(struct percpu_rwlock *);
+
+extern void percpu_write_lock(struct percpu_rwlock *);
+extern void percpu_write_unlock(struct percpu_rwlock *);
+
+extern int __percpu_init_rwlock(struct percpu_rwlock *,
+   const char *, struct lock_class_key *);
+

[PATCH v5 00/45] CPU hotplug: stop_machine()-free CPU hotplug

2013-01-21 Thread Srivatsa S. Bhat
Hi,

This patchset removes CPU hotplug's dependence on stop_machine() from the CPU
offline path and provides an alternative (set of APIs) to preempt_disable() to
prevent CPUs from going offline, which can be invoked from atomic context.
The motivation behind the removal of stop_machine() is to avoid its ill-effects
and thus improve the design of CPU hotplug. (More description regarding this
is available in the patches).

All the users of preempt_disable()/local_irq_disable() who used them to
prevent CPU offline have been converted to the new primitives introduced in the
patchset. Also, the CPU_DYING notifiers have been audited to check whether
they can cope with the removal of stop_machine() or whether they need to
use new locks for synchronization (all CPU_DYING notifiers looked OK, without
the need for any new locks).

Applies on v3.8-rc4. It currently has some locking issues with cpu idle (on
which even lockdep didn't provide any insight unfortunately). So for now, it
works with CONFIG_CPU_IDLE=n.

Overview of the patches:
---

Patches 1 to 6 introduce a generic, flexible Per-CPU Reader-Writer Locking
scheme.

Patch 7 uses this synchronization mechanism to build the
get/put_online_cpus_atomic() APIs which can be used from atomic context, to
prevent CPUs from going offline.

Patch 8 is a cleanup; it converts preprocessor macros to static inline
functions.

Patches 9 to 42 convert various call-sites to use the new APIs.

Patch 43 is the one which actually removes stop_machine() from the CPU
offline path.

Patch 44 decouples stop_machine() and CPU hotplug from Kconfig.

Patch 45 updates the documentation to reflect the new APIs.


Changes in v5:
--
  Exposed a new generic locking scheme: Flexible Per-CPU Reader-Writer locks,
  based on the synchronization schemes already discussed in the previous
  versions, and used it in CPU hotplug, to implement the new APIs.

  Audited the CPU_DYING notifiers in the kernel source tree and replaced
  usages of preempt_disable() with the new get/put_online_cpus_atomic() APIs
  where necessary.


Changes in v4:
--
  The synchronization scheme has been simplified quite a bit, which makes it
  look a lot less complex than before. Some highlights:

* Implicit ACKs:

  The earlier design required the readers to explicitly ACK the writer's
  signal. The new design uses implicit ACKs instead. The reader switching
  over to rwlock implicitly tells the writer to stop waiting for that reader.

* No atomic operations:

  Since we got rid of explicit ACKs, we no longer have the need for a reader
  and a writer to update the same counter. So we can get rid of atomic ops
  too.

Changes in v3:
--
* Dropped the _light() and _full() variants of the APIs. Provided a single
  interface: get/put_online_cpus_atomic().

* Completely redesigned the synchronization mechanism again, to make it
  fast and scalable at the reader-side in the fast-path (when no hotplug
  writers are active). This new scheme also ensures that there is no
  possibility of deadlocks due to circular locking dependency.
  In summary, this provides the scalability and speed of per-cpu rwlocks
  (without actually using them), while avoiding the downside (deadlock
  possibilities) which is inherent in any per-cpu locking scheme that is
  meant to compete with preempt_disable()/enable() in terms of flexibility.

  The problem with using per-cpu locking to replace preempt_disable()/enable
  was explained here:
  https://lkml.org/lkml/2012/12/6/290

  Basically we use per-cpu counters (for scalability) when no writers are
  active, and then switch to global rwlocks (for lock-safety) when a writer
  becomes active. It is a slightly complex scheme, but it is based on
  standard principles of distributed algorithms.

Changes in v2:
-
* Completely redesigned the synchronization scheme to avoid using any extra
  cpumasks.

* Provided APIs for 2 types of atomic hotplug readers: "light" (for
  light-weight) and "full". We wish to have more "light" readers than
  the "full" ones, to avoid indirectly inducing the "stop_machine effect"
  without even actually using stop_machine().

  And the patches show that it _is_ generally true: 5 patches deal with
  "light" readers, whereas only 1 patch deals with a "full" reader.

  Also, the "light" readers happen to be in very hot paths. So it makes a
  lot of sense to have such a distinction and a corresponding light-weight
  API.

Links to previous versions:
v4: https://lkml.org/lkml/2012/12/11/209
v3: https://lkml.org/lkml/2012/12/7/287
v2: https://lkml.org/lkml/2012/12/5/322
v1: https://lkml.org/lkml/2012/12/4/88

--

Paul E. McKenney (1):
  cpu: No more __stop_machine() in _cpu_down()

Srivatsa S. Bhat (44):
  percpu_rwlock: Introduce the global reader-writer lock backend
  percpu_rwlock: Introduce per-CPU variables for the reader and the writer
  percpu_rwlock: Provide a way to define and init percpu-rwlocks at compile time

Re: Can jiffies freeze?

2013-01-21 Thread anish singh
On Tue, Jan 22, 2013 at 11:21 AM, sandeep kumar
 wrote:
> Hi all
> As far as I know, the jiffies counter is incremented HZ times/second, and it
> is used to measure time lapses in the kernel code.
>
> I'm seeing a case where the time actually spent in some module, measured
> using jiffies, is zero, but in the UART logs I am seeing a 2 sec time
> difference. I don't know how to interpret this. In the case I am seeing,
Please post the code here regarding how you found out it is zero.
> hrtimers are not enabled yet, so the only thing I can rely on is jiffies.
>
> My question here is,
> Is it possible that the measured time lapse shown is "0" (jiffies count is
> the same before and after), but actually some time was spent? (say some 2 sec)
>
> In other words, can jiffies freeze for some time?
Is your watchdog enabled? If it is, then you will see a panic happening,
i.e. a soft lockup.
>
> Please clarify...
>
>
> --
> With regards,
> Sandeep Kumar Anantapalli,
>
> ___
> Kernelnewbies mailing list
> kernelnewb...@kernelnewbies.org
> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>
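
To answer the question directly: yes, jiffies can appear to freeze.
jiffies is only advanced by the timer tick, so if interrupts stay
disabled on the CPU that updates it (or the tick is not running yet,
e.g. early in boot), jiffies does not advance even though wall-clock
time passes, which would produce exactly a "0 jiffies but 2 sec on the
UART" observation. A minimal measurement looks like this (sketch):

	unsigned long t0 = jiffies;

	do_the_work();

	/*
	 * If the tick never ran in between (IRQs off, or tick not yet
	 * set up), jiffies - t0 is 0 here even if seconds passed.
	 */
	pr_info("elapsed: %u ms\n", jiffies_to_msecs(jiffies - t0));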


Re: [PATCH] lib: vsprintf: Add %pa format specifier for phys_addr_t types

2013-01-21 Thread Andy Shevchenko
On Mon, 2013-01-21 at 21:47 -0800, Stepan Moskovchenko wrote: 
> Add the %pa format specifier for printing a phys_addr_t
> type, since the physical address size on some platforms
> can vary based on build options, regardless of the native
> integer type.
> 
> Signed-off-by: Stepan Moskovchenko 
> ---
>  Documentation/printk-formats.txt |   13 ++---
>  lib/vsprintf.c   |7 +++
>  2 files changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/printk-formats.txt 
> b/Documentation/printk-formats.txt
> index 8ffb274..dbc977b 100644
> --- a/Documentation/printk-formats.txt
> +++ b/Documentation/printk-formats.txt
> @@ -53,6 +53,13 @@ Struct Resources:
>   For printing struct resources. The 'R' and 'r' specifiers result in a
>   printed resource with ('R') or without ('r') a decoded flags member.
> 
> +Physical addresses:
> +
> + %pa 0x01234567 or 0x0123456789abcdef
> +
> + For printing a phys_addr_t type, which can vary based on build options,
> + regardless of the width of the CPU data path. Passed by reference.
> +
>  Raw buffer as a hex string:
>   %*ph00 01 02  ...  3f
>   %*phC   00:01:02: ... :3f
> @@ -150,9 +157,9 @@ s64 SHOULD be printed with %lld/%llx, (long long):
>   printk("%lld", (long long)s64_var);
> 
>  If  is dependent on a config option for its size (e.g., sector_t,
> -blkcnt_t, phys_addr_t, resource_size_t) or is architecture-dependent
> -for its size (e.g., tcflag_t), use a format specifier of its largest
> -possible type and explicitly cast to it.  Example:
> +blkcnt_t, resource_size_t) or is architecture-dependent for its size (e.g.,

resource_size_t is a typedef of phys_addr_t.

Probably you should mention that your change relates to phys_addr_t
*and* its derivatives.

> +tcflag_t), use a format specifier of its largest possible type and explicitly
> +cast to it.  Example:
> 
>   printk("test: sector number/total blocks: %llu/%llu\n",
>   (unsigned long long)sector, (unsigned long long)blockcount);
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index 39c99fe..9b02a71 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -1022,6 +1022,7 @@ int kptr_restrict __read_mostly;
>   *  N no separator
>   *The maximum supported length is 64 bytes of the input. Consider
>   *to use print_hex_dump() for the larger input.
> + * - 'a' For a phys_addr_t type (passed by reference)
>   *
>   * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
>   * function pointers are really function descriptors, which contain a
> @@ -1112,6 +1113,12 @@ char *pointer(const char *fmt, char *buf, char *end, 
> void *ptr,
>   return netdev_feature_string(buf, end, ptr, spec);
>   }
>   break;
> + case 'a':
> + spec.flags |= SPECIAL | SMALL | ZEROPAD;
> + spec.field_width = sizeof(phys_addr_t) * 2;
> + spec.base = 16;
> + return number(buf, end,
> +   (unsigned long long) *((phys_addr_t *)ptr), spec);
>   }
>   spec.flags |= SMALL;
>   if (spec.field_width == -1) {
> --
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> hosted by The Linux Foundation
> 

-- 
Andy Shevchenko 
Intel Finland Oy


Re: [PATCH v2 1/2] ARM: shmobile: sh73a0: Use generic irqchip_init()

2013-01-21 Thread Olof Johansson
On Mon, Jan 21, 2013 at 08:03:01AM +0100, Thierry Reding wrote:
> On Mon, Jan 21, 2013 at 09:54:39AM +0900, Simon Horman wrote:
> > On Fri, Jan 18, 2013 at 08:16:12AM +0100, Thierry Reding wrote:
> > > The asm/hardware/gic.h header does no longer exist and the corresponding
> > > functionality was moved to linux/irqchip.h and linux/irqchip/arm-gic.h
> > > respectively. gic_handle_irq() and of_irq_init() are no longer available
> > > either and have been replaced by irqchip_init().
> > 
> > asm/hardware/gic.h Seems to still exist in Linus's tree.
> > Could you let me know which tree of which branch I should depend on
> > in order to apply this change?
> 
> I found this when doing an automated build over all ARM defconfigs on
> linux-next.
> 
> Commit 520f7bd73354f003a9a59937b28e4903d985c420 "irqchip: Move ARM gic.h
> to include/linux/irqchip/arm-gic.h" moved the file and was merged
> through Olof Johansson's next/cleanup and for-next branches.
> 
> Adding Olof on Cc since I'm not quite sure myself about how this is
> handled.

The way to handle this is to base the branch you are adding new shmobile code
in, on top of the cleanup branches that change the underlying infrastructure.
This is why we merge it early during the release, so that new code for various
platforms can be based on it to avoid a bunch of conflicts in the end.

In this case, you might need to base your branch onto a merge of both
the irqchip/gic-vic-move and timer/cleanup branches from arm-soc.


-Olof



Re: [PATCH 2/2] dw_dmac: return proper residue value

2013-01-21 Thread Andy Shevchenko
On Mon, 2013-01-21 at 06:22 -0800, Vinod Koul wrote: 
> On Mon, Jan 21, 2013 at 11:45:51AM +0200, Andy Shevchenko wrote:
> > > > +   return 0;
> > > hmmm, why not use BLOCK_TS value. That way you dont need to look at 
> > > direction
> > > and along with burst can easily calculate residue...
> > 
> > Do you mean to read CTL hi/lo and do
> > 
> > desc->len - ctlhi.block_ts * ctllo.src_tr_width?
> > 
> Yes
> > I think it could be not precise when memory-to-peripheral transfer is
> > going on. In that case you probably will have src_tr_width like 32 bits,
> > meanwhile peripheral may receive only byte stream.
> Nope that is not the case.
> SAR/DAR is always incremented in src/dstn_tr_width granularity. For example if
> you are using MEM to DMA, then SAR will always increment in case of x86 in 
> 4byte
> granularity as we will read bursts not singles.
> 
> Also if check the spec, it says "Once the transfer starts, the read-back 
> value is the
> total number of data items already read from the source peripheral, 
> regardless of
> what is the flow controller"
> 
> So basically you get what is read from buffer in case of MEM->PER and get what
> is read from FIFO in case of PER->MEM which IMO gives you better or equal 
> results
> than your calulation.

I will try this. Indeed, I don't like the usage of direction either, and
your solution seems much cleaner in that sense.
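
For the archive, the suggested calculation would look roughly like this
(a sketch only; the SRC_TR_WIDTH extraction is illustrative, the real
masks live in dw_dmac_regs.h):

	u32 ctlhi = channel_readl(dwc, CTL_HI);
	u32 ctllo = channel_readl(dwc, CTL_LO);
	u32 block_ts = ctlhi & DWC_CTLH_BLOCK_TS_MASK;	/* items already read */
	u32 src_width = (ctllo >> 4) & 0x7;		/* log2(bytes per item) */

	residue = desc->len - (block_ts << src_width);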

-- 
Andy Shevchenko 
Intel Finland Oy


Re: [PATCH] dts: vt8500: Add initial dts support for WM8850

2013-01-21 Thread Olof Johansson
On Sat, Jan 19, 2013 at 07:44:28PM +1300, Tony Prisk wrote:
> This patch adds a soc dtsi for the Wondermedia WM8850.
> 
> A board dts file is also included for the W70v2 tablet, with support
> for all the drivers currently in mainline.
> 
> Signed-off-by: Tony Prisk 
> ---
> Hi Olof,
> 
> Sorry this is a bit late.

For 3.9? No worries, not late yet.

I've applied this to the same branch as the other wm8x50 patches. I also fixed
up three occurrences in the dtsi that git am complained about.


-Olof


[patch] ntb: off by one sanity checks

2013-01-21 Thread Dan Carpenter
These tests are off by one.  If "mw" is equal to NTB_NUM_MW then we
would go beyond the end of the ndev->mw[] array.

Signed-off-by: Dan Carpenter 

diff --git a/drivers/ntb/ntb_hw.c b/drivers/ntb/ntb_hw.c
index 4c71b17..5bf54f3 100644
--- a/drivers/ntb/ntb_hw.c
+++ b/drivers/ntb/ntb_hw.c
@@ -359,7 +359,7 @@ int ntb_read_remote_spad(struct ntb_device *ndev, unsigned 
int idx, u32 *val)
  */
 void *ntb_get_mw_vbase(struct ntb_device *ndev, unsigned int mw)
 {
-   if (mw > NTB_NUM_MW)
+   if (mw >= NTB_NUM_MW)
return NULL;
 
return ndev->mw[mw].vbase;
@@ -376,7 +376,7 @@ void *ntb_get_mw_vbase(struct ntb_device *ndev, unsigned 
int mw)
  */
 resource_size_t ntb_get_mw_size(struct ntb_device *ndev, unsigned int mw)
 {
-   if (mw > NTB_NUM_MW)
+   if (mw >= NTB_NUM_MW)
return 0;
 
return ndev->mw[mw].bar_sz;
@@ -394,7 +394,7 @@ resource_size_t ntb_get_mw_size(struct ntb_device *ndev, 
unsigned int mw)
  */
 void ntb_set_mw_addr(struct ntb_device *ndev, unsigned int mw, u64 addr)
 {
-   if (mw > NTB_NUM_MW)
+   if (mw >= NTB_NUM_MW)
return;
 
dev_dbg(&ndev->pdev->dev, "Writing addr %Lx to BAR %d\n", addr,
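
The underlying rule, spelled out (sketch):

	struct ntb_mw mw[NTB_NUM_MW];	/* valid indices: 0 .. NTB_NUM_MW - 1 */

	/*
	 * mw == NTB_NUM_MW passes the old "mw > NTB_NUM_MW" check but is
	 * already one past the end of the array, hence the ">=" above.
	 */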


Re: [PATCH 7/7] ARM: sunxi: olinuxino: Add muxing for the uart

2013-01-21 Thread Olof Johansson
On Mon, Jan 21, 2013 at 11:15:32PM +0100, Linus Walleij wrote:
> On Fri, Jan 18, 2013 at 10:30 PM, Maxime Ripard
>  wrote:
> 
> > Signed-off-by: Maxime Ripard 
> 
> All pinctrl and device tree patches applied to my allwinner branch in the
> pinctrl tree. Hope the ARM SoC can accept me poking around in your
> device trees.

Ah, my just-now sent reply was to an older version of this thread.

Sure, it's not a big deal that you're picking up the DT changes, but there
might end up being add/add conflicts if they are adding more stuff to the same
files. Not a huge deal, ideally we want to avoid them but a couple are ok.


-Olof


Re: [PATCH 0/3] pwm-backlight: add subdrivers & Tegra support

2013-01-21 Thread Thierry Reding
On Mon, Jan 21, 2013 at 05:18:11PM +0900, Alex Courbot wrote:
> Hi Thierry,
> 
> On Monday 21 January 2013 15:49:28 Thierry Reding wrote:
> > Eventually this should all be covered by the CDF, but since that's not
> > ready yet we want something ad-hoc to get the hardware supported. As
> > such I would like to see this go into some sort of minimalistic, Tegra-
> > specific display/panel framework. I'd prefer to keep the pwm-backlight
> > driver as simple and generic as possible, that is, a driver for a PWM-
> > controlled backlight.
> > 
> > Another advantage of moving this into a sort of display framework is
> > that it may help in defining the requirements for a CDF and that moving
> > the code to the CDF should be easier once it is done.
> > 
> > Last but not least, abstracting away the panel allows other things such
> > as physical dimensions and display modes to be properly encapsulated. I
> > think that power-on/off timing requirements for panels also belong to
> > this set since they are usually specific to a given panel.
> > 
> > Maybe adding these drivers to tegra-drm for now would be a good option.
> > That way the corresponding glue can be added without a need for inter-
> > tree dependencies.
> 
> IIRC (because that was a while ago already) having a Tegra-only display 
> framework is exactly what we wanted to avoid in the first place. This series 
> does nothing but leverage the callbacks mechanism that already exists in pwm-
> backlight and make it available to DT systems. If we start making a Tegra-
> specific solution, then other architectures will have to reinvent the wheel 
> again. I really don't think we want to go that way.
> 
> These patches only makes slight changes to pwm_bl.c and do not extend its 
> capabilities. I agree that a suitable solution will require the CDF, but in
> the meantime, let's go for the practical route instead of repeating the same 
> mistakes (i.e. architecture-specific frameworks) again.
> 
> There are certainly better ways to do this, but I'm not convinced at all that 
> a Tegra-only solution is one of them.

Well, your proposal is a Tegra-only solution as well. Anything we come
up with now will be Tegra-only because it will eventually be integrated
with the CDF.

Trying to come up with something generic would be counter-productive.
CDF *is* the generic solution. All we would be doing is add a competing
framework.

Thierry




[PATCH v1] net: net_cls: fd passed in SCM_RIGHTS datagram not set correctly

2013-01-21 Thread Daniel Wagner
From: Daniel Wagner 

Commit 6a328d8c6f03501657ad580f6f98bf9a42583ff7 changed the update
logic for the socket but did not update the SCM_RIGHTS path
as well. This patch is based on the net_prio fix, commit

48a87cc26c13b68f6cce4e9d769fcb17a6b3e4b8

net: netprio: fd passed in SCM_RIGHTS datagram not set correctly

A socket fd passed in a SCM_RIGHTS datagram was not getting
updated with the new tasks cgrp prioidx. This leaves IO on
the socket tagged with the old tasks priority.

To fix this add a check in the scm recvmsg path to update the
sock cgrp prioidx with the new tasks value.

Let's apply the same fix for net_cls.

Signed-off-by: Daniel Wagner 
Reported-by: Li Zefan 
Cc: "David S. Miller" 
Cc: "Eric W. Biederman" 
Cc: Al Viro 
Cc: John Fastabend 
Cc: Neil Horman 
Cc: net...@vger.kernel.org
Cc: cgro...@vger.kernel.org
---

v1: missing Sob added (d'oh)

 net/core/scm.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/core/scm.c b/net/core/scm.c
index 57fb1ee..905dcc6 100644
--- a/net/core/scm.c
+++ b/net/core/scm.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 
 
 /*
@@ -302,8 +303,10 @@ void scm_detach_fds(struct msghdr *msg, struct scm_cookie 
*scm)
}
/* Bump the usage count and install the file. */
sock = sock_from_file(fp[i], &err);
-   if (sock)
+   if (sock) {
sock_update_netprioidx(sock->sk, current);
+   sock_update_classid(sock->sk, current);
+   }
fd_install(new_fd, get_file(fp[i]));
}
 
-- 
1.8.0.rc0



Re: [PATCH 1/4] ACPI / PM: Make acpi_bus_init_power() more robust

2013-01-21 Thread Mika Westerberg
On Tue, Jan 22, 2013 at 03:09:01AM +0100, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki 
> 
> The ACPI specification requires the _PSC method to be present under
> a device object if its power state cannot be inferred from the states
> of power resources used by it (ACPI 5, Section 7.6.2).  However, it
> also requires that (for power states D0-D2 and D3hot) if the _PSn
> (n = 0, 1, 2, 3) method is present under the device object, it also
> must be executed after the power resources have been set
> appropriately for the device to go into power state Dn (D3 means
> D3hot in this case).  Thus it is not clear from the specification
> whether or not the _PSn method should be executed if the initial
> configuraion of power resources used by the device indicates power
> state Dn and the _PSC method is not present.
> 
> The current implementation of acpi_bus_init_power() is based on the
> assumption that it should not be necessary to execute _PSn in the
> above situation, but experience shows that in fact that assumption
> need not be satisfied.  For this reason, make acpi_bus_init_power()
> always execute _PSn if the initial configuration of device power
> resources indicates power state Dn.
> 
> Reported-by: Mika Westerberg 

You can add also,

Tested-by: Mika Westerberg 

if you like. Thanks for fixing this.


Re: [PATCH 2/3] tegra: pwm-backlight: add tegra pwm-bl driver

2013-01-21 Thread Thierry Reding
On Tue, Jan 22, 2013 at 12:24:34PM +0900, Alex Courbot wrote:
> On Tuesday 22 January 2013 01:46:33 Stephen Warren wrote:
> > >  arch/arm/boot/dts/tegra20-ventana.dts  |  18 +++-
> > >  arch/arm/configs/tegra_defconfig   |   1 +
> > >  drivers/video/backlight/Kconfig|   7 ++
> > >  drivers/video/backlight/pwm_bl.c   |   3 +
> > >  drivers/video/backlight/pwm_bl_tegra.c | 159
> > >  +
> > This should be at least 3 separate patches: (1) Driver code (2) Ventana
> > .dts file (3) Tegra defconfig.
> 
> Will do that.
> 
> > If this is Ventana-specific, this should have a vendor prefix; "nvidia,"
> > would be appropriate.
> > 
> > But, why is this Ventana-specific; surely it's at most panel-specific,
> > or perhaps even generic across any/most LCD panels?
> 
> Yes, we could use the panel model here instead. Not sure how many other 
> panels 
> follow the same powering sequence though.
> 
> Making it Ventana-specific would have allowed grouping all Tegra board 
> support 
> into the same driver, and considering that probably not many devices use the 
> same panels as we do this seemed to make sense at first.
> 
> > > + power-supply = <_bl_reg>;
> > 
> > "power" doesn't seem like a good regulator name; power to what? Is this
> > for the backlight, since I see there's a panel-supply below?
> > 
> > > + panel-supply = <_pnl_reg>;
> > > 
> > > + bl-gpio = <&gpio 28 0>;
> > > + bl-panel = <&gpio 10 0>;
> > 
> > GPIO names usually have "gpios" in their name, so I assume those should
> > be bl-enable-gpios, panel-enable-gpios?
> 
> Indeed, even though there is only one gpio here. Maybe we could group them 
> into a single property and retrieve them by index - that's what the DT GPIO 
> APIs seem to be designed for initially.
> 
> > > +static struct pwm_backlight_subdriver pwm_backlight_ventana_subdriver = {
> > > + .name = "pwm-backlight-ventana",
> > > + .init = init_ventana,
> > > + .exit = exit_ventana,
> > > + .notify = notify_ventana,
> > > + .notify_after = notify_after_ventana,
> > > +};
> > 
> > It seems like all of that code should be completely generic.
> 
> Sorry, I don't get your point here - could you elaborate?
> 
> > Rather than invent some new registration mechanism, if we need
> > board-/panel-/...-specific drivers, it'd be better to make each of those
> > specific drivers a full platform device in an of itself (i.e. regular
> > Linux platform device/driver, have its own probe(), etc.), and have
> > those specific drivers call into the base PWM backlight code, treating
> > it like a utility API.
> 
> That's what would make the most sense indeed, but would require some extra 
> changes in pwm-backlight and might go against Thierry's wish to keep it 
> simple. On the other hand I totally agree this would be more elegant. Every 
> pwm-backlight based driver would just need to invoke pwm_bl's probe/remove 
> function from its own. Thierry, would that be an acceptable alternative to 
> the 
> sub-driver thing despite the slightly deeper changes this involves?

I'm confused. Why would you want to call into pwm_bl directly? If we're
going to split this up into separate platform devices, why not look up a
given backlight device and use the backlight API on that? The pieces of
the puzzle are all there: you can use of_find_backlight_by_node() to
obtain a backlight device from a device tree node, so I'd expect the DT
to look something like this:

backlight: backlight {
compatible = "pwm-backlight";
...
};

panel: panel {
compatible = "...";
...
backlight = <&backlight>;
...
};

After that you can wire it up with host1x using something like:

host1x {
dc@5420 {
rgb {
status = "okay";

nvidia,panel = <&panel>;
};
};
};

Maybe with such a binding, we should move the various display-timings
properties to the panel node as well and have an API to retrieve them
for use by tegra-drm.
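
A panel or output driver could then resolve that phandle along these
lines (sketch; error handling and the exact call site are illustrative):

	struct device_node *np;
	struct backlight_device *bl;

	np = of_parse_phandle(dev->of_node, "backlight", 0);
	if (np) {
		bl = of_find_backlight_by_node(np);
		of_node_put(np);
		if (!bl)
			return -EPROBE_DEFER;

		bl->props.power = FB_BLANK_UNBLANK;
		backlight_update_status(bl);
	}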

Thierry




Re: [PATCH] staging/iio: Use correct argument for sizeof

2013-01-21 Thread Dan Carpenter
These two lines are what I meant.  Not the other stuff before.

> > More information about semantic patching is available at
> > http://coccinelle.lip6.fr/

regards,
dan carpenter


Re: [PATCH] powerpc/pasemi: Fix crash on reboot

2013-01-21 Thread Olof Johansson
Hi,

On Mon, Jan 21, 2013 at 7:23 PM, Steven Rostedt  wrote:
> commit f96972f2dc "kernel/sys.c: call disable_nonboot_cpus() in
> kernel_restart()"
>
> added a call to disable_nonboot_cpus() on kernel_restart(), which tries
> to shut down all the CPUs except the first one. The issue with the PA
> Semi is that it does not support CPU hotplug.
>
> When the call is made to __cpu_down(), it calls the notifiers
> CPU_DOWN_PREPARE, and then tries to take the CPU down.
>
> One of the notifiers to the CPU hotplug code, is the cpufreq. The
> DOWN_PREPARE will call __cpufreq_remove_dev() which calls
> cpufreq_driver->exit. The PA Semi exit handler unmaps regions of I/O
> that are used by an interrupt that goes off constantly
> (system_reset_common, but it goes off during normal system operations
> too). I'm not sure exactly what this interrupt does.

On this version of the power architecture, the system comes back
through the reset vector when returning from some of the lower-power
idle states, which should be why you see those exceptions go off.

Thanks for catching this. I have a system that I try booting a few
times every release cycle, but I must have missed checking if reboots
still work. Glad to see you're keeping yours alive, it's becoming a
collectible. :-)

[...]

> Cc: Olof Johansson 
> Signed-off-by: Steven Rostedt 

Acked-by: Olof Johansson 

Ben, please apply for 3.8.


-Olof


Re: [PATCH 20/33] net: Convert to devm_ioremap_resource()

2013-01-21 Thread Thierry Reding
On Mon, Jan 21, 2013 at 03:29:13PM -0500, David Miller wrote:
> From: Thierry Reding 
> Date: Mon, 21 Jan 2013 11:09:13 +0100
> 
> > Convert all uses of devm_request_and_ioremap() to the newly introduced
> > devm_ioremap_resource() which provides more consistent error handling.
> > 
> > devm_ioremap_resource() provides its own error messages so all explicit
> > error messages can be removed from the failure code paths.
> > 
> > Signed-off-by: Thierry Reding 
> 
> This won't compile if I apply it.
> 
> You really have to be clear when you submit patches like this.
> 
> Since you only CC:'d the networking developers for this one
> patch, there is _ZERO_ context for us to work with to understand
> what's going on.
> 
> You have to also CC: us on the other relevant changes and your
> "[PATCH 00/33]" posting that explains what is happening.

I planned to do so initially, but that yielded a Cc list of 156 people
and mailing lists, which I thought wasn't going to go down so well
either. In general I like Cc'ing everyone concerned on all patches of
the series, specifically for reasons of context. Some people have been
annoyed when I did so. Still, for small series where only a few dozen
people are concerned that seems to me to be the best way. But 156 email
addresses is a different story.

Either you add too many people or you don't add enough. Where do we draw
the line?

Thierry




Re: [PATCH v3 09/22] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task

2013-01-21 Thread Mike Galbraith
On Tue, 2013-01-22 at 11:20 +0800, Alex Shi wrote: 
> 
>  I just looked into the aim9 benchmark; in this case it forks 2000 tasks,
>  and after all tasks are ready, aim9 gives a signal and all tasks burst
>  awake and run until all are finished.
>  Since each of the tasks finishes very quickly, an imbalanced empty cpu
>  may go to sleep till a regular balancing gives it some new tasks. That
>  causes the performance drop, by causing more idle entering.
> >>>
> >>> Sounds like for AIM (and possibly for other really bursty loads), we
> >>> might want to do some load-balancing at wakeup time by *just* looking
> >>> at the number of running tasks, rather than at the load average. Hmm?
> >>>
> >>> The load average is fundamentally always going to run behind a bit,
> >>> and while you want to use it for long-term balancing, a short-term you
> >>> might want to do just a "if we have a huge amount of runnable
> >>> processes, do a load balancing *now*". Where "huge amount" should
> >>> probably be relative to the long-term load balancing (ie comparing the
> >>> number of runnable processes on this CPU right *now* with the load
> >>> average over the last second or so would show a clear spike, and a
> >>> reason for quick action).
> >>>
> >>
> >> Sorry for the late response!
> >>
> >> I just wrote a patch following your suggestion, but there is no clear
> >> improvement for this case.
> >> I also tried changing the burst checking interval, also with no clear help.
> >>
> >> If I totally give up runnable load in periodic balancing, the performance 
> >> can recover 60%
> >> of the loss.
> >>
> >> I will try to optimize wake up balancing over the weekend.
> >>
> > 
> > (btw, the time for the runnable avg to accumulate to 100% is 345 ms; to
> > 50%, 32 ms)
> > 
> > I have tried some tuning in both wake up balancing and regular
> > balancing. Yes, when using instant load weight (without runnable avg
> > engage), both in waking up, and regular balance, the performance recovered.
> > 
> > But with per_cpu nr_running tracking, it's hard to find an elegant way to
> > detect the burst, whether in wake up or in regular balance.
> > In waking up, the whole sd_llc domain cpus are candidates, so just
> > checking this_cpu is not enough.
> > In regular balance, this_cpu is the migration destination cpu, so checking
> > for the burst on that cpu is not useful. Instead, we need to check the
> > whole domain's increased task number.
> > 
> > So, guess 2 solutions for this issue.
> > 1, for quick wake up, we need to use instant load (same as current
> > balancing) to do balance; and for regular balance, we can record both
> > instant load and runnable load data for the whole domain, then decide
> > which one to use according to the task number increase in the domain
> > after tracking the whole domain.
> > 
> > 2, we can keep the current instant load balancing as the performance
> > balance policy, and use runnable load balancing in the power-friendly
> > policy. Since none of us found a performance benefit with runnable
> > load balancing on the hackbench/kbuild/aim9/tbench/specjbb benchmarks,
> > I prefer the 2nd.
> 
> 3, On the other hand, considering that the aim9 testing scenario is rare
> in real life (preparing thousands of tasks and then waking them all up at
> the same time), and that the runnable load avg includes useful running
> history info, the 5~7% performance drop on aim9 alone is not unacceptable.
> (kbuild/hackbench/tbench/specjbb have no clear performance change)
> 
> So we could live with this drop, with a reminder in the code. Any comments?

Hm.  A burst of thousands of tasks may be rare and perhaps even silly, but
what about small task bursts?  History is useless for bursts, they live
or die now: a modest gaggle of worker threads (NR_CPUS) for say a video
encoding job wakes in parallel, each handed a chunk of data to chew up
in parallel.  Double the scheduler latency of one worker (workers get
stacked because individuals don't historically fill a cpu), and you double
the latency for the entire job every time.

I think 2 is mandatory, keep both, and user picks his poison.

If you want max burst performance, you care about the here and now
reality the burst is waking into.  If you're running a google freight
train farm otoh, you may want some hysteresis so trains don't over-rev
the electric meter on every microscopic spike.  Both policies make
sense, but you can't have both performance profiles with either metric,
so choosing one seems doomed to failure.

Case in point: tick skew.  It was removed because synchronized ticking
saves power.. and then promptly returned under user control because the
power saving gain also inflicted serious latency pain.
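For what it's worth, a minimal sketch of what option 2 could look like as a
feature switch (the SCHED_FEAT name and the exact load fields here are my
assumptions, not code from this series):

	/* in features.h: default to instant load for burst performance */
	SCHED_FEAT(BURST_LOAD, true)

	/* pick the load metric according to the chosen policy */
	static inline unsigned long balance_load(struct rq *rq)
	{
		if (sched_feat(BURST_LOAD))
			return rq->load.weight;		/* instant load */
		return rq->cfs.runnable_load_avg;	/* tracked history */
	}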

-Mike

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 15/17] workqueue: remove global_cwq

2013-01-21 Thread Joonsoo Kim
On Wed, Jan 16, 2013 at 05:42:47PM -0800, Tejun Heo wrote:
> global_cwq is now nothing but a container for per-pcu standard

s/per-pcu/per-cpu/

> worker_pools.  Declare the worker pools directly as
> cpu/unbound_std_worker_pools[] and remove global_cwq.
> 
> * get_gcwq() is replaced with std_worker_pools() which returns the
>   pointer to the standard pool array for a given CPU.
> 
> * __alloc_workqueue_key() updated to use get_std_worker_pool() instead
>   of open-coding pool determination.
> 
> This is part of an effort to remove global_cwq and make worker_pool
> the top level abstraction, which in turn will help implementing worker
> pools with user-specified attributes.
> 
> Signed-off-by: Tejun Heo 
> ---
>  kernel/workqueue.c | 47 +--
>  1 file changed, 17 insertions(+), 30 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index d37db53..4bddf52 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -120,7 +120,6 @@ enum {
>   * W: workqueue_lock protected.
>   */
>  
> -struct global_cwq;
>  struct worker_pool;
>  
>  /*
> @@ -174,16 +173,6 @@ struct worker_pool {
>  };
>  
>  /*
> - * Global per-cpu workqueue.  There's one and only one for each cpu
> - * and all works are queued and processed here regardless of their
> - * target workqueues.
> - */
> -struct global_cwq {
> - struct worker_pool  pools[NR_STD_WORKER_POOLS];
> - /* normal and highpri pools */
> -} cacheline_aligned_in_smp;
> -
> -/*
>   * The per-CPU workqueue.  The lower WORK_STRUCT_FLAG_BITS of
>   * work_struct->data are used for flags and thus cwqs need to be
>   * aligned at two's power of the number of flag bits.
> @@ -277,8 +266,8 @@ EXPORT_SYMBOL_GPL(system_freezable_wq);
>  #include <trace/events/workqueue.h>
>  
>  #define for_each_std_worker_pool(pool, cpu)  \
> - for ((pool) = &get_gcwq((cpu))->pools[0];   \
> -  (pool) < &get_gcwq((cpu))->pools[NR_STD_WORKER_POOLS]; (pool)++)
> + for ((pool) = &std_worker_pools(cpu)[0];\
> +  (pool) < &std_worker_pools(cpu)[NR_STD_WORKER_POOLS]; (pool)++)
>  
>  #define for_each_busy_worker(worker, i, pos, pool)   \
>   hash_for_each(pool->busy_hash, i, pos, worker, hentry)
> @@ -454,19 +443,19 @@ static LIST_HEAD(workqueues);
>  static bool workqueue_freezing;  /* W: have wqs started 
> freezing? */
>  
>  /*
> - * The almighty global cpu workqueues.  nr_running is the only field
> - * which is expected to be used frequently by other cpus via
> - * try_to_wake_up().  Put it in a separate cacheline.
> + * The CPU standard worker pools.  nr_running is the only field which is
> + * expected to be used frequently by other cpus via try_to_wake_up().  Put
> + * it in a separate cacheline.
>   */
> -static DEFINE_PER_CPU(struct global_cwq, global_cwq);
> +static DEFINE_PER_CPU_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS],
> +   cpu_std_worker_pools);
>  static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, 
> pool_nr_running[NR_STD_WORKER_POOLS]);

Why is worker_pool defined with DEFINE_PER_CPU_ALIGNED?

This makes only worker_pool[0] aligned to a cacheline;
worker_pool[1] is not aligned to a cacheline.
Now, we have a spin_lock for each instance of worker_pool and
each one is an independent instance.
So, IMHO, it is better to align worker_pool[1] to a cacheline as well.

Thanks.
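For illustration, aligning the struct itself would pad every element of the
per-cpu array onto its own cacheline (my sketch, not code from this series;
most fields elided):

	struct worker_pool {
		spinlock_t		lock;	/* protects the pool */
		/* ... */
	} ____cacheline_aligned_in_smp;

	static DEFINE_PER_CPU(struct worker_pool [NR_STD_WORKER_POOLS],
			      cpu_std_worker_pools);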

>  /*
> - * Global cpu workqueue and nr_running counter for unbound gcwq.  The pools
> - * for online CPUs have POOL_DISASSOCIATED set, and all their workers have
> - * WORKER_UNBOUND set.
> + * Standard worker pools and nr_running counter for unbound CPU.  The pools
> + * have POOL_DISASSOCIATED set, and all workers have WORKER_UNBOUND set.
>   */
> -static struct global_cwq unbound_global_cwq;
> +static struct worker_pool unbound_std_worker_pools[NR_STD_WORKER_POOLS];
>  static atomic_t unbound_pool_nr_running[NR_STD_WORKER_POOLS] = {
>   [0 ... NR_STD_WORKER_POOLS - 1] = ATOMIC_INIT(0),   /* always 0 */
>  };
> @@ -477,17 +466,17 @@ static DEFINE_IDR(worker_pool_idr);
>  
>  static int worker_thread(void *__worker);
>  
> -static struct global_cwq *get_gcwq(unsigned int cpu)
> +static struct worker_pool *std_worker_pools(int cpu)
>  {
>   if (cpu != WORK_CPU_UNBOUND)
> - return &per_cpu(global_cwq, cpu);
> + return per_cpu(cpu_std_worker_pools, cpu);
>   else
> - return &unbound_global_cwq;
> + return unbound_std_worker_pools;
>  }
>  
>  static int std_worker_pool_pri(struct worker_pool *pool)
>  {
> - return pool - get_gcwq(pool->cpu)->pools;
> + return pool - std_worker_pools(pool->cpu);
>  }
>  
>  /* allocate ID and assign it to @pool */
> @@ -514,9 +503,9 @@ static struct worker_pool *worker_pool_by_id(int pool_id)
>  
>  static struct worker_pool *get_std_worker_pool(int cpu, bool highpri)
>  {
> - struct global_cwq 

Re: [PATCH] staging/iio: Use correct argument for sizeof

2013-01-21 Thread Dan Carpenter
On Mon, Jan 21, 2013 at 10:14:02PM +0100, Peter Huewe wrote:
> found with coccicheck
> sizeof when applied to a pointer typed expression gives the size of
> the pointer
> 

The original code is correct, in this case.  We're storing an array
of pointers and the last element in the array is a NULL.
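For illustration, the kind of pattern involved (made-up names, not the
actual staging/iio code):

	#define NUM_CHANNELS 16
	/* NULL-terminated array of pointers */
	struct chan_spec *channels[NUM_CHANNELS + 1];

	/* coccinelle flags sizeof on a pointer-typed expression here, but
	 * each slot really does hold a pointer, so this is intended: */
	memset(channels, 0, sizeof(channels[0]) * ARRAY_SIZE(channels));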

> The semantic patch that makes this output is available
> in scripts/coccinelle/misc/noderef.cocci.
> 
> More information about semantic patching is available at
> http://coccinelle.lip6.fr/

Can you remove those two boiler plate lines?  We all have google.

regards,
dan carpenter

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] regmap: debugfs: Fix compilation warning

2013-01-21 Thread Mark Brown
On Mon, Jan 21, 2013 at 03:36:55PM +0100, Vincent Stehlé wrote:
> This fixes the following compilation warning:

> - unsigned int i, ret;
> + unsigned int i, ret = 0;

This sort of fix is not a good idea; you're just shutting the warning up
without any analysis explaining why it's generated in error.  If
it's generating a spurious warning, that's a compiler bug.


signature.asc
Description: Digital signature


Re: [PATCH 05/15] ASoC: fsl: fiq and dma cannot both be modules

2013-01-21 Thread Mark Brown
On Tue, Jan 22, 2013 at 11:50:30AM +0800, Shawn Guo wrote:
> On Mon, Jan 21, 2013 at 05:15:58PM +, Arnd Bergmann wrote:

> > Without this patch, we cannot build the ARM 'allmodconfig', or
> > we get this error:

> > sound/soc/fsl/imx-pcm-dma.o: In function `init_module':
> > sound/soc/fsl/imx-pcm-dma.c:177: multiple definition of `init_module'
> > sound/soc/fsl/imx-pcm-fiq.o:sound/soc/fsl/imx-pcm-fiq.c:334: first defined 
> > here
> > sound/soc/fsl/imx-pcm-dma.o: In function `cleanup_module':
> > sound/soc/fsl/imx-pcm-dma.c:177: multiple definition of `cleanup_module'
> > sound/soc/fsl/imx-pcm-fiq.o:sound/soc/fsl/imx-pcm-fiq.c:334: first defined 
> > here

> I sent a fix [1] for that queued by Mark.

> Mark,

> Is the patch on the way to 3.8-rc?

Yes, should be.


signature.asc
Description: Digital signature


Re: [PATCH 0/3] ELF executable signing and verification

2013-01-21 Thread Rusty Russell
Vivek Goyal  writes:
> Hi,
>
> This is a very crude RFC for ELF executable signing and verification. This
> has been done along the lines of module signature verification.

Yes, but I'm the first to admit that's the wrong lines.

The reasons we didn't choose that for module signatures:
1) I was unaware of it,
2) We didn't have a file descriptor in the module syscall, and
3) It needs attributes, and we don't understand xattrs in cpio (though
   bsdcpio does).

#1 and #2 are no longer true; #3 is a simple matter of coding.

Since signing binaries is the New Hotness, I'd prefer not to keep
reiterating this discussion every month.  Let's beef up IMA instead...

Thanks,
Rusty.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] MODSIGN: Warn when module signature checking fails

2013-01-21 Thread Rusty Russell
Chris Samuel  writes:
> /* Please CC me, I'm not on LKML */
>
> On 21/01/13 10:36, Rusty Russell wrote:
>
>> We have errnos for a reason; let's not pollute the kernel logs.  That's
>> a userspace job.
>
> Fair enough.
>
>> This part is OK, but I'll add mod->name to the printk.
>
> Sounds good.
>
>> How's this:
>
> Looks fine, modulo the lack of mod->name as Stephen mentioned.

Yeah, here's what is now in Linus' tree:

commit 64748a2c9062da0c32b59c1b368a86fc4613b1e1
Author: Rusty Russell 
Date:   Mon Jan 21 17:03:02 2013 +1030

module: printk message when module signature fail taints kernel.

Reported-by: Chris Samuel 
Signed-off-by: Rusty Russell 

diff --git a/kernel/module.c b/kernel/module.c
index eab0827..e69a5a6 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3192,8 +3192,13 @@ again:
 
 #ifdef CONFIG_MODULE_SIG
mod->sig_ok = info->sig_ok;
-   if (!mod->sig_ok)
+   if (!mod->sig_ok) {
+   printk_once(KERN_NOTICE
+   "%s: module verification failed: signature and/or"
+   " required key missing - tainting kernel\n",
+   mod->name);
add_taint_module(mod, TAINT_FORCED_MODULE);
+   }
 #endif
 
/* Now module is in final location, initialize linked lists, etc. */
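For reference, with that hunk applied, loading a module that fails
verification logs a single line of this form (module name here made up):

	mymod: module verification failed: signature and/or required key
	missing - tainting kernel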
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: One of these things (CONFIG_HZ) is not like the others..

2013-01-21 Thread Santosh Shilimkar

On Tuesday 22 January 2013 04:53 AM, Tony Lindgren wrote:

* Russell King - ARM Linux  [130121 13:07]:


As for Samsung and the rest I can't comment.  The original reason OMAP
used this though was because the 32768Hz counter can't produce 100Hz
without a .1% error - too much error under pre-clocksource
implementations for timekeeping.  Whether that's changed with the
clocksource/clockevent support needs to be checked.


Yes that's why HZ was originally set to 128. That value (or some multiple)
still makes sense when the 32 KiHZ clock source is being used. Of course
we should rely on the local timer when running for the SoCs that have
them.


This is right. It was only because of the drift associated with the
32 KHz clock. Even on SoCs where local timers are available, for power
management reasons we need to switch to the 32 KHz clocked device in
low power states. Hence the HZ value should be a multiple of 32 on
OMAP.
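As a sanity check on that error figure (my arithmetic, not from the
thread): at HZ=100 a tick is 32768/100 = 327.68 cycles of the 32768 Hz
counter, which must be programmed as 328 cycles, so every tick runs
328/327.68 - 1 ~= 0.098% long, i.e. the ~.1% error mentioned above. With
HZ=128 the divisor is exactly 32768/128 = 256 and there is no drift.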

Regards
Santosh

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


HID: clean up quirk for Sony RF receivers

2013-01-21 Thread Fernando Luis Vázquez Cao
Document what the fix-up does and make it more robust by ensuring
that it is only applied to the USB interface that corresponds to the
mouse (sony_report_fixup() is called once per interface during probing).

Cc: linux-in...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Fernando Luis Vazquez Cao 
Signed-off-by: Jiri Kosina 
---

diff -urNp linux-3.8-rc4-orig/drivers/hid/hid-sony.c 
linux-3.8-rc4/drivers/hid/hid-sony.c
--- linux-3.8-rc4-orig/drivers/hid/hid-sony.c   2013-01-22 14:21:13.380552283 
+0900
+++ linux-3.8-rc4/drivers/hid/hid-sony.c2013-01-22 14:41:56.316934002 
+0900
@@ -43,9 +43,19 @@ static __u8 *sony_report_fixup(struct hi
 {
struct sony_sc *sc = hid_get_drvdata(hdev);
 
-   if ((sc->quirks & VAIO_RDESC_CONSTANT) &&
-   *rsize >= 56 && rdesc[54] == 0x81 && rdesc[55] == 0x07) 
{
+   /*
+* Some Sony RF receivers wrongly declare the mouse pointer as a
+* constant non-data variable.
+*/
+   if ((sc->quirks & VAIO_RDESC_CONSTANT) && *rsize >= 56 &&
+   /* usage page: generic desktop controls */
+   /* rdesc[0] == 0x05 && rdesc[1] == 0x01 && */
+   /* usage: mouse */
+   rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
+   /* input (usage page for x,y axes): constant, variable, relative */
+   rdesc[54] == 0x81 && rdesc[55] == 0x07) {
hid_info(hdev, "Fixing up Sony RF Receiver report 
descriptor\n");
+   /* input: data, variable, relative */
rdesc[55] = 0x06;
}
 


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH linux-next] et131x: Promote staging et131x driver to drivers/net

2013-01-21 Thread Dan Carpenter
On Mon, Jan 21, 2013 at 11:44:55PM +, Mark Einon wrote:
> On 19 January 2013 11:03, Dan Carpenter  wrote:
> 
> >
> > et131x_get_regs() has endian bugs calling et131x_mii_read().
> >
> 
> 
> Hi Dan,
> 
> Could you be a bit more descriptive about the issues you think there
> are with these calls?
> 

Sorry, that was sloppy on my part.  Here's what I meant:

et131x_mii_read(adapter, MII_BMCR, (u16 *)&regs_buff[num++]);

That puts the number in the 2 high bytes on big endian systems, so it
works for little endian but not for big endian.  It should be something like:

u16 tmp;

et131x_mii_read(adapter, MII_BMCR, &tmp);
regs_buff[num++] = tmp;
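To make the failure mode concrete, a tiny standalone demo (my example, not
driver code):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t regs_buff[1] = { 0 };

		/* the buggy pattern: a u16 store through a cast */
		*(uint16_t *)&regs_buff[0] = 0x1234;

		/* prints 0x00001234 on little endian, 0x12340000 on big endian */
		printf("0x%08x\n", regs_buff[0]);
		return 0;
	}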

regards,
dan carpenter

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] usb: musb: core: Add missing newline character

2013-01-21 Thread kishon

Hi,

On Monday 21 January 2013 10:52 PM, Sergei Shtylyov wrote:

Hello.

On 01/21/2013 05:52 PM, Kishon Vijay Abraham I wrote:


No functional change. Just added a missing newline character.



Signed-off-by: Kishon Vijay Abraham I 
---
  drivers/usb/musb/musb_core.c |2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)



diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
index f1c6c54..c534131 100644
--- a/drivers/usb/musb/musb_core.c
+++ b/drivers/usb/musb/musb_core.c
@@ -2301,7 +2301,7 @@ static int __init musb_init(void)
pr_info("%s: version " MUSB_VERSION ", "
"?dma?"
", "
-   "otg (peripheral+host)",
+   "otg (peripheral+host)\n",
musb_driver_name);
	return platform_driver_register(&musb_driver);
  }


Refresh your tree, a similar patch is already upstream. :-)


Cool. Missed that one.

Thanks
Kishon
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [v3 2/2] ARM: tegra: Skip scu_enable(scu_base) if not Cortex A9

2013-01-21 Thread Santosh Shilimkar

On Tuesday 22 January 2013 11:22 AM, Hiroshi Doyu wrote:

Skip scu_enable(scu_base) if CPU is not Cortex A9 with SCU.

Signed-off-by: Hiroshi Doyu 
---

Looks fine. I will also update OMAP code with the new
interface. Thanks.

For the patch,
Acked-by: Santosh Shilimkar
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 08/13] perf gtk/browser: Add support for event group view

2013-01-21 Thread Namhyung Kim
Hi Arnaldo,

On Wed, 16 Jan 2013 15:28:59 -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Jan 16, 2013 at 07:25:02PM +0900, Namhyung Kim escreveu:
>> Adding current header name to event name will fix the problem but it
>> probably occupies too much screen width especially for long named
>> tracepoint or PMU-specific events like
>> "compaction:mm_compaction_isolate_migratepages".
>> 
>>   Overhead/branches  Overhead/branch-misses  sys/branches  sys/branch-misses 
>>  usr/branches  usr/branch-misses  Command  Shared Object   Symbol
>>   .  ..    . 
>>    .  ...  .  ...
>>  98.32%  31.16%  0.00%  0.00%  98.32%  31.16%  a.out  a.out  [.] foo
>> 
>> 
>> If you have a better idea or other way to place the cursor without
>> printing bogus 0.00% on GTK, please let me know.
>
> Compacting it using an extra line:
>
>  Overhead...  sys  usr
>  branches  branch-misses  branches  branch-misses  branches  branch-misses  
> Command  Shared Object   Symbol
>    .    .    .  
> ...  .  ...
>  98.32%  31.16%  0.00%  0.00%  98.32%  31.16%  a.out  a.out  [.] foo
>
> It could even use some reference:
>
>  Overhead.  sys.  usr...
>  branches(1)  branch-misses(2)  (1)(2)(1) (2) Command  Shared 
> Object   Symbol
>  ...    .  .  ..  ..  ...  
> .  ...
>   98.32%  31.16%  0.00%  0.00%  98.32%  31.16%  a.out  a.out  [.] foo
>
> The (1) could be done with a superscript number or even just using a
> different fore/background color, to use fewer columns.
>
> One other way, that would scale for really long event names, would be to
> have the event list in the first few lines and then:
>
> Events:
> 1. branches
> 2. branch-misses
>
>  Overhead..  sys.  usr...
>  (1) (2) (1)(2)(1) (2) Command  Shared Object   Symbol
>  ..  ..  .  .  ..  ..  ...  .  ...
>  98.32%  31.16%  0.00%  0.00%  98.32%  31.16%   a.out   a.out  [.] foo
>
> I think you could switch to/from each of these forms using a hotkey,
> that would influence how the hist_entry__snprintf() routine would work,
> either using perf_evsel__name() or evsel->idx :-)
>
> This way if at some point the user wants to expand/compress the lines,
> it will be possible to do so quickly, just pressing the hotkey.

By saying hotkey, I guess you meant to use it for TUI.  However TUI
doesn't provide those header lines. ;-)

As this extra line (and hotkey) thing might add complexity to the
patchset, I'd like to separate it to a different work and to focus on
the basic feature with current behavior.  Is it acceptable for you?

Thanks,
Namhyung
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2] HID: add support for Sony RF receiver with USB product id 0x0374

2013-01-21 Thread Fernando Luis Vazquez Cao

On Wed, 2013-01-16 at 11:44 +0100, Jiri Kosina wrote:

On Wed, 16 Jan 2013, Fernando Luis Vazquez Cao wrote:

> I noticed that the patch was tagged "for-3.9". Does this mean
> that it is too late to get it merged during the current release
> cycle?

I currently don't have anything queued for 3.8, and this particular patch
doesn't justify a separate pull request.

Once it's in Linus' tree, it can be easily pushed out to all existing
-stable branches (including 3.8-stable, once it's created).

If I am going to be sending a pull request for 3.8 to Linus still due to
some important bugfix, I will be including this.


Ok, thank you for the explanation. I really appreciate it.



> If possible, I would like to get it backported to 3.7-stable (and
> possibly 3.2 stable), since without it a whole family of Sony desktop
> computers is unusable under Linux out of the box. Should I do it myself
> or do you have a process in place for HID stable patches?

If the patch had

Cc: sta...@vger.kernel.org

in it, it'd be picked for -stable queue automatically.


I considered doing that but I thought an upstream commit
ID was needed.



Otherwise, anyone is free to take it once it's in Linus' tree and sent
to to sta...@vger.kernel.org for inclusion.



So it is the standard procedure. I just wanted to make
sure whether you wanted to have all the -stable patches
funnelled through you. I will send the patch to -stable
directly and Cc you as soon as it makes it into Linus'
tree.


By the way, I will be replying to this email with a
follow-up patch that I forgot to send the last time
around. It is just documentation for the quirk.

Thanks,
Fernando
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 0/2] ARM: Exynos5250: Enabling samsung usb phy

2013-01-21 Thread Vivek Gautam
Hi Kukjin,


On Tue, Jan 22, 2013 at 10:36 AM, Kukjin Kim  wrote:
> Felipe Balbi wrote:
>>
>> On Fri, Jan 18, 2013 at 03:10:13PM +0200, Felipe Balbi wrote:
>> > Hi,
>> >
>> > On Tue, Dec 18, 2012 at 09:09:40PM +0530, Vivek Gautam wrote:
>> > > This patch-set enables the samsung-usbphy driver on exynos5250,
>> > > which enables the support for USB2 type and USB3 type phys.
>> > > The corresponding phy driver patches are available at:
>> > >  1) https://lkml.org/lkml/2012/12/18/201
>> > >  2) https://lists.ozlabs.org/pipermail/devicetree-discuss/2012-
>> December/024559.html
>> > >
>> > > Tested this patch-set on exynos5250 with following patch-sets for
>> > > USB 2.0 and USB 3.0:
>> > >  - https://patchwork.kernel.org/patch/1794651/
>> > >  - https://lkml.org/lkml/2012/12/18/201
>> > >  - https://lists.ozlabs.org/pipermail/devicetree-discuss/2012-
>> December/024559.html
>> > >  - http://comments.gmane.org/gmane.linux.usb.general/76352
>> > >  - https://lkml.org/lkml/2012/12/13/492
>> > >
>> > > Vivek Gautam (2):
>> > >   ARM: Exynos5250: Enabling samsung-usbphy driver
>> > >   ARM: Exynos5250: Enabling USB 3.0 phy for samsung-usbphy driver
>> >
>> > What should I do with this series ? Is it ready to apply ? If it is,
>> > then please resend with Kukjim's Acked-by.
>>
>> actually, now that I look again, it's all under arch/arm/, so Kukjim can
>> take all of those through his tree ;-)
>>
> Yes, once Vivek addresses comments from Sylwester, let me pick it up into
> the Samsung tree :-)
>

Sure, I shall update this patch series based on the separate drivers for
the USB 3.0 PHY controller,
as posted in the following patch series:
[PATCH v3 0/2] Adding USB 3.0 DRD-phy support for exynos5250



-- 
Thanks & Regards
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[v3 2/2] ARM: tegra: Skip scu_enable(scu_base) if not Cortex A9

2013-01-21 Thread Hiroshi Doyu
Skip scu_enable(scu_base) if CPU is not Cortex A9 with SCU.

Signed-off-by: Hiroshi Doyu 
---
 arch/arm/mach-tegra/platsmp.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mach-tegra/platsmp.c b/arch/arm/mach-tegra/platsmp.c
index 689ee4b..8853bd2 100644
--- a/arch/arm/mach-tegra/platsmp.c
+++ b/arch/arm/mach-tegra/platsmp.c
@@ -38,7 +38,6 @@
 extern void tegra_secondary_startup(void);
 
 static cpumask_t tegra_cpu_init_mask;
-static void __iomem *scu_base = IO_ADDRESS(TEGRA_ARM_PERIF_BASE);
 
 #define EVP_CPU_RESET_VECTOR \
(IO_ADDRESS(TEGRA_EXCEPTION_VECTORS_BASE) + 0x100)
@@ -187,7 +186,8 @@ static void __init tegra_smp_prepare_cpus(unsigned int 
max_cpus)
/* Always mark the boot CPU (CPU0) as initialized. */
	cpumask_set_cpu(0, &tegra_cpu_init_mask);
 
-   scu_enable(scu_base);
+   if (scu_a9_has_base())
+   scu_enable(IO_ADDRESS(scu_a9_get_base()));
 }
 
 struct smp_operations tegra_smp_ops __initdata = {
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[v3 1/2] ARM: Add API to detect SCU base address from CP15

2013-01-21 Thread Hiroshi Doyu
Add API to detect SCU base address from CP15.

Signed-off-by: Hiroshi Doyu 
---
Update: Use Russell's suggestion,
http://lists.infradead.org/pipermail/linux-arm-kernel/2013-January/143321.html
---
 arch/arm/include/asm/smp_scu.h |   17 +
 1 file changed, 17 insertions(+)

diff --git a/arch/arm/include/asm/smp_scu.h b/arch/arm/include/asm/smp_scu.h
index 4eb6d00..006f026 100644
--- a/arch/arm/include/asm/smp_scu.h
+++ b/arch/arm/include/asm/smp_scu.h
@@ -6,6 +6,23 @@
 #define SCU_PM_POWEROFF3
 
 #ifndef __ASSEMBLER__
+
+#include <asm/cputype.h>
+
+static inline bool scu_a9_has_base(void)
+{
+   return read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9;
+}
+
+static inline unsigned long scu_a9_get_base(void)
+{
+   unsigned long pa;
+
+   asm("mrc p15, 4, %0, c15, c0, 0" : "=r" (pa));
+
+   return pa;
+}
+
 unsigned int scu_get_core_count(void __iomem *);
 void scu_enable(void __iomem *);
 int scu_power_mode(void __iomem *, unsigned int);
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH] lib: vsprintf: Add %pa format specifier for phys_addr_t types

2013-01-21 Thread Stepan Moskovchenko
Add the %pa format specifier for printing a phys_addr_t
type, since the physical address size on some platforms
can vary based on build options, regardless of the native
integer type.

Signed-off-by: Stepan Moskovchenko 
---
 Documentation/printk-formats.txt |   13 ++---
 lib/vsprintf.c   |7 +++
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/Documentation/printk-formats.txt b/Documentation/printk-formats.txt
index 8ffb274..dbc977b 100644
--- a/Documentation/printk-formats.txt
+++ b/Documentation/printk-formats.txt
@@ -53,6 +53,13 @@ Struct Resources:
For printing struct resources. The 'R' and 'r' specifiers result in a
printed resource with ('R') or without ('r') a decoded flags member.

+Physical addresses:
+
+   %pa 0x01234567 or 0x0123456789abcdef
+
+   For printing a phys_addr_t type, which can vary based on build options,
+   regardless of the width of the CPU data path. Passed by reference.
+
 Raw buffer as a hex string:
	%*ph	00 01 02  ...  3f
%*phC   00:01:02: ... :3f
@@ -150,9 +157,9 @@ s64 SHOULD be printed with %lld/%llx, (long long):
printk("%lld", (long long)s64_var);

 If  is dependent on a config option for its size (e.g., sector_t,
-blkcnt_t, phys_addr_t, resource_size_t) or is architecture-dependent
-for its size (e.g., tcflag_t), use a format specifier of its largest
-possible type and explicitly cast to it.  Example:
+blkcnt_t, resource_size_t) or is architecture-dependent for its size (e.g.,
+tcflag_t), use a format specifier of its largest possible type and explicitly
+cast to it.  Example:

printk("test: sector number/total blocks: %llu/%llu\n",
(unsigned long long)sector, (unsigned long long)blockcount);
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 39c99fe..9b02a71 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1022,6 +1022,7 @@ int kptr_restrict __read_mostly;
  *  N no separator
  *The maximum supported length is 64 bytes of the input. Consider
  *to use print_hex_dump() for the larger input.
+ * - 'a' For a phys_addr_t type (passed by reference)
  *
  * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
  * function pointers are really function descriptors, which contain a
@@ -1112,6 +1113,12 @@ char *pointer(const char *fmt, char *buf, char *end, 
void *ptr,
return netdev_feature_string(buf, end, ptr, spec);
}
break;
+   case 'a':
+   spec.flags |= SPECIAL | SMALL | ZEROPAD;
+   spec.field_width = sizeof(phys_addr_t) * 2;
+   spec.base = 16;
+   return number(buf, end,
+ (unsigned long long) *((phys_addr_t *)ptr), spec);
}
spec.flags |= SMALL;
if (spec.field_width == -1) {
--
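For reference, an illustrative use of the new specifier once this is applied
(the surrounding function is made up):

	#include <linux/printk.h>

	static void report_base(phys_addr_t base)
	{
		/* passed by reference, per the documentation above */
		printk(KERN_INFO "mapping at %pa\n", &base);
	}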
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 13/15] USB: ehci: make orion and mxc bus glues coexist

2013-01-21 Thread Shawn Guo
On Tue, Jan 22, 2013 at 02:11:18PM +0800, Shawn Guo wrote:
> Alan,
> 
> Thanks for the patch.  I just gave it a try.  The USB Host port still
> works for me with a couple of fixes on your changes integrated (one
> for compiling and the other for probing).  So you have my ACK with
> the changes below rolled into your patch.
> 
> Acked-by: Shawn Guo 
> 
Sorry.  I meant a Test tag.

Tested-by: Shawn Guo 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 13/15] USB: ehci: make orion and mxc bus glues coexist

2013-01-21 Thread Shawn Guo
On Mon, Jan 21, 2013 at 09:37:42PM +, Arnd Bergmann wrote:
> > Arnd, please take a look at
> > 
> > http://marc.info/?l=linux-usb&m=135843716515529&w=2
> > 
> > I can't test it easily, not being set up for cross compilation.  I'm 
> > waiting to hear from anybody whether it works before submitting it.
> > (There's also a report of memory corruption involving a similar patch 
> > for ehci-omap; it hasn't been tracked down yet.)
> 
> Your patch looks good to me, but it also seems to do some other
> changes that are not required to fix the problem but could wait
> for 3.9 instead. You definitely have my Ack if you are willing
> to take it for 3.8 though.
> 
> Shawn or Sascha should be able to test it.
> 
Alan,

Thanks for the patch.  I just gave it a try.  The USB Host port still
works for me with a couple of fixes on your changes integrated (one
for compiling and the other for probing).  So you have my ACK with
the changes below rolled into your patch.

Acked-by: Shawn Guo 

---8<

diff --git a/drivers/usb/host/ehci-mxc.c b/drivers/usb/host/ehci-mxc.c
index 177b354..a685945 100644
--- a/drivers/usb/host/ehci-mxc.c
+++ b/drivers/usb/host/ehci-mxc.c
@@ -37,7 +37,7 @@

 #define DRIVER_DESC "Freescale On-Chip EHCI Host driver"

-static const char hcd_name[] = "ehci-mxc";
+static const char hcd_name[] = "mxc-ehci";

 #define ULPI_VIEWPORT_OFFSET   0x170

@@ -48,7 +48,7 @@ struct ehci_mxc_priv {
 static struct hc_driver __read_mostly ehci_mxc_hc_driver;

 static const struct ehci_driver_overrides ehci_mxc_overrides __initdata = {
-   .extra_priv_size =  sizeof(struct ehci_mxc_priv);
+   .extra_priv_size =  sizeof(struct ehci_mxc_priv),
 };

 static int ehci_mxc_drv_probe(struct platform_device *pdev)

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCHSET] workqueue: remove gcwq and make worker_pool the only backend abstraction

2013-01-21 Thread Joonsoo Kim
Hello, Tejun.

On Wed, Jan 16, 2013 at 05:42:32PM -0800, Tejun Heo wrote:
> Hello,
> 
> Currently, on the backend side, there are two layers of abstraction.
> For each CPU and the special unbound wq-specific CPU, there's one
> global_cwq.  gcwq in turn hosts two worker_pools - one for normal
> priority, the other for highpri - each of which actually serves the
> work items.
> 
> worker_pool is the later addition to support separate pool of workers
> for highpri workqueues.  Stuff was moved to worker_pool on as-needed
> basis and, as a result, the two pools belonging to the same CPU share
> some stuff in the gcwq - most notably the lock and the hash table for
> work items currently being executed.
> 
> It seems like we'll need to support worker pools with custom
> attributes, which is planned to be implemented as extra worker_pools
> for the unbound CPU.  Removing gcwq and having worker_pool as the top
> level abstraction makes things much simpler for such designs.  Also,
> there's scalability benefit to not sharing locking and busy hash among
> different worker pools as worker pools w/ custom attributes are likely
> to have widely different memory / cpu locality characteristics.

Could you tell me why extra worker_pools with custom attributes are needed?
Or could you give a reference link for this?

Thanks.

> In retrospect, it might have been less churn if we just converted to
> have multiple gcwqs per CPU when we were adding highpri pool support.
> Oh well, such is life and the name worker_pool fits the role much
> better anyway at this point.
> 
> This patchset moves the remaining stuff in gcwq to worker_pool and
> then removes gcwq entirely making worker_pool the top level and the
> only backend abstraction.  In the process, this patchset also prepares
> for later addition of worker_pools with custom attributes.
> 
> This patchset shouldn't introduce any visible differences outside of
> workqueue proper and contains the following 17 patches.
> 
>  0001-workqueue-unexport-work_cpu.patch
>  0002-workqueue-use-std_-prefix-for-the-standard-per-cpu-p.patch
>  0003-workqueue-make-GCWQ_DISASSOCIATED-a-pool-flag.patch
>  0004-workqueue-make-GCWQ_FREEZING-a-pool-flag.patch
>  0005-workqueue-introduce-WORK_OFFQ_CPU_NONE.patch
>  0006-workqueue-add-worker_pool-id.patch
>  0007-workqueue-record-pool-ID-instead-of-CPU-in-work-data.patch
>  0008-workqueue-move-busy_hash-from-global_cwq-to-worker_p.patch
>  0009-workqueue-move-global_cwq-cpu-to-worker_pool.patch
>  0010-workqueue-move-global_cwq-lock-to-worker_pool.patch
>  0011-workqueue-make-hotplug-processing-per-pool.patch
>  0012-workqueue-make-freezing-thawing-per-pool.patch
>  0013-workqueue-replace-for_each_worker_pool-with-for_each.patch
>  0014-workqueue-remove-worker_pool-gcwq.patch
>  0015-workqueue-remove-global_cwq.patch
>  0016-workqueue-rename-nr_running-variables.patch
>  0017-workqueue-post-global_cwq-removal-cleanups.patch
> 
> 0001-0002 are misc preps.
> 
> 0003-0004 move flags from gcwq to pool.
> 
> 0005-0007 make work->data off-queue backlink point to worker_pools
> instead of CPUs, which is necessary to move busy_hash to pool.
> 
> 0008-0010 move busy_hash, cpu and locking to pool.
> 
> 0011-0014 make operations per-pool and remove gcwq usages.
> 
> 0015-0017 remove gcwq and cleanup afterwards.
> 
> This patchset is on top of wq/for-3.9 023f27d3d6f ("workqueue: fix
> find_worker_executing_work() brekage from hashtable conversion") and
> available in the following git branch.
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-3.9-remove-gcwq
> 
> Thanks.
> 
>  include/linux/workqueue.h|   17
>  include/trace/events/workqueue.h |2
>  kernel/workqueue.c   |  897 
> +++
>  3 files changed, 461 insertions(+), 455 deletions(-)
> 
> --
> tejun
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] Input: gpio_keys: Fix suspend/resume press event lost

2013-01-21 Thread NeilBrown
On Mon, 21 Jan 2013 15:57:18 -0800 Dmitry Torokhov
 wrote:

> Hi Ivan,
> 
> On Mon, Jan 21, 2013 at 03:15:14PM +0200, Ivan Khoronzhuk wrote:
> > Rebased on linux_omap/master.
> > 
> > During suspend/resume the key press can be lost if the time taken by
> > the resume sequence is significant.
> > 
> > If the press event cannot be remembered, then the driver can read the
> > current button state only at interrupt handling time. But in some
> > cases, when the time between the IRQ and the IRQ handler is significant,
> > we can read an incorrect state. As a particular case, when the device is
> > in suspend and we press a wakeup-capable key and release it again in a
> > jiffy, the interrupt handler reads the 'up' state, but the interrupt
> > source is in fact the press. As a result, in an OS like android, we
> > resume and then suspend right away because the key state has not changed.
> > 
> > This patch adds to the gpio_keys framework the ability to recover a lost
> > key-press event on resume. The variable "key_pressed" from the
> > gpio_button_data structure is not used for gpio keys, it is only used
> > for gpio irq keys, so it is logically reused to remember a press lost
> > while resuming.
> 
> The same could happen if you delay processing of interrupt long enough
> during normal operation. If key is released by the time you get around
> to reading it you will not see a key press.
> 
> To me this sounds like you need to speed up your resume process so that
> you can start serving interrupts quicker.
> 

Agreed.  When I was looking at this I found that any genuine button press
would have at least 70msec between press and release, while the device could
wake up to the point of being able to handle interrupts in about 14msec.
That is enough of a gap to make it pointless to try to 'fix' the code.

With enough verbose debugging enabled that 14msec can easily grow to
hundreds, but then if you have debugging enabled you can discipline yourself
to hold the button for longer.

Ivan: What sort of delay are you seeing between the button press and the
interrupt routine running?  And can you measure how long the button is
typically down for?
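For reference, one crude way to measure both from the ISR (sketch only; the
gpio number, polarity and helpers are assumed, not gpio-keys code):

	static ktime_t press_time;

	static irqreturn_t button_isr(int irq, void *dev_id)
	{
		if (gpio_get_value(BUTTON_GPIO)) {	/* pressed */
			press_time = ktime_get();
		} else {				/* released */
			pr_info("button held for %lld us\n",
				ktime_to_us(ktime_sub(ktime_get(),
						      press_time)));
		}
		return IRQ_HANDLED;
	}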

NeilBrown


signature.asc
Description: PGP signature


RE: [PATCH v6 0/4] Adding usb2.0 host-phy support for exynos5250

2013-01-21 Thread Kukjin Kim
Vivek Gautam wrote:
> 
> Changes from v5:
>  - Rebased on top of latest patches:
> usb: phy: samsung: Introducing usb phy driver for hsotg (v9)
> usb: phy: samsung: Add support to set pmu isolation (v6)
>As a result adding hostphy enable mask and hostphy register offsets
>to driver data in order to access the HOSTPHY CONTROL register.
> 
>  - Adding member 'otg' to struct samsung-usbphy so that its consumers
>can call otg->set_host so as to make 'phy' aware of the consumer type:
>   HOST/DEVICE
> 
>  - Adding 'otg' to 'struct s5p_ehci_hcd' and 'struct exynos_ohci_hcd'
>which keeps track of 'otg' of the controllers' phy. This then sets
>the host.
> 
>  - Moved samsung_usbphy_set_type() calls from ehci-s5p and ohci-exynos
>to phy driver itself where based on phy_type it is called.
> 
>  - Added separate macro definition for USB20PHY_CFG register to select
>between host/device type usb link.
> 
>  - Removing unnecessary argument 'phy_type' from
> samsung_usbphy_set_type()
>and samsung_usbphy_cfg_sel().
> 
>  - Addressed few nits:
>   -- added macro for 'KHZ'
>   -- removing useless 'if' from samsung_usbphy_cfg_sel()
>   -- keeping the place of clk_get intact and requesting driver
>  data before that.
> 
> Vivek Gautam (4):
>   ARM: EXYNOS: Update & move usb-phy types to generic include layer
>   usb: phy: samsung: Add host phy support to samsung-phy driver
>   USB: ehci-s5p: Add phy driver support
>   USB: ohci-exynos: Add phy driver support
> 
>  .../devicetree/bindings/usb/samsung-usbphy.txt |   12 +-
>  drivers/usb/host/ehci-s5p.c|   81 +++-
>  drivers/usb/host/ohci-exynos.c |   85 +++-
>  drivers/usb/phy/Kconfig|2 +-
>  drivers/usb/phy/samsung-usbphy.c   |  512
++--
>  include/linux/usb/samsung_usb_phy.h|   16 +
>  6 files changed, 635 insertions(+), 73 deletions(-)
>  create mode 100644 include/linux/usb/samsung_usb_phy.h
> 
> --
> 1.7.6.5

Looks good to me,

Felipe and Greg, I don't know who should take this series anyway, feel free
to add  my ack:

Acked-by: Kukjin Kim 

Thanks.

- Kukjin

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH v2] media: adv7343: accept configuration through platform data

2013-01-21 Thread Prabhakar Lad
From: Lad, Prabhakar 

The current code was implemented with some default configurations;
this default configuration works on one board and doesn't work on others.

This patch accepts the configuration through platform data and configures
the encoder depending on the data passed.

Signed-off-by: Lad, Prabhakar 
Cc: Hans Verkuil 
Cc: Laurent Pinchart 
Cc: Mauro Carvalho Chehab 
---
  Changes for v2:
  1: Fixed review comments pointed by Hans.

 drivers/media/i2c/adv7343.c |   36 +
 include/media/adv7343.h |   52 +++
 2 files changed, 83 insertions(+), 5 deletions(-)

diff --git a/drivers/media/i2c/adv7343.c b/drivers/media/i2c/adv7343.c
index 2b5aa67..a058058 100644
--- a/drivers/media/i2c/adv7343.c
+++ b/drivers/media/i2c/adv7343.c
@@ -43,6 +43,7 @@ MODULE_PARM_DESC(debug, "Debug level 0-1");
 struct adv7343_state {
struct v4l2_subdev sd;
struct v4l2_ctrl_handler hdl;
+   const struct adv7343_platform_data *pdata;
u8 reg00;
u8 reg01;
u8 reg02;
@@ -215,12 +216,23 @@ static int adv7343_setoutput(struct v4l2_subdev *sd, u32 
output_type)
/* Enable Appropriate DAC */
val = state->reg00 & 0x03;
 
-   if (output_type == ADV7343_COMPOSITE_ID)
-   val |= ADV7343_COMPOSITE_POWER_VALUE;
-   else if (output_type == ADV7343_COMPONENT_ID)
-   val |= ADV7343_COMPONENT_POWER_VALUE;
+   /* configure default configuration */
+   if (!state->pdata)
+   if (output_type == ADV7343_COMPOSITE_ID)
+   val |= ADV7343_COMPOSITE_POWER_VALUE;
+   else if (output_type == ADV7343_COMPONENT_ID)
+   val |= ADV7343_COMPONENT_POWER_VALUE;
+   else
+   val |= ADV7343_SVIDEO_POWER_VALUE;
else
-   val |= ADV7343_SVIDEO_POWER_VALUE;
+   val = state->pdata->mode_config.sleep_mode << 0 |
+ state->pdata->mode_config.pll_control << 1 |
+ state->pdata->mode_config.dac_3 << 2 |
+ state->pdata->mode_config.dac_2 << 3 |
+ state->pdata->mode_config.dac_1 << 4 |
+ state->pdata->mode_config.dac_6 << 5 |
+ state->pdata->mode_config.dac_5 << 6 |
+ state->pdata->mode_config.dac_4 << 7;
 
err = adv7343_write(sd, ADV7343_POWER_MODE_REG, val);
if (err < 0)
@@ -238,6 +250,17 @@ static int adv7343_setoutput(struct v4l2_subdev *sd, u32 
output_type)
 
/* configure SD DAC Output 2 and SD DAC Output 1 bit to zero */
val = state->reg82 & (SD_DAC_1_DI & SD_DAC_2_DI);
+
+   if (state->pdata && state->pdata->sd_config.sd_dac_out1)
+   val = val | (state->pdata->sd_config.sd_dac_out1 << 1);
+   else if (state->pdata && !state->pdata->sd_config.sd_dac_out1)
+   val = val & ~(state->pdata->sd_config.sd_dac_out1 << 1);
+
+   if (state->pdata && state->pdata->sd_config.sd_dac_out2)
+   val = val | (state->pdata->sd_config.sd_dac_out2 << 2);
+   else if (state->pdata && !state->pdata->sd_config.sd_dac_out2)
+   val = val & ~(state->pdata->sd_config.sd_dac_out2 << 2);
+
err = adv7343_write(sd, ADV7343_SD_MODE_REG2, val);
if (err < 0)
goto setoutput_exit;
@@ -401,6 +424,9 @@ static int adv7343_probe(struct i2c_client *client,
if (state == NULL)
return -ENOMEM;
 
+   /* Copy board specific information here */
+   state->pdata = client->dev.platform_data;
+
state->reg00= 0x80;
state->reg01= 0x00;
state->reg02= 0x20;
diff --git a/include/media/adv7343.h b/include/media/adv7343.h
index d6f8a4e..944757b 100644
--- a/include/media/adv7343.h
+++ b/include/media/adv7343.h
@@ -20,4 +20,56 @@
 #define ADV7343_COMPONENT_ID   (1)
 #define ADV7343_SVIDEO_ID  (2)
 
+/**
+ * adv7343_power_mode - power mode configuration.
+ * @sleep_mode: on enable the current consumption is reduced to micro ampere
+ * level. All DACs and the internal PLL circuit are disabled.
+ * Registers can be read from and written in sleep mode.
+ * @pll_control: PLL and oversampling control. This control allows internal
+ *  PLL 1 circuit to be powered down and the oversampling to be
+ *  switched off.
+ * @dac_1: power on/off DAC 1.
+ * @dac_2: power on/off DAC 2.
+ * @dac_3: power on/off DAC 3.
+ * @dac_4: power on/off DAC 4.
+ * @dac_5: power on/off DAC 5.
+ * @dac_6: power on/off DAC 6.
+ *
+ * Power mode register (Register 0x0), for more info refer REGISTER MAP ACCESS
+ * section of datasheet[1], table 17 page no 30.
+ *
+ * [1] http://www.analog.com/static/imported-files/data_sheets/ADV7342_7343.pdf
+ */
+struct adv7343_power_mode {
+   bool sleep_mode;
+   bool pll_control;
+   bool dac_1;
+   bool dac_2;
+   bool dac_3;
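
For illustration, a board-file sketch of how this platform data might be
wired up (the i2c address is made up, and part of the struct layout is
assumed from the kerneldoc above):

	static struct adv7343_platform_data adv7343_pdata = {
		.mode_config = {
			/* power up only the DACs this board actually wires */
			.dac_1 = 1,
			.dac_2 = 1,
			.dac_3 = 1,
		},
		.sd_config = {
			.sd_dac_out1 = 1,
		},
	};

	static struct i2c_board_info adv7343_info = {
		I2C_BOARD_INFO("adv7343", 0x2a),
		.platform_data = &adv7343_pdata,
	};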

Re: linux-next: build failure after merge of the final tree (gpio-lw tree related)

2013-01-21 Thread Laxman Dewangan

Adding Samuel to To.


On Tuesday 22 January 2013 10:41 AM, Laxman Dewangan wrote:

On Tuesday 22 January 2013 09:40 AM, Stephen Rothwell wrote:

* PGP Signed by an unknown key

Hi all,

After merging the final tree, today's linux-next build (powerpc
allyesconfig) failed like this:

drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_get':
drivers/gpio/gpio-palmas.c:46:2: error: implicit declaration of 
function 'palmas_read' [-Werror=implicit-function-declaration]

drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_set':
drivers/gpio/gpio-palmas.c:62:3: error: implicit declaration of 
function 'palmas_write' [-Werror=implicit-function-declaration]

drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_output':
drivers/gpio/gpio-palmas.c:83:2: error: implicit declaration of 
function 'palmas_update_bits' [-Werror=implicit-function-declaration]

drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_to_irq':
drivers/gpio/gpio-palmas.c:108:2: error: implicit declaration of 
function 'palmas_irq_get_virq' [-Werror=implicit-function-declaration]


Caused by commit 4bb49f0dc999 ("gpio: palmas: Add support for Palams
GPIO").

I have reverted that commit for today.
The changes from the series need to be applied to the mfd subsystem as
well. Those changes are not yet merged, hence the issue.


Requesting Samuel to consider the patch series
[PATCH 0/4] mfd: palma: add RTC and GPIO support

and at least apply
[PATCH 2/4] mfd: palmas: add apis to access the Palmas' registers



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: linux-next: build failure after merge of the final tree (gpio-lw tree related)

2013-01-21 Thread Laxman Dewangan

On Tuesday 22 January 2013 09:40 AM, Stephen Rothwell wrote:

* PGP Signed by an unknown key

Hi all,

After merging the final tree, today's linux-next build (powerpc
allyesconfig) failed like this:

drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_get':
drivers/gpio/gpio-palmas.c:46:2: error: implicit declaration of function 
'palmas_read' [-Werror=implicit-function-declaration]
drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_set':
drivers/gpio/gpio-palmas.c:62:3: error: implicit declaration of function 
'palmas_write' [-Werror=implicit-function-declaration]
drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_output':
drivers/gpio/gpio-palmas.c:83:2: error: implicit declaration of function 
'palmas_update_bits' [-Werror=implicit-function-declaration]
drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_to_irq':
drivers/gpio/gpio-palmas.c:108:2: error: implicit declaration of function 
'palmas_irq_get_virq' [-Werror=implicit-function-declaration]

Caused by commit 4bb49f0dc999 ("gpio: palmas: Add support for Palams
GPIO").

I have reverted that commit for today.
The changes from the series need to be applied to the mfd subsystem as
well. Those changes are not yet merged, hence the issue.


Requesting Samuel to consider the patch series
[PATCH 0/4] mfd: palma: add RTC and GPIO support

and at least apply
[PATCH 2/4] mfd: palmas: add apis to access the Palmas' registers

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


RE: [PATCH 0/2] ARM: Exynos5250: Enabling samsung usb phy

2013-01-21 Thread Kukjin Kim
Felipe Balbi wrote:
> 
> On Fri, Jan 18, 2013 at 03:10:13PM +0200, Felipe Balbi wrote:
> > Hi,
> >
> > On Tue, Dec 18, 2012 at 09:09:40PM +0530, Vivek Gautam wrote:
> > > This patch-set enables the samsung-usbphy driver on exynos5250,
> > > which enables the support for USB2 type and USB3 type phys.
> > > The corresponding phy driver patches are available at:
> > >  1) https://lkml.org/lkml/2012/12/18/201
> > >  2) https://lists.ozlabs.org/pipermail/devicetree-discuss/2012-
> December/024559.html
> > >
> > > Tested this patch-set on exynos5250 with following patch-sets for
> > > USB 2.0 and USB 3.0:
> > >  - https://patchwork.kernel.org/patch/1794651/
> > >  - https://lkml.org/lkml/2012/12/18/201
> > >  - https://lists.ozlabs.org/pipermail/devicetree-discuss/2012-
> December/024559.html
> > >  - http://comments.gmane.org/gmane.linux.usb.general/76352
> > >  - https://lkml.org/lkml/2012/12/13/492
> > >
> > > Vivek Gautam (2):
> > >   ARM: Exynos5250: Enabling samsung-usbphy driver
> > >   ARM: Exynos5250: Enabling USB 3.0 phy for samsung-usbphy driver
> >
> > What should I do with this series ? Is it ready to apply ? If it is,
> > then please resend with Kukjim's Acked-by.
> 
> actually, now that I look again, it's all under arch/arm/, so Kukjim can
> take all of those through his tree ;-)
> 
Yes, once Vivek addresses comments from Sylwester, let me pick it up into
the Samsung tree :-)

Thanks.

- Kukjin

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v4 2/2] perf stat: add interval printing

2013-01-21 Thread Namhyung Kim
Hi Stephane,

On Mon, 21 Jan 2013 13:38:29 +0100, Stephane Eranian wrote:
> On Mon, Jan 21, 2013 at 3:53 AM, Namhyung Kim  wrote:
>> AFAICS the only caller of print_stat() is cmd_stat() and it'll call this
> >> only if interval is 0.  So why not just set prefix to NULL then?
>>
> I don't understand your point here. Prefix is set ONLY when interval
> is non-zero. Prefix is set up before print_counter() so that each counter
> for each interval is timestamped with the same value.

Please see below.


>>> - if (status != -1)
>>> + if (status != -1 && !interval)
>>>   print_stat(argc, argv);

Here, print_stat() is called only if interval is 0.  So no need to check
the interval inside the print_stat(), right?

Thanks,
Namhyung
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


RE: [LSF/MM TOPIC] Re: [dm-devel] Announcement: STEC EnhanceIO SSD caching software for Linux kernel

2013-01-21 Thread Amit Kale
> -Original Message-
> From: Mike Snitzer [mailto:snit...@redhat.com]
> Sent: Monday, January 21, 2013 6:40 PM
> To: Amit Kale
> Cc: Darrick J. Wong; device-mapper development; linux-
> bca...@vger.kernel.org; kent.overstr...@gmail.com; LKML; lsf-
> p...@lists.linux-foundation.org; Joe Thornber
> Subject: Re: [LSF/MM TOPIC] Re: [dm-devel] Announcement: STEC EnhanceIO
> SSD caching software for Linux kernel
> 
> On Mon, Jan 21 2013 at 12:26am -0500,
> Amit Kale  wrote:
> 
> > > -Original Message-
> > > From: Mike Snitzer [mailto:snit...@redhat.com]
> > > Sent: Saturday, January 19, 2013 3:08 AM
> > > To: Darrick J. Wong
> > > Cc: device-mapper development; Amit Kale;
> > > linux-bca...@vger.kernel.org; kent.overstr...@gmail.com; LKML;
> > > lsf...@lists.linux-foundation.org; Joe Thornber
> > > Subject: Re: [LSF/MM TOPIC] Re: [dm-devel] Announcement: STEC
> > > EnhanceIO SSD caching software for Linux kernel
> > >
> > > On Fri, Jan 18 2013 at  4:25pm -0500, Darrick J. Wong
> > >  wrote:
> > >
> > > > Since Joe is putting together a testing tree to compare the three
> > > > caching things, what do you all think of having a(nother) session
> > > > about ssd caching at this year's LSFMM Summit?
> > > >
> > > > [Apologies for hijacking the thread.] [Adding lsf-pc to the cc
> > > > list.]
> > >
> > > Hopefully we'll have some findings on the comparisons well before
> > > LSF (since we currently have some momentum).  But yes it may be
> > > worthwhile to discuss things further and/or report findings.
> >
> > We should have performance comparisons presented well before the
> > summit. It'll be good to have an ssd caching session in any case. The
> > likelihood that one of them will be included in the Linux kernel before
> > April is very low.
> 
> dm-cache is under active review for upstream inclusion.  I wouldn't
> categorize the chances of dm-cache going upstream when the v3.9 merge
> window opens as "very low".  But even if dm-cache does go upstream it
> doesn't preclude bcache and/or enhanceio from going upstream too.

I agree. We haven't seen a full comparison yet, IMHO. If different solutions 
offer mutually exclusive benefits, it'll be worthwhile including them all.

We haven't submitted EnhanceIO for an inclusion yet. Need more testing from the 
community before we can mark it Beta.
-Amit

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] tools lib traceevent: Handle dynamic array's element size properly

2013-01-21 Thread Steven Rostedt
On Mon, 2013-01-21 at 13:44 +0100, Jiri Olsa wrote:
> Fixing the dynamic array format field parsing.
> 
> Currently the event_read_fields function could segfault while parsing
> a dynamic array other than the string type. The reason is that event->pevent
> does not need to be set and gets dereferenced unconditionally.
> 
> Also adding proper initialization of field->elementsize based on the
> parsed dynamic type.
> 
> Signed-off-by: Jiri Olsa 
> Cc: Arnaldo Carvalho de Melo 
> Cc: Steven Rostedt 
> Cc: Corey Ashford 
> Cc: Frederic Weisbecker 
> Cc: Ingo Molnar 
> Cc: Namhyung Kim 
> Cc: Paul Mackerras 
> Cc: Peter Zijlstra 
> ---
>  tools/lib/traceevent/event-parse.c | 40 
> +++---
>  tools/lib/traceevent/event-parse.h |  1 +
>  2 files changed, 38 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/lib/traceevent/event-parse.c 
> b/tools/lib/traceevent/event-parse.c
> index f504619..d682df2 100644
> --- a/tools/lib/traceevent/event-parse.c
> +++ b/tools/lib/traceevent/event-parse.c
> @@ -1223,6 +1223,34 @@ static int field_is_long(struct format_field *field)
>   return 0;
>  }
>  
> +static unsigned int field_dynamic_elem_size(struct format_field *field)
> +{
> + /* This covers all FIELD_IS_STRING types. */
> + static struct {
> + char *type;
> + unsigned int size;
> + } table[] = {
> + { "u8",   1 },
> + { "u16",  2 },
> + { "u32",  4 },
> + { "u64",  8 },
> + { "s8",   1 },
> + { "s16",  2 },
> + { "s32",  4 },
> + { "s64",  8 },
> + { "char", 1 },
> + { },
> + };
> + int i;
> +
> + for (i = 0; table[i].type; i++) {
> + if (!strcmp(table[i].type, field->type_dyn))
> + return table[i].size;
> + }
> +
> + return 0;
> +}
> +
>  static int event_read_fields(struct event_format *event, struct format_field 
> **fields)
>  {
>   struct format_field *field = NULL;
> @@ -1390,7 +1418,7 @@ static int event_read_fields(struct event_format *event, struct format_field **f
>   field->type = new_type;
>   strcat(field->type, " ");
>   strcat(field->type, field->name);
> - free_token(field->name);
> + field->type_dyn = field->name;

This is only used in this function (field_dynamic_elem_size() is only
called here). Can we avoid adding field->type_dyn and just use a local
variable instead? You just need to make sure you free it correctly.

-- Steve

>   strcat(field->type, brackets);
>   field->name = token;
>   type = read_token();
> @@ -1477,10 +1505,14 @@ static int event_read_fields(struct event_format *event, struct format_field **f
>   if (field->flags & FIELD_IS_ARRAY) {
>   if (field->arraylen)
>   field->elementsize = field->size / field->arraylen;
> + else if (field->flags & FIELD_IS_DYNAMIC)
> + field->elementsize = field_dynamic_elem_size(field);
>   else if (field->flags & FIELD_IS_STRING)
>   field->elementsize = 1;
> - else
> - field->elementsize = event->pevent->long_size;
> + else if (field->flags & FIELD_IS_LONG)
> + field->elementsize = event->pevent ?
> +  event->pevent->long_size :
> +  sizeof(long);
>   } else
>   field->elementsize = field->size;
>  
> @@ -1496,6 +1528,7 @@ fail:
>  fail_expect:
>   if (field) {
>   free(field->type);
> + free(field->type_dyn);
>   free(field->name);
>   free(field);
>   }
> @@ -5500,6 +5533,7 @@ static void free_format_fields(struct format_field *field)
>   while (field) {
>   next = field->next;
>   free(field->type);
> + free(field->type_dyn);
>   free(field->name);
>   free(field);
>   field = next;
> diff --git a/tools/lib/traceevent/event-parse.h b/tools/lib/traceevent/event-parse.h
> index 7be7e89..4d54af2 100644
> --- a/tools/lib/traceevent/event-parse.h
> +++ b/tools/lib/traceevent/event-parse.h
> @@ -174,6 +174,7 @@ struct format_field {
>   struct format_field *next;
>   struct event_format *event;
>   char *type;
> + char *type_dyn;
>   char *name;
>   int offset;
>   int size;
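
For illustration, here is a minimal self-contained sketch of the local-variable
alternative Steve suggests above: key the size lookup on the parsed type string
directly, so nothing extra has to live in struct format_field or be freed in its
teardown paths. The names below are stand-ins, not the real libtraceevent types
and not the actual patch.

#include <stdio.h>
#include <string.h>

static unsigned int dynamic_elem_size(const char *type_dyn)
{
	/* Same lookup table as the patch, but fed a plain string so the
	 * caller can keep the parsed dynamic type in a local variable. */
	static const struct {
		const char *type;
		unsigned int size;
	} table[] = {
		{ "u8",   1 }, { "u16", 2 }, { "u32", 4 }, { "u64", 8 },
		{ "s8",   1 }, { "s16", 2 }, { "s32", 4 }, { "s64", 8 },
		{ "char", 1 }, { NULL,  0 },
	};
	int i;

	for (i = 0; table[i].type; i++)
		if (!strcmp(table[i].type, type_dyn))
			return table[i].size;
	return 0;
}

int main(void)
{
	/* This local plays the role of field->type_dyn in the patch; with
	 * this shape there is nothing new to free in free_format_fields(). */
	const char *type_dyn = "u32";

	printf("elementsize(%s) = %u\n", type_dyn, dynamic_elem_size(type_dyn));
	return 0;
}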



linux-next: Tree for Jan 22

2013-01-21 Thread Stephen Rothwell
Hi all,

Changes since 20130121:

The powerpc tree still had a build failure.

The usb tree lost its build failure.

The gpio-lw tree gained a build failure so I used the version from
next-20130121.

The akpm tree lost a patch that turned up elsewhere.



I have created today's linux-next tree at
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
(patches at http://www.kernel.org/pub/linux/kernel/next/ ).  If you
are tracking the linux-next tree using git, you should not use "git pull"
to do so as that will try to merge the new linux-next release with the
old one.  You should use "git fetch" as mentioned in the FAQ on the wiki
(see below).
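
For example, one way to move an existing checkout to a new release without
merging (assuming the remote is named "origin"; like the "git reset --hard"
step in the merge log below, this discards local changes):

$ git fetch origin
$ git reset --hard origin/master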

You can see which trees have been included by looking in the Next/Trees
file in the source.  There are also quilt-import.log and merge.log files
in the Next directory.  Between each merge, the tree was built with
a ppc64_defconfig for powerpc and an allmodconfig for x86_64. After the
final fixups (if any), it is also built with powerpc allnoconfig (32 and
64 bit), ppc44x_defconfig and allyesconfig (minus
CONFIG_PROFILE_ALL_BRANCHES - this fails its final link) and i386, sparc,
sparc64 and arm defconfig. These builds also have
CONFIG_ENABLE_WARN_DEPRECATED, CONFIG_ENABLE_MUST_CHECK and
CONFIG_DEBUG_INFO disabled when necessary.

Below is a summary of the state of the merge.

We are up to 211 trees (counting Linus' and 28 trees of patches pending
for Linus' tree), more are welcome (even if they are currently empty).
Thanks to those who have contributed, and to those who haven't, please do.

Status of my local build tests will be at
http://kisskb.ellerman.id.au/linux-next .  If maintainers want to give
advice about cross compilers/configs that work, we are always open to add
more builds.

Thanks to Randy Dunlap for doing many randconfig builds.  And to Paul
Gortmaker for triage and bug fixes.

There is a wiki covering stuff to do with linux-next at
http://linux.f-seidel.de/linux-next/pmwiki/ .  Thanks to Frank Seidel.
-- 
Cheers,
Stephen Rothwell s...@canb.auug.org.au

$ git checkout master
$ git reset --hard stable
Merging origin/master (9a92841 Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux)
Merging fixes/master (d287b87 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs)
Merging kbuild-current/rc-fixes (02f3e53 Merge branch 'yem-kconfig-rc-fixes' of git://gitorious.org/linux-kconfig/linux-kconfig into kbuild/rc-fixes)
Merging arm-current/fixes (210b184 Merge branch 'for-rmk/virt/hyp-boot/fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into fixes)
Merging m68k-current/for-linus (e7e29b4 m68k: Wire up finit_module)
Merging powerpc-merge/merge (e6449c9 powerpc: Add missing NULL terminator to avoid boot panic on PPC40x)
Merging sparc/master (04cef49 sparc: kernel/sbus.c: fix memory leakage)
Merging net/master (d721a17 isdn/gigaset: fix zero size border case in debug dump)
Merging sound-current/for-linus (ec50b4c ALSA: hda - Add fixup for Acer AO725 laptop)
Merging pci-current/for-linus (444ee9b PCI: remove depends on CONFIG_EXPERIMENTAL)
Merging wireless/master (4668cce ath9k: disable the tasklet before taking the PCU lock)
Merging driver-core.current/driver-core-linus (7d1f9ae Linux 3.8-rc4)
Merging tty.current/tty-linus (ebebd49 8250/16?50: Add support for Broadcom TruManage redirected serial port)
Merging usb.current/usb-linus (ad2e632 Merge tag 'fixes-for-v3.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-linus)
Merging staging.current/staging-linus (7dfc833 staging/sb105x: PARPORT config is not good enough must use PARPORT_PC)
Merging char-misc.current/char-misc-linus (33080c1 Drivers: hv: balloon: Fix a memory leak)
Merging input-current/for-linus (b666263 Input: document that unregistering managed devices is not necessary)
Merging md-current/for-linus (a9add5d md/raid5: add blktrace calls)
Merging audit-current/for-linus (c158a35 audit: no leading space in audit_log_d_path prefix)
Merging crypto-current/master (a2c0911 crypto: caam - Updated SEC-4.0 device tree binding for ERA information.)
Merging ide/master (9974e43 ide: fix generic_ide_suspend/resume Oops)
Merging dwmw2/master (084a0ec x86: add CONFIG_X86_MOVBE option)
CONFLICT (content): Merge conflict in arch/x86/Kconfig
Merging sh-current/sh-fixes-for-linus (4403310 SH: Convert out[bwl] macros to inline functions)
Merging irqdomain-current/irqdomain/merge (a0d271c Linux 3.6)
Merging devicetree-current/devicetree/merge (ab28698 of: define struct device in of_platform.h if !OF_DEVICE and !OF_ADDRESS)
Merging spi-current/spi/merge (d3601e5 spi/sh-hspi: fix return value check in hspi_probe().)
Merging gpio-current/gpio/merge (bc1008c gpio/mvebu-gpio: Make mvebu-gpio depend on OF_CONFIG)
Merging rr-fixes/fixes (9a92841 Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux)

Re: [PATCH 30/33] video: Convert to devm_ioremap_resource()

2013-01-21 Thread Jingoo Han
On Monday, January 21, 2013 7:09 PM, Thierry Reding wrote:
> 
> Convert all uses of devm_request_and_ioremap() to the newly introduced
> devm_ioremap_resource() which provides more consistent error handling.
> 
> devm_ioremap_resource() provides its own error messages so all explicit
> error messages can be removed from the failure code paths.
> 
> Signed-off-by: Thierry Reding 
> Cc: Florian Tobias Schandinat 
> Cc: linux-fb...@vger.kernel.org
> ---
>  drivers/video/exynos/exynos_dp_core.c | 8 +++-
>  drivers/video/jz4740_fb.c             | 6 +++---
>  drivers/video/omap2/dss/hdmi.c        | 8 +++-
>  drivers/video/omap2/vrfb.c            | 9 -
>  drivers/video/s3c-fb.c                | 7 +++
>  5 files changed, 16 insertions(+), 22 deletions(-)
> 
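
For readers following along, the conversion boils down to the pattern below (a
minimal sketch with a made-up probe function, not code from the series itself;
under the old devm_request_and_ioremap() style, the error message and error
code varied from driver to driver):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);

	/* devm_ioremap_resource() validates res, prints its own error
	 * message on failure and returns an ERR_PTR(), so every caller
	 * can simply propagate the error instead of rolling its own
	 * message and error code as with devm_request_and_ioremap(). */
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);

	return 0;
}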

For drivers/video/s3c-fb.c, drivers/video/exynos/exynos_dp_core.c

Acked-by: Jingoo Han 


Best regards,
Jingoo Han




linux-next: build failure after merge of the final tree (gpio-lw tree related)

2013-01-21 Thread Stephen Rothwell
Hi all,

After merging the final tree, today's linux-next build (powerpc
allyesconfig) failed like this:

drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_get':
drivers/gpio/gpio-palmas.c:46:2: error: implicit declaration of function 'palmas_read' [-Werror=implicit-function-declaration]
drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_set':
drivers/gpio/gpio-palmas.c:62:3: error: implicit declaration of function 'palmas_write' [-Werror=implicit-function-declaration]
drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_output':
drivers/gpio/gpio-palmas.c:83:2: error: implicit declaration of function 'palmas_update_bits' [-Werror=implicit-function-declaration]
drivers/gpio/gpio-palmas.c: In function 'palmas_gpio_to_irq':
drivers/gpio/gpio-palmas.c:108:2: error: implicit declaration of function 'palmas_irq_get_virq' [-Werror=implicit-function-declaration]

Caused by commit 4bb49f0dc999 ("gpio: palmas: Add support for Palams GPIO").

I have reverted that commit for today.
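
For anyone unfamiliar with this failure class, here is a tiny reproduction
outside the kernel (nothing palmas-specific; build with
gcc -Werror=implicit-function-declaration demo.c):

/* Without the prototype below, the call in main() would be an implicit
 * declaration, which -Werror promotes to the same error seen for
 * palmas_read() and friends. The usual fix is to include, or first
 * merge, the header that declares the functions. */
int helper(int x);

int main(void)
{
	return helper(1);
}

int helper(int x)
{
	return x - 1;
}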
-- 
Cheers,
Stephen Rothwell s...@canb.auug.org.au



