On Mon, Jan 25, 2016 at 11:18:24AM +, Juri Lelli wrote:
> Hi,
>
> On 25/01/16 15:20, Viresh Kumar wrote:
> > On 25-01-16, 15:16, Gautham R. Shenoy wrote:
> > > Currently next_policy() explicitly checks if a policy is the last
> > > policy in the cpufreq_policy_list. Use the standard list_is_last
> > > primitive instead.
Currently next_policy() explicitly checks if a policy is the last
policy in the cpufreq_policy_list. Use the standard list_is_last
primitive instead.
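For reference, list_is_last() boils down to a single pointer comparison against the list head. A minimal user-space sketch (the struct and helper are re-typed here for illustration, not taken from the kernel headers; next_policy() itself is not reproduced):

```c
/*
 * Minimal re-typing of the kernel's circular doubly-linked list and the
 * list_is_last() primitive the patch switches to. In a circular list,
 * the last entry's next pointer is the head itself.
 */
struct list_head {
	struct list_head *next, *prev;
};

static int list_is_last(const struct list_head *list,
			const struct list_head *head)
{
	return list->next == head;
}
```

Using the primitive makes the "last policy" check self-describing instead of open-coding the pointer comparison at the call site.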
Cc: Viresh Kumar
Signed-off-by: Gautham R. Shenoy
---
The earlier version was based on Juri's experimental branch.
I have based this one
On Wed, Jan 27, 2016 at 10:10:20AM +, Juri Lelli wrote:
> On 27/01/16 11:39, Gautham R Shenoy wrote:
> > On Mon, Jan 25, 2016 at 11:18:24AM +, Juri Lelli wrote:
> > > Hi,
> > >
> > > On 25/01/16 15:20, Viresh Kumar wrote:
> > >
With a kernbench run, there were no regressions when compared to 4.5-rc3.
FWIW, Tested-by: Gautham R. Shenoy
>
> Thanks,
> Rafael
--
Thanks and Regards
gautham.
Hello Rafael,
On Fri, Feb 05, 2016 at 03:11:54AM +0100, Rafael J. Wysocki wrote:
[..snip..]
> Index: linux-pm/drivers/cpufreq/cpufreq_performance.c
> ===================================================================
> --- linux-pm.orig/drivers/cpufreq/cpufreq_performance.c
> +++
On Wed, Feb 10, 2016 at 10:45:14AM +0530, Gautham R Shenoy wrote:
> Hello Rafael,
>
> On Fri, Feb 05, 2016 at 03:11:54AM +0100, Rafael J. Wysocki wrote:
> [..snip..]
> > Index: linux-pm/drivers/cpufreq/cpu
On Tue, Jun 21, 2016 at 09:47:19PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 21, 2016 at 03:43:56PM -0400, Tejun Heo wrote:
> > On Tue, Jun 21, 2016 at 09:37:09PM +0200, Peter Zijlstra wrote:
> > > Hurm.. So I've applied it, just to get this issue sorted, but I'm not
> > > entirely sure I like
Hi Shreyas,
On Wed, May 18, 2016 at 12:37:56PM +0530, Shreyas B Prabhu wrote:
[..snip..]
> >> diff --git a/arch/powerpc/kernel/exceptions-64s.S
> >> b/arch/powerpc/kernel/exceptions-64s.S
> >> index 7716ceb..7ebfbb0 100644
> >> --- a/arch/powerpc/kernel/exceptions-64s.S
> >> +++
On Wed, May 18, 2016 at 12:21:17PM +0530, Shreyas B Prabhu wrote:
> With this patch, r5 which is the third parameter to
> power_powersave_common contains the return address that needs to be
> written to SRR0. So here I'm keeping r5 unaltered and using r7 for the MSR.
Ok.
Reviewed-by:
On Tue, May 03, 2016 at 01:54:34PM +0530, Shreyas B. Prabhu wrote:
> Move idle related macros to a common location asm/cpuidle.h so that
> they can be used for stop instruction support.
>
> Signed-off-by: Shreyas B. Prabhu
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
On Mon, May 23, 2016 at 08:48:35PM +0530, Shreyas B. Prabhu wrote:
> idle_power7.S handles idle entry/exit for POWER7, POWER8 and in next
> patch for POWER9. Rename the file to a non-hardware specific
> name.
>
> Signed-off-by: Shreyas B. Prabhu
Reviewed-by: Gautham R. Shenoy
> Suggested-by: Gautham R. Shenoy
> Signed-off-by: Shreyas B. Prabhu
> ---
> New in v3
>
> arch/powerpc/kernel/exceptions-64s.S| 6 +++---
> arch/powerpc/kernel/idle_power_common.S | 16
> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 ++--
_HWTHREAD_STATE to power7_powersave_common
> from power7_enter_nap_mode and make it more generic by passing the rfid
> address as a function parameter.
>
> Also make function name more generic.
>
> Reviewed-by: Gautham R. Shenoy
> Signed-off-by: Shreyas B. Prabhu
> ---
>
On Mon, May 23, 2016 at 08:48:38PM +0530, Shreyas B. Prabhu wrote:
> Create a function for saving SPRs before entering deep idle states.
> This function can be reused for POWER9 deep idle states.
>
> Signed-off-by: Shreyas B. Prabhu
Reviewed-by: Gautham R. Shenoy
Hi Tejun,
On Thu, Jun 16, 2016 at 03:39:05PM -0400, Tejun Heo wrote:
> On Thu, Jun 16, 2016 at 02:45:48PM +0200, Peter Zijlstra wrote:
> > Subject: workqueue: Fix setting affinity of unbound worker threads
> > From: Peter Zijlstra
> > Date: Thu Jun 16 14:38:42 CEST 2016
> >
> > With commit
d mask and ends up overriding it to cpu_possible_mask.
>
> CPU_ONLINE callbacks should be able to put kthreads on the CPU which
> is coming online. Update select_fallback_rq() so that it follows
> cpu_online() rather than cpu_active() for kthreads.
>
> Signed-off-by: Tejun Heo
>
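The policy change Tejun describes above can be modelled in a few lines. A toy user-space sketch, with bit i of a mask standing in for CPU i — all names here are invented for illustration, this is not the kernel's select_fallback_rq():

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of the fallback-CPU policy described above: kthreads may be
 * placed on CPUs that are online but not yet active (e.g. a CPU still
 * in the middle of coming up), while ordinary tasks are limited to
 * active CPUs. Returns the first eligible CPU, or -1 if none.
 */
static int pick_fallback_cpu(uint64_t online_mask, uint64_t active_mask,
			     bool is_kthread)
{
	uint64_t allowed = is_kthread ? online_mask : active_mask;

	for (int cpu = 0; cpu < 64; cpu++)
		if (allowed & (1ULL << cpu))
			return cpu;
	return -1;	/* no eligible CPU */
}
```

The point of the fix is visible in the model: for a CPU that is online but not yet active, only the kthread path may select it.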
On Thu, Jan 28, 2016 at 12:55:36PM +0530, Shilpasri G Bhat wrote:
> This will free the dynamically allocated memory of 'chips' on
> module exit.
>
> Signed-off-by: Shilpasri G Bhat
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
the chip ids for all cores in the array 'core_to_chip_map' and use it
> in the hotpath.
>
> Reported-by: Anton Blanchard
> Signed-off-by: Shilpasri G Bhat
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
ts
> like throttle below nominal frequency and OCC_RESET are reduced to
> pr_warn/pr_warn_once as pointed by MFG to not mark them as critical
> messages. This patch adds 'throttle_reason' to struct chip to store the
> throttle reason.
>
> Signed-off-by: Shilpasri G Bhat
Rev
Hi Shilpa,
A minor nit.
On Thu, Jan 28, 2016 at 12:55:41PM +0530, Shilpasri G Bhat wrote:
[..snip..]
> +
> +What:		/sys/devices/system/cpu/cpufreq/chip*/throttle_reasons/
> +Date:		Jan 2016
> +Contact: Linux kernel mailing list
> + Linux for
Hi Viresh,
>
> What I can suggest is:
> - Move this directory inside cpuX/cpufreq/ directory, in a similar way
> as to how we create 'stats' directory today.
> - You can then get policy->cpu, to get chip->id out of it.
> - The only disadvantage here is that the same chip directory will be
>
Hello Joonas,
On Wed, Feb 03, 2016 at 04:24:28PM +0200, Joonas Lahtinen wrote:
> Use distinctive name for cpu_hotplug.dep_map to avoid the actual
> cpu_hotplug.lock appearing as cpu_hotplug.lock#2 in lockdep splats.
>
> Cc: Gautham R. Shenoy
> Cc: Rafael J. Wysocki
> Cc: Int
a) the cpumask of the worker pool when it comes online.
b) the cpumask of the worker pool when the second CPU in the pool's
cpumask comes online.
Reported-by: Abdul Haleem
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Tejun Heo
Cc: Michael Ellerman
Signed-off-by: Gautham R. Shenoy
---
kernel/wor
6000 2fa3
409eff1c 813f0378 2f890001 419eff10 <0fe0> 4b08 6000 6000
---[ end trace cbc1c5cfbc9591d0 ]---
The patches are based on 4.7-rc2. I have tested the patches on a
multi-node x86_64 and a ppc64
Gautham R. Shenoy (2):
workqueue: Move wq_update_unboun
Cc: Michael Ellerman
Signed-off-by: Gautham R. Shenoy
---
kernel/workqueue.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e1c0e99..e412794 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4638,6 +4638,10
Hi Shreyas,
On Mon, May 23, 2016 at 08:48:40PM +0530, Shreyas B. Prabhu wrote:
> @@ -412,7 +517,8 @@ subcore_state_restored:
> first_thread_in_core:
>
> /*
> - * First thread in the core waking up from fastsleep. It needs to
> + * First thread in the core waking up from any
CR value.
>
> This patch adds support for this new mechanism in cpuidle powernv driver.
>
> Cc: Rafael J. Wysocki
> Cc: Daniel Lezcano
> Cc: linux...@vger.kernel.org
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: linuxppc-...@lists.ozlabs.org
> Signe
> Bits 60:63 - Requested Level
> Used to specify which power-saving level must be entered on executing
> stop instruction
>
> This patch adds support for stop instruction and PSSCR handling.
This version looks good to me.
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
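On the bit numbering above: the IBM convention counts bit 0 as the most-significant bit, so bits 60:63 of the 64-bit PSSCR are its least-significant nibble. A user-space sketch of setting the Requested Level — the mask and helper name are invented for this example, not taken from the kernel or ISA headers:

```c
#include <stdint.h>

/*
 * Bits 60:63 in IBM (MSB-first) numbering are the low 4 bits of a
 * 64-bit value. PSSCR_RL_MASK and psscr_set_requested_level() are
 * illustrative names only.
 */
#define PSSCR_RL_MASK 0xFULL

static uint64_t psscr_set_requested_level(uint64_t psscr, unsigned int level)
{
	return (psscr & ~PSSCR_RL_MASK) | ((uint64_t)level & PSSCR_RL_MASK);
}
```

Executing the stop instruction with this value in the PSSCR then requests entry into the given power-saving level.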
Hi Peter,
On Tue, Jun 14, 2016 at 01:22:34PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 07, 2016 at 08:44:03PM +0530, Gautham R. Shenoy wrote:
>
> I'm still puzzled why we don't see this on x86. Afaict there's nothing
> PPC specific about this.
You are right. On PPC, at boot
On Wed, Jun 15, 2016 at 01:32:49PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 15, 2016 at 03:49:36PM +0530, Gautham R Shenoy wrote:
>
> > Also, with the first patch in the series (which ensures that
> > restore_unbound_workers are called *after* the new workers for the
>
Hello Tejun,
On Wed, Jun 15, 2016 at 11:53:50AM -0400, Tejun Heo wrote:
> Hello,
>
> On Tue, Jun 07, 2016 at 08:44:02PM +0530, Gautham R. Shenoy wrote:
> > Currently in the CPU_ONLINE workqueue handler, the
> > restore_unbound_workers_cpumask() will never call
>
Hi Peter, Thomas,
On Tue, Jun 07, 2016 at 08:44:01PM +0530, Gautham R. Shenoy wrote:
> Hi,
>
> This patchset fixes a couple of issues in the CPU_ONLINE notification
> handling for the workqueues with respect to unbounded worker
> threads.
Any thoughts on these patches ? They fix
Hello Viresh,
On Thu, Feb 04, 2016 at 01:55:38PM +0530, Viresh Kumar wrote:
> On 04-02-16, 13:44, Gautham R Shenoy wrote:
> > In a two policy system, to run ondemand on one and conservative on the
> > other,
> > won't the driver have CPUFREQ_HAVE_GO
On Thu, Jan 21, 2016 at 03:08:59PM +0530, Shilpasri G Bhat wrote:
> Signed-off-by: Shilpasri G Bhat
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
unnecessary overhead
> > in a hot path. So instead of calling cpu_to_chip_id() every time, cache
> > the chip ids for all cores in the array 'core_to_chip_map' and use it
> > in the hotpath.
> >
> > Reported-by: Anton Blanchard
> > Signed-off-by: Shilpasri G Bhat
Currently next_policy() explicitly checks if a policy is the last
policy in the cpufreq_policy_list. Use the standard list_is_last
primitive instead.
Cc: Viresh Kumar
Signed-off-by: Gautham R. Shenoy
---
drivers/cpufreq/cpufreq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Hello Srikar,
Thanks for taking a look at the patch.
On Mon, Dec 07, 2020 at 05:40:42PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:45]:
>
> > From: "Gautham R. Shenoy"
>
>
>
> >
> > static int
Hello Srikar,
On Mon, Dec 07, 2020 at 06:10:39PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:46]:
>
> > From: "Gautham R. Shenoy"
> >
> > On POWER systems, groups of threads within a core sharing the L2-cache
> > can be indic
On Mon, Dec 07, 2020 at 06:41:38PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:47]:
>
> > From: "Gautham R. Shenoy"
> >
> >
> > Signed-off-by: Gautham R. Shenoy
> > ---
> >
> > +extern bool thread_group
From: "Gautham R. Shenoy"
On POWER10 systems, the L2 cache is at the SMT4 small core level. The
following commits ensure that L2 cache gets correctly discovered and
the Last-Level-Cache domain (LLC) is set to the SMT sched-domain.
790a166 powerpc/smp: Parse ibm,thread-groups wit
(Missed cc'ing Cc Peter in the original posting)
On Fri, Apr 02, 2021 at 11:07:54AM +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy"
>
> On POWER10 systems, the L2 cache is at the SMT4 small core level. The
> following commits ensure that L2 cache get
Hello Mel,
On Mon, Apr 12, 2021 at 11:48:19AM +0100, Mel Gorman wrote:
> On Mon, Apr 12, 2021 at 11:06:19AM +0100, Valentin Schneider wrote:
> > On 12/04/21 10:37, Mel Gorman wrote:
> > > On Mon, Apr 12, 2021 at 11:54:36AM +0530, Srikar Dronamraju wrote:
> > >> * Gau
On Mon, Apr 12, 2021 at 06:33:55PM +0200, Michal Suchánek wrote:
> On Mon, Apr 12, 2021 at 04:24:44PM +0100, Mel Gorman wrote:
> > On Mon, Apr 12, 2021 at 02:21:47PM +0200, Vincent Guittot wrote:
> > > > > Peter, Valentin, Vincent, Mel, etal
> > > > >
> > > > > On architectures where we have
Hello Mel,
On Mon, Apr 12, 2021 at 04:24:44PM +0100, Mel Gorman wrote:
> On Mon, Apr 12, 2021 at 02:21:47PM +0200, Vincent Guittot wrote:
> > > > Peter, Valentin, Vincent, Mel, etal
> > > >
> > > > On architectures where we have multiple levels of cache access latencies
> > > > within a DIE, (For