[PATCH v2] rcu: Add comment documenting how rcu_seq_snap works

2018-05-11 Thread Joel Fernandes (Google)
rcu_seq_snap may be tricky for someone looking at it for the first time. Let's document how it works with an example to make it easier. Signed-off-by: Joel Fernandes (Google) --- v2 changes: Corrections as suggested by Randy. kernel/rcu/rcu.h | 24 +++- 1 file changed, 23
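
As an aside for readers skimming the index: the snapshot idea the patch documents can be shown with a tiny standalone program. Everything below (the SEQ_* constants, seq_snap(), the example values) is an illustrative sketch of the rounding behaviour under assumed constants, not the actual kernel/rcu/rcu.h code.

/*
 * Standalone illustration (NOT kernel code) of the snapshot idea: the low
 * state bits of the sequence say whether a grace period is in progress,
 * the upper bits count grace periods, and the snapshot is the smallest
 * future value that guarantees one full grace period has elapsed since
 * the snapshot was taken.
 */
#include <stdio.h>

#define SEQ_CTR_SHIFT  2UL
#define SEQ_STATE_MASK ((1UL << SEQ_CTR_SHIFT) - 1)

static unsigned long seq_snap(unsigned long seq)
{
	/* Round up past the current (possibly in-progress) grace period
	 * and one further full grace period, clearing the state bits. */
	return (seq + 2 * SEQ_STATE_MASK + 1) & ~SEQ_STATE_MASK;
}

int main(void)
{
	/* No grace period in progress: wait for exactly one more GP. */
	printf("snap(0x%lx) = 0x%lx\n", 0x8UL, seq_snap(0x8UL));  /* 0xc  */
	/* Grace period in progress: it may not cover us, so also wait
	 * for the one after it to complete. */
	printf("snap(0x%lx) = 0x%lx\n", 0x9UL, seq_snap(0x9UL));  /* 0x10 */
	return 0;
}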

[PATCH] rcu: trace: Remove Startedleaf from trace events comment

2018-05-11 Thread Joel Fernandes (Google)
As part of the gp_seq clean up, the Startedleaf condition doesn't occur anymore. Remove it from the comment in the trace event file. Signed-off-by: Joel Fernandes (Google) <j...@joelfernandes.org> --- include/trace/events/rcu.h | 1 - 1 file changed, 1 deletion(-) diff --git a/include

[PATCH] rcu: Add comment documenting how rcu_seq_snap works

2018-05-11 Thread Joel Fernandes (Google)
rcu_seq_snap may be tricky for someone looking at it for the first time. Let's document how it works with an example to make it easier. Signed-off-by: Joel Fernandes (Google) <j...@joelfernandes.org> --- kernel/rcu/rcu.h | 23 ++- 1 file changed, 22 insertions(+), 1 de

Re: [PATCH] proc/stat: Separate out individual irq counts into /proc/stat_irqs

2018-04-19 Thread Joel Fernandes (Google)
Hi, Or maintain array of registered irqs and iterate over them only. >>> Right, we can allocate a bitmap of used irqs to do that. >>> I have another idea. perf record shows mutex_lock/mutex_unlock at the top. Most of them are irq mutex not seqfile mutex as there are many
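
A rough sketch of the bitmap idea being discussed above; the names here (registered_irqs, MAX_IRQS_EXAMPLE, example_*) are invented and the real series may structure this differently:

/* Remember every irq that was ever registered in a bitmap, so the /proc
 * path can walk just those instead of every possible irq number. */
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/seq_file.h>

#define MAX_IRQS_EXAMPLE 1024			/* stand-in for nr_irqs */
static DECLARE_BITMAP(registered_irqs, MAX_IRQS_EXAMPLE);

/* Called once when an irq is first registered. */
static void example_note_irq_registered(unsigned int irq)
{
	if (irq < MAX_IRQS_EXAMPLE)
		set_bit(irq, registered_irqs);
}

/* Show path: iterate only over the bits that were ever set. */
static void example_show_registered_irqs(struct seq_file *m)
{
	unsigned int irq;

	for_each_set_bit(irq, registered_irqs, MAX_IRQS_EXAMPLE)
		seq_printf(m, " %u", irq);
}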

Re: [PATCH] sched: support dynamiQ cluster

2018-04-13 Thread Joel Fernandes (Google)
On Fri, Apr 6, 2018 at 5:58 AM, Morten Rasmussen wrote: > On Thu, Apr 05, 2018 at 06:22:48PM +0200, Vincent Guittot wrote: >> Hi Morten, >> >> On 5 April 2018 at 17:46, Morten Rasmussen wrote: >> > On Wed, Apr 04, 2018 at 03:43:17PM +0200,

Re: [RFC PATCH] tracepoint: Provide tracepoint_kernel_find_by_name

2018-03-27 Thread Joel Fernandes (Google)
On Tue, Mar 27, 2018 at 6:27 AM, Mathieu Desnoyers wrote: >>> +static void find_tp(struct tracepoint *tp, void *priv) >>> +{ >>> + struct tp_find_args *args = priv; >>> + >>> + if (!strcmp(tp->name, args->name)) { >>> +
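
For context, the lookup pattern under review looks roughly like the sketch below; find_tp() and tp_find_args follow the quoted hunk, while the wrapper's signature is guessed from the subject line and may not match the actual RFC:

#include <linux/tracepoint.h>
#include <linux/string.h>
#include <linux/bug.h>

struct tp_find_args {
	const char *name;
	struct tracepoint *tp;
};

/* Invoked once per core-kernel tracepoint by for_each_kernel_tracepoint(). */
static void find_tp(struct tracepoint *tp, void *priv)
{
	struct tp_find_args *args = priv;

	if (!strcmp(tp->name, args->name)) {
		WARN_ON_ONCE(args->tp);	/* tracepoint names are expected to be unique */
		args->tp = tp;
	}
}

struct tracepoint *tracepoint_kernel_find_by_name(const char *name)
{
	struct tp_find_args args = {
		.name = name,
		.tp = NULL,
	};

	for_each_kernel_tracepoint(find_tp, &args);
	return args.tp;
}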

Re: [PATCH 0/3] [RFC] init, tracing: Add initcall trace events

2018-03-26 Thread Joel Fernandes (Google)
Hi Steve, On Fri, Mar 23, 2018 at 8:02 AM, Steven Rostedt wrote: > A while ago we had a boot tracer. But it was eventually removed: > commit 30dbb20e68e6f ("tracing: Remove boot tracer"). > > The rational was because there is already a initcall_debug boot option > that

Re: [RFC PATCH] tracepoint: Provide tracepoint_kernel_find_by_name

2018-03-26 Thread Joel Fernandes (Google)
Hi Mathieu, On Mon, Mar 26, 2018 at 12:10 PM, Mathieu Desnoyers wrote: > Provide an API allowing eBPF to lookup core kernel tracepoints by name. > > Given that a lookup by name explicitly requires tracepoint definitions > to be unique for a given name (no

Re: [PATCH] staging: android: ashmem: Fix possible deadlock in ashmem_ioctl

2018-03-19 Thread Joel Fernandes (Google)
On Tue, Feb 27, 2018 at 10:59 PM, Yisheng Xie wrote: > ashmem_mutex may create a chain of dependencies like: > > CPU0CPU1 > mmap syscall ioctl syscall > -> mmap_sem (acquired) -> ashmem_ioctl >
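
A toy userspace illustration of the reported ordering problem and one way out; all names are invented and the actual fix discussed in the thread may differ:

/* The mmap-like path takes "mmap_sem" then "ashmem_mutex"; the ioctl-like
 * path used to take them in the opposite order via the user copy, the
 * classic ABBA deadlock.  Here the ioctl path does its copy-like step
 * before taking the mutex, so the inverted dependency never forms. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mmap_sem_like = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t ashmem_mutex_like = PTHREAD_MUTEX_INITIALIZER;

static void *mmap_path(void *arg)
{
	pthread_mutex_lock(&mmap_sem_like);		/* mmap syscall holds mmap_sem */
	pthread_mutex_lock(&ashmem_mutex_like);		/* ...then takes ashmem_mutex */
	puts("mmap path: mmap_sem -> ashmem_mutex");
	pthread_mutex_unlock(&ashmem_mutex_like);
	pthread_mutex_unlock(&mmap_sem_like);
	return NULL;
}

static void *ioctl_path(void *arg)
{
	/* Do the step that may need mmap_sem (the user copy) first... */
	pthread_mutex_lock(&mmap_sem_like);
	puts("ioctl path: user copy done without ashmem_mutex held");
	pthread_mutex_unlock(&mmap_sem_like);

	/* ...so ashmem_mutex is never held while waiting for mmap_sem. */
	pthread_mutex_lock(&ashmem_mutex_like);
	puts("ioctl path: ashmem_mutex held, mmap_sem no longer needed");
	pthread_mutex_unlock(&ashmem_mutex_like);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, mmap_path, NULL);
	pthread_create(&b, NULL, ioctl_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}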

Re: [for-next][PATCH 15/16] ftrace: Add freeing algorithm to free ftrace_mod_maps

2017-10-07 Thread Joel Fernandes (Google)
Hi Steve, On Sat, Oct 7, 2017 at 6:32 AM, Steven Rostedt <rost...@goodmis.org> wrote: > On Fri, 6 Oct 2017 23:41:25 -0700 > "Joel Fernandes (Google)" <joel.open...@gmail.com> wrote: > >> Hi Steve, >> >> On Fri, Oct 6, 2017 at 11:07 AM, Stev

Re: [for-next][PATCH 15/16] ftrace: Add freeing algorithm to free ftrace_mod_maps

2017-10-07 Thread Joel Fernandes (Google)
Hi Steve, On Fri, Oct 6, 2017 at 11:07 AM, Steven Rostedt wrote: > From: "Steven Rostedt (VMware)" > > The ftrace_mod_map is a descriptor to save module init function names in > case they were traced, and the trace output needs to reference the function
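
Loosely illustrating why such a descriptor is needed, here is a toy, non-kernel address-to-name table; the real ftrace_mod_map and its freeing logic are considerably more involved:

/* Keep a private copy of (address, name) pairs for functions whose text
 * will be freed, so a later report can still turn a recorded address
 * back into a name. */
#include <stdio.h>
#include <string.h>

struct saved_sym {
	unsigned long addr;
	char name[32];
};

static struct saved_sym saved[16];
static int nr_saved;

static void save_sym(unsigned long addr, const char *name)
{
	if (nr_saved < 16) {
		saved[nr_saved].addr = addr;
		strncpy(saved[nr_saved].name, name, sizeof(saved[nr_saved].name) - 1);
		nr_saved++;
	}
}

static const char *lookup_sym(unsigned long addr)
{
	for (int i = 0; i < nr_saved; i++)
		if (saved[i].addr == addr)
			return saved[i].name;
	return "<freed init function>";
}

int main(void)
{
	save_sym(0x1000UL, "my_module_init");
	printf("%#lx -> %s\n", 0x1000UL, lookup_sym(0x1000UL));
	printf("%#lx -> %s\n", 0x2000UL, lookup_sym(0x2000UL));
	return 0;
}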

Re: [PATCH v7 1/2] sched/deadline: Add support for SD_PREFER_SIBLING on find_later_rq()

2017-08-18 Thread Joel Fernandes (Google)
Hi Byungchul, On Thu, Aug 17, 2017 at 11:05 PM, Byungchul Park wrote: > It would be better to avoid pushing tasks to other cpu within > a SD_PREFER_SIBLING domain, instead, get more chances to check other > siblings. > > Signed-off-by: Byungchul Park

Re: [PATCH v6 0/2] Make find_later_rq() choose a closer cpu in topology

2017-08-17 Thread Joel Fernandes (Google)
On Thu, Aug 17, 2017 at 6:25 PM, Byungchul Park wrote: > On Mon, Aug 07, 2017 at 12:50:32PM +0900, Byungchul Park wrote: >> When cpudl_find() returns any among free_cpus, the cpu might not be >> closer than others, considering sched domain. For example: >> >>this_cpu:

Re: [Eas-dev] [PATCH V3 1/3] sched: cpufreq: Allow remote cpufreq callbacks

2017-07-27 Thread Joel Fernandes (Google)
On Thu, Jul 27, 2017 at 12:55 PM, Saravana Kannan wrote: > On 07/26/2017 08:30 PM, Viresh Kumar wrote: >> >> On 26-07-17, 14:00, Saravana Kannan wrote: >>> >>> No, the alternative is to pass it on to the CPU freq driver and let it >>> decide what it wants to do. That's the

Re: [Eas-dev] [PATCH V4 0/3] sched: cpufreq: Allow remote callbacks

2017-07-27 Thread Joel Fernandes (Google)
On Thu, Jul 27, 2017 at 12:21 AM, Juri Lelli wrote: [..] >> > >> > But even without that, if you see the routine >> > init_entity_runnable_average() in fair.c, the new tasks are >> > initialized in a way that they are seen as heavy tasks. And so even >> > for the first time

Re: [Eas-dev] [PATCH V4 1/3] sched: cpufreq: Allow remote cpufreq callbacks

2017-07-27 Thread Joel Fernandes (Google)
On Thu, Jul 27, 2017 at 12:14 AM, Viresh Kumar <viresh.ku...@linaro.org> wrote: > On 26-07-17, 23:13, Joel Fernandes (Google) wrote: >> On Wed, Jul 26, 2017 at 10:50 PM, Viresh Kumar <viresh.ku...@linaro.org> >> wrote: >> > On 26-07-17, 22:34, Joel Fernandes

Re: [Eas-dev] [PATCH V4 0/3] sched: cpufreq: Allow remote callbacks

2017-07-27 Thread Joel Fernandes (Google)
Hi Viresh, On Wed, Jul 26, 2017 at 10:46 PM, Viresh Kumar <viresh.ku...@linaro.org> wrote: > On 26-07-17, 22:14, Joel Fernandes (Google) wrote: >> Also one more comment about this usecase: >> >> You mentioned in our discussion at [2] sometime back, about the >>

Re: [Eas-dev] [PATCH V4 1/3] sched: cpufreq: Allow remote cpufreq callbacks

2017-07-27 Thread Joel Fernandes (Google)
On Wed, Jul 26, 2017 at 10:50 PM, Viresh Kumar <viresh.ku...@linaro.org> wrote: > On 26-07-17, 22:34, Joel Fernandes (Google) wrote: >> On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar <viresh.ku...@linaro.org> >> wrote: >> > @@ -221,7 +226,7 @@ st

Re: [Eas-dev] [PATCH V4 2/3] cpufreq: schedutil: Process remote callback for shared policies

2017-07-26 Thread Joel Fernandes (Google)
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote: > This patch updates the schedutil governor to process cpufreq utilization > update hooks called for remote CPUs where the remote CPU is managed by > the cpufreq policy of the local CPU. > > Based on initial work from
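
The condition described above boils down to something like the following hypothetical helper; cpumask_test_cpu() is the real kernel API, while the wrapper and its use are only a sketch:

#include <linux/cpumask.h>

/* A remote utilization update for 'target' is handled locally only if
 * that CPU is covered by the local CPU's cpufreq policy. */
static bool local_policy_covers(const struct cpumask *policy_cpus, int target)
{
	return cpumask_test_cpu(target, policy_cpus);
}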

Re: [Eas-dev] [PATCH V4 1/3] sched: cpufreq: Allow remote cpufreq callbacks

2017-07-26 Thread Joel Fernandes (Google)
Hi Viresh, On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote: > We do not call cpufreq callbacks from scheduler core for remote > (non-local) CPUs currently. But there are cases where such remote > callbacks are useful, specially in the case of shared cpufreq policies.

Re: [Eas-dev] [PATCH V4 0/3] sched: cpufreq: Allow remote callbacks

2017-07-26 Thread Joel Fernandes (Google)
Hi Viresh, On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote: > > With Android UI and benchmarks the latency of cpufreq response to > certain scheduling events can become very critical. Currently, callbacks > into schedutil are only made from the scheduler if the

Re: [PATCH 04/32] ring-buffer: Redefine the unimplemented RINGBUF_TIME_TIME_STAMP

2017-07-02 Thread Joel Fernandes (Google)
On Mon, Jun 26, 2017 at 3:49 PM, Tom Zanussi wrote: > RINGBUF_TYPE_TIME_STAMP is defined but not used, and from what I can > gather was reserved for something like an absolute timestamp feature > for the ring buffer, if not a complete replacement of the current >

Re: [PATCH 00/32] tracing: Inter-event (e.g. latency) support

2017-07-01 Thread Joel Fernandes (Google)
Hi Tom, Nice series and nice ELC talk as well. Thanks. On Mon, Jun 26, 2017 at 3:49 PM, Tom Zanussi wrote: > This patchset adds support for 'inter-event' quantities to the trace > event subsystem. The most important example of inter-event quantities > are latencies,

Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

2017-03-24 Thread Joel Fernandes (Google)
Hi Patrick, On Thu, Mar 23, 2017 at 3:32 AM, Patrick Bellasi wrote: [..] >> > which can be used to defined tunable root constraints when CGroups are >> > not available, and becomes RO when CGroups are. >> > >> > Can this be eventually an acceptable option? >> > >> > In

Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

2017-03-24 Thread Joel Fernandes (Google)
Hi Tejun, >> That's also why the proposed interface has now been defined as a extension of >> the CPU controller in such a way to keep a consistent view. >> >> This controller is already used by run-times like Android to "scope" apps by >> constraining the amount of CPUs resource they are

Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

2017-03-22 Thread Joel Fernandes (Google)
Hi, On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi wrote: > On 20-Mar 13:15, Tejun Heo wrote: >> Hello, >> >> On Tue, Feb 28, 2017 at 02:38:38PM +, Patrick Bellasi wrote: [..] >> > These attributes: >> > a) are tunable at all hierarchy levels, i.e. root group too

Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller

2017-03-13 Thread Joel Fernandes (Google)
On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi wrote: > The CPU CGroup controller allows to assign a specified (maximum) > bandwidth to tasks within a group, however it does not enforce any > constraint on how such bandwidth can be consumed. > With the integration of

Re: [RFC v3 5/5] sched/{core,cpufreq_schedutil}: add capacity clamping for RT/DL tasks

2017-03-13 Thread Joel Fernandes (Google)
Hi Patrick, On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi wrote: > Currently schedutil enforce a maximum OPP when RT/DL tasks are RUNNABLE. > Such a mandatory policy can be made more tunable from userspace thus > allowing for example to define a reasonable max
