From: Jason Baron
Sometimes we would like to revert a particular fix. Currently, this
is not easy because we want to keep all other fixes active and we
could revert only the last applied patch.
One solution would be to apply a new patch that implements all
the reverted functions, like in the
We are going to simplify the API and code by removing the registration
step. This would require calling init/free functions from enable/disable
ones.
This patch just moves the code to prevent more forward
declarations.
This patch does not change the code except for two forward
module_put() is currently never called in klp_complete_transition() when
klp_force is set. As a result, we might keep the reference count even when
klp_enable_patch() fails and klp_cancel_transition() is called.
This might suggest that a module could get blocked in some
strange init
The definition of struct klp_func might be a bit confusing.
The original function is defined by name as a string.
The new function is defined by its function pointer
cast to unsigned long.
This patch adds helper macros that hide the different types.
The functions are defined just by the
The addresses of the to-be-patched function and the new function are stored
in struct klp_func as:

    void *new_func;
    unsigned long old_addr;

The different naming scheme and types are derived from the way
the addresses are set. @old_addr is assigned at runtime using
kallsyms-based
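A minimal sketch of these members and how they are typically initialized;
the structure is simplified (the real one has more fields) and
do_work()/do_work_fixed() are made-up names:

  /*
   * Simplified sketch of struct klp_func; the real kernel structure
   * has additional fields.
   */
  struct klp_func {
          /* Original function, identified by name (a string). */
          const char *old_name;

          /* New function, given as a function pointer. */
          void *new_func;

          /*
           * Address of the function to be patched; resolved at runtime
           * by a kallsyms lookup, hence stored as a plain address.
           */
          unsigned long old_addr;
  };

  /* Illustrative replacement function. */
  static void do_work_fixed(void) { }

  static struct klp_func funcs[] = {
          {
                  .old_name = "do_work",
                  .new_func = (void *)do_work_fixed,
                  /* .old_addr is filled in by the livepatch core. */
          },
          { }
  };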
livepatch: Atomic replace feature
Atomic replace allows creating cumulative patches. They
are useful when you maintain many livepatches and want to remove
one that is lower on the stack. In addition, it is very useful when
several patches touch the same function and there are dependencies
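A rough sketch of what a cumulative patch module could look like, assuming
the simplified enable-only API discussed above and a boolean .replace flag
(the flag name follows the work that later landed upstream); important_calc()
and its fix are made-up examples:

  #include <linux/livepatch.h>
  #include <linux/module.h>

  /* Cumulative fix for a hypothetical vmlinux function. */
  static int fixed_important_calc(int x)
  {
          return x + 1;
  }

  static struct klp_func funcs[] = {
          {
                  .old_name = "important_calc",
                  .new_func = fixed_important_calc,
          },
          { }
  };

  static struct klp_object objs[] = {
          {
                  /* A NULL name means the function lives in vmlinux. */
                  .funcs = funcs,
          },
          { }
  };

  static struct klp_patch patch = {
          .mod = THIS_MODULE,
          .objs = objs,
          /* Assumed flag: replace all previously applied livepatches. */
          .replace = true,
  };

  static int livepatch_init(void)
  {
          return klp_enable_patch(&patch);
  }

  static void livepatch_exit(void)
  {
  }

  module_init(livepatch_init);
  module_exit(livepatch_exit);
  MODULE_LICENSE("GPL");
  MODULE_INFO(livepatch, "Y");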
On Tue, 28 Aug 2018 12:54:08 +
Matteo Croce wrote:
> With kernel 4.19.0-rc1 virtio_console hangs very often.
> I can always trigger the bug by pasting some characters in the
> terminal window, the console will stop receiving keypresses, but I can
> still see output from the console.
>
On 8/28/18 7:25 AM, Michal Hocko wrote:
> On Tue 28-08-18 11:05:39, Mikhail Zaslonko wrote:
>> Within show_valid_zones() the function test_pages_in_a_zone() should be
>> called for online memory blocks only. Otherwise it might lead to the
>> VM_BUG_ON due to uninitialized struct pages (when
On Tue, Aug 28, 2018 at 7:22 AM, Geert Uytterhoeven
wrote:
> Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with
> %p"), the virtual memory layout printed during boot up contains "ptrval"
> instead of actual addresses:
>
> Memory: 501296K/524288K available (6144K kernel code,
Hi!
On Tue, Aug 28, 2018 at 02:36:07PM +0200, Greg KH wrote:
> On Tue, Aug 28, 2018 at 10:38:17AM +, Ahmed S. Darwish wrote:
> > [ re-send; forgotten lkml CC added; sorry ]
> >
> > Hi,
> >
> > On Sat, 18 Aug 2018 17:57:24 +0200, Greg KH wrote:
> > [...]
> > > addition of some new IIO drivers.
On Tue, Aug 28, 2018 at 7:23 AM, Geert Uytterhoeven
wrote:
> Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with
> %p"), the virtual memory layout printed during boot up contains "ptrval"
> instead of actual addresses.
>
> Instead of changing the printing to "%px", and leaking
Em Thu, Aug 09, 2018 at 06:30:15PM +0200, Jiri Olsa escreveu:
> On Thu, Aug 09, 2018 at 04:17:13PM +0200, Oleg Nesterov wrote:
> > On 08/09, Jiri Olsa wrote:
> > > - if (err)
> > > + if (err) {
> > > + bp->attr.disabled = 1;
> > > return err;
> >
> > Yes, but on the second
2018-07-20 17:50 GMT+09:00 Masahiro Yamada :
> Catch up with the new binding of the Denali IP where three clocks,
> "nand", "nand_x", "ecc" are required.
>
> For UniPhier SoCs, the "nand_x" and "ecc" are tied up because they
> are both 200MHz.
>
> Signed-off-by: Masahiro Yamada
> ---
Both
2018-07-26 16:09 GMT+09:00 Keiji Hayashibara :
> This series adds SPI pin-mux node and SPI node for UniPhier SoCs.
>
> Changes since v1:
> - remove the node for MCSSI which support multiple channels.
> Since the current driver doesn't support MCSSI, it deletes this node.
Applied. Thanks.
>
On Tue, 21 Aug 2018 16:31:46 +0200
Ricardo Ribalda Delgado wrote:
> We should only iomap the area of the chip that is memory mapped.
> Otherwise we could be mapping devices beyond the memory space or that
> belong to other devices.
>
Can you add
Fixes: ebd71e3a4861 ("mtd: maps:
Em Thu, Aug 09, 2018 at 02:03:00PM +0200, Jiri Olsa escreveu:
> hi,
> Milind reported that modify_user_hw_breakpoint wouldn't
> allow the breakpoint changing if the new attr had 'disabled'
> set to true.
>
> I found a case where it actually prevents the ptrace user interface
> from changing the
Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with
%p"), the virtual memory layout printed during boot up contains "ptrval"
instead of actual addresses.
Instead of changing the printing to "%px", and leaking virtual memory
layout information again, just remove the printing
Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with
%p"), the virtual memory layout printed during boot up contains "ptrval"
instead of actual addresses:
Memory: 501296K/524288K available (6144K kernel code, 528K rwdata, 1944K
rodata, 1024K init, 7584K bss, 22992K reserved,
Thanks for your suggestion. Should I send another patch?
At 2018-08-28 21:23:05, "Boris Brezillon" wrote:
>On Tue, 28 Aug 2018 21:21:16 +0800
>Liu Xiang wrote:
>
>> If the size of spi-nor flash is larger than 16MB, the read_opcode
>> is set to SPINOR_OP_READ_1_1_4_4B, and
Em Tue, Aug 28, 2018 at 11:10:47AM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Mon, Aug 27, 2018 at 11:06:21AM +0200, Martin Liška escreveu:
> > On 08/23/2018 04:12 PM, Arnaldo Carvalho de Melo wrote:
> > > Em Thu, Aug 23, 2018 at 02:29:34PM +0200, Martin Liška escreveu:
> > >> The patch
Hi,
On 28.08.2018 14:58, Alexey Budankov wrote:
> Hi Andi,
>
> On 28.08.2018 11:59, Jiri Olsa wrote:
>> On Mon, Aug 27, 2018 at 08:03:21PM +0300, Alexey Budankov wrote:
>>>
>>> Currently in record mode the tool implements trace writing serially.
>>> The algorithm loops over mapped per-cpu data
On Tue, 28 Aug 2018 22:16:08 +0800 (CST)
"Liu Xiang" wrote:
> Thanks for your suggestion. Should I send another patch?
Yes please.
On Mon, Aug 27, 2018 at 8:52 PM Rob Herring wrote:
>
> In preparation to remove the node name pointer from struct device_node,
> convert printf users to use the %pOFn format specifier.
>
> Cc: "David S. Miller"
> Cc: sparcli...@vger.kernel.org
> Signed-off-by: Rob Herring
> ---
> @@ -201,7
On Tue, 28 Aug 2018 15:46:38 +0200
Peter Zijlstra wrote:
> On Mon, Aug 27, 2018 at 02:44:57PM +1000, Nicholas Piggin wrote:
>
> > powerpc may be able to use the unmap granule thing to improve
> > its page size dependent flushes, but it might prefer to go
> > a different way and track start-end
Em Thu, Aug 23, 2018 at 02:29:34PM +0200, Martin Liška escreveu:
> The patch changes interpretation of:
> callq *0x8(%rbx)
>
> from:
> 0.26 │ → callq *8
> to:
> 0.26 │ → callq *0x8(%rbx)
>
> in this case an address is followed by a register, thus
> one can't parse only the address.
>
Em Mon, Aug 27, 2018 at 11:06:21AM +0200, Martin Liška escreveu:
> On 08/23/2018 04:12 PM, Arnaldo Carvalho de Melo wrote:
> > Em Thu, Aug 23, 2018 at 02:29:34PM +0200, Martin Liška escreveu:
> >> The patch changes interpretation of:
> >> callq *0x8(%rbx)
> >>
> >> from:
> >> 0.26 │ → callq
Hi,
On 28.08.2018 11:59, Jiri Olsa wrote:
> On Mon, Aug 27, 2018 at 08:03:21PM +0300, Alexey Budankov wrote:
>>
>> Currently in record mode the tool implements trace writing serially.
>> The algorithm loops over mapped per-cpu data buffers and stores ready
>> data chunks into a trace file using
On 08/28/2018 01:35 AM, Jarkko Sakkinen wrote:
> On Mon, Aug 27, 2018 at 02:15:34PM -0700, Dave Hansen wrote:
>> On 08/27/2018 11:53 AM, Jarkko Sakkinen wrote:
>>> +struct sgx_epc_page_ops {
>>> + bool (*get)(struct sgx_epc_page *epc_page);
>>> + void (*put)(struct sgx_epc_page *epc_page);
>>>
On 8/17/18 5:00 AM, Oscar Salvador wrote:
> From: Oscar Salvador
>
> Currently, unregister_mem_sect_under_nodes() tries to allocate a nodemask_t
> in order to check within the loop which nodes have already been unlinked,
> so we do not repeat the operation on them.
>
> NODEMASK_ALLOC calls
> -Original Message-
> From: Vinod
> Sent: Tuesday, August 21, 2018 9:26 PM
> To: Radhey Shyam Pandey
> Cc: dan.j.willi...@intel.com; Michal Simek ; Appana
> Durga Kedareswara Rao ; l...@metafoo.de;
> dmaeng...@vger.kernel.org; linux-arm-ker...@lists.infradead.org; linux-
>
Utilization clamping allows clamping the utilization of a CPU within a
[util_min, util_max] range. This range depends on the set of currently
RUNNABLE tasks on a CPU, where each task references two "clamp groups"
defining the util_min and the util_max clamp values to be considered for
that task.
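The per-CPU effect can be illustrated with plain clamp arithmetic; a small
userspace sketch with illustrative values, assuming the kernel's [0..1024]
capacity scale:

  #include <stdio.h>

  /* Kernel-style capacity scale: utilization lives in [0..1024]. */
  #define SCHED_CAPACITY_SCALE 1024UL

  /* Restrict a raw utilization value to the currently enforced range. */
  static unsigned long clamp_util(unsigned long util,
                                  unsigned long util_min,
                                  unsigned long util_max)
  {
          if (util < util_min)
                  return util_min;
          if (util > util_max)
                  return util_max;
          return util;
  }

  int main(void)
  {
          /* Illustrative case: a ~10% task boosted to at least 25%. */
          unsigned long util = 100;
          unsigned long util_min = SCHED_CAPACITY_SCALE / 4;
          unsigned long util_max = SCHED_CAPACITY_SCALE;

          printf("clamped util: %lu\n", clamp_util(util, util_min, util_max));
          return 0;
  }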
Utilization clamp values enforced on a CPU by a task can be updated at
run-time, for example via a sched_setattr syscall, while a task is
currently RUNNABLE on that CPU. In these cases, the task may already be
refcounting a clamp group for its CPU, and thus we need to update this
reference to
On 28/08/18 14:33, Geert Uytterhoeven wrote:
To preserve alphabetical sort order.
Fixes: 97c2b5cba2044f1c ("mfd: madera: Add register definitions for Cirrus Logic
Madera codecs")
Signed-off-by: Geert Uytterhoeven
---
MAINTAINERS | 32
1 file changed, 16
Each time a frequency update is required via schedutil, a frequency is
selected to (possibly) satisfy the utilization reported by the CFS
class. However, when utilization clamping is in use, the frequency
selection should consider the requirements suggested by userspace, for
example, to:
- boost
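A minimal sketch of how a clamped utilization could drive the frequency
request, loosely following schedutil's ~25% headroom heuristic; the margin
and numbers are illustrative:

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL

  /* Scale the maximum frequency by the (already clamped) utilization. */
  static unsigned long next_freq(unsigned long max_freq,
                                 unsigned long clamped_util)
  {
          return (max_freq + max_freq / 4) * clamped_util / SCHED_CAPACITY_SCALE;
  }

  int main(void)
  {
          unsigned long max_freq = 2000000;       /* kHz */

          /* A ~10% task boosted to util_min = 512 asks for more. */
          printf("unboosted: %lu kHz\n", next_freq(max_freq, 102));
          printf("boosted:   %lu kHz\n", next_freq(max_freq, 512));
          return 0;
  }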
Utilization clamping requires mapping each different clamp value
into one of the available clamp groups used by the scheduler's fast-path
to account for RUNNABLE tasks. Thus, each time a TG's clamp value
sysfs attribute is updated via:
cpu_util_{min,max}_write_u64()
we need to get (if possible)
The cgroup's CPU controller allows assigning a specified (maximum)
bandwidth to the tasks of a group. However, this bandwidth is defined and
enforced only on a temporal basis, without considering the actual
frequency a CPU is running on. Thus, the amount of computation completed
by a task within an
When a util_max clamped task sleeps, its clamp constraints are removed
from the CPU. However, the blocked utilization on that CPU can still be
higher than the max clamp value enforced while that task was running.
This max clamp removal when a CPU is going to be idle could thus allow
unwanted CPU
When a task group refcounts a new clamp group, we need to ensure that
the new clamp values are immediately enforced to all its tasks which are
currently RUNNABLE. This is to ensure that all currently RUNNABLE tasks
are boosted and/or clamped as requested as soon as possible.
Let's ensure that,
The number of clamp groups supported is limited and defined at compile
time. However, a malicious user can currently ask for many different
clamp values thus consuming all the available clamp groups.
Since on properly configured systems we expect only a limited set of
different clamp values, the
Clamp values cannot be tuned at the root cgroup level. Moreover, because
of the delegation model requirements and how the parent clamps
propagation works, if we want to enable subgroups to set a non-null
util.min, we need to be able to configure the root group util.min to
allow the maximum
In order to properly support hierarchical resources control, the cgroup
delegation model requires that attribute writes from a child group never
fail but still are (potentially) constrained based on parent's assigned
resources. This requires properly propagating and aggregating parent
attributes
When a task's util_clamp value is configured via sched_setattr(2), this
value has to be properly accounted in the corresponding clamp group
every time the task is enqueued and dequeued. When cgroups are also in
use, per-task clamp values have to be aggregated to those of the CPU's
controller's
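A userspace sketch of requesting per-task clamps via sched_setattr(2); the
flag values and field names follow the interface that was eventually merged
upstream and may differ from this series, so treat them as assumptions:

  #define _GNU_SOURCE
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  /* Assumed flag values (as merged upstream later). */
  #define SCHED_FLAG_UTIL_CLAMP_MIN       0x20
  #define SCHED_FLAG_UTIL_CLAMP_MAX       0x40

  struct sched_attr {
          uint32_t size;
          uint32_t sched_policy;
          uint64_t sched_flags;
          int32_t  sched_nice;
          uint32_t sched_priority;
          uint64_t sched_runtime;
          uint64_t sched_deadline;
          uint64_t sched_period;
          uint32_t sched_util_min;
          uint32_t sched_util_max;
  };

  int main(void)
  {
          struct sched_attr attr = {
                  .size           = sizeof(attr),
                  .sched_flags    = SCHED_FLAG_UTIL_CLAMP_MIN |
                                    SCHED_FLAG_UTIL_CLAMP_MAX,
                  .sched_util_min = 256,  /* boost to at least 25% capacity */
                  .sched_util_max = 768,  /* cap at 75% capacity */
          };

          /* pid 0 means the calling task; the last argument is flags. */
          if (syscall(SYS_sched_setattr, 0, &attr, 0))
                  perror("sched_setattr");
          return 0;
  }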
Currently schedutil enforces a maximum frequency when RT tasks are
RUNNABLE. Such a mandatory policy can be made more tunable from
userspace thus allowing for example to define a max frequency which is
still reasonable for the execution of a specific RT workload. This
will contribute to making the
Utilization is a well-defined property of tasks and CPUs with an
in-kernel representation based on power-of-two values.
The current representation, in the [0..SCHED_CAPACITY_SCALE] range,
allows efficient computations in hot paths and sufficient fixed-point
arithmetic precision.
However, the
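For reference, a tiny sketch of the fixed-point scale and an illustrative
conversion to and from percentages (the helpers are made up for this example):

  #include <stdio.h>

  /* Fixed-point utilization scale; a power of two keeps the math cheap. */
  #define SCHED_CAPACITY_SHIFT    10
  #define SCHED_CAPACITY_SCALE    (1UL << SCHED_CAPACITY_SHIFT)

  static unsigned long pct_to_util(unsigned long pct)
  {
          return (pct << SCHED_CAPACITY_SHIFT) / 100;
  }

  static unsigned long util_to_pct(unsigned long util)
  {
          return (util * 100) >> SCHED_CAPACITY_SHIFT;
  }

  int main(void)
  {
          printf("50%% -> %lu, 512 -> %lu%%\n",
                 pct_to_util(50), util_to_pct(512));
          return 0;
  }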
The limited number of clamp groups is required to have both an effective
and efficient run-time tracking of the clamp groups required by RUNNABLE
tasks. However, this limited number imposes some constraints on their
usage at run-time. Specifically, a System Management Software should
"reserve"
Utilization clamping requires each CPU to know which clamp values are
assigned to tasks that are currently RUNNABLE on that CPU.
Multiple tasks can be assigned the same clamp value and tasks with
different clamp values can be concurrently active on the same CPU.
Thus, a proper data structure is
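A simplified userspace sketch of one possible shape for such a structure:
per-CPU clamp groups, each holding a clamp value and a refcount of RUNNABLE
users, aggregated by taking the maximum active value (names and the fixed
group count are assumptions, not the kernel's layout):

  #include <stdio.h>

  #define NR_CLAMP_GROUPS 4

  struct clamp_group {
          unsigned long value;    /* clamp value shared by this group */
          unsigned int  tasks;    /* RUNNABLE tasks on this CPU using it */
  };

  struct cpu_clamp {
          struct clamp_group groups[NR_CLAMP_GROUPS];
  };

  /* The effective util_min is the maximum among the active groups. */
  static unsigned long effective_util_min(const struct cpu_clamp *cc)
  {
          unsigned long min = 0;
          int i;

          for (i = 0; i < NR_CLAMP_GROUPS; i++)
                  if (cc->groups[i].tasks && cc->groups[i].value > min)
                          min = cc->groups[i].value;
          return min;
  }

  int main(void)
  {
          struct cpu_clamp cc = {
                  .groups = {
                          { .value = 256, .tasks = 2 },
                          { .value = 512, .tasks = 0 },   /* no runnable users */
                          { .value = 128, .tasks = 1 },
                  },
          };

          printf("effective util_min: %lu\n", effective_util_min(&cc));
          return 0;
  }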
This is a respin of:
https://lore.kernel.org/lkml/20180806163946.28380-1-patrick.bell...@arm.com/
Which has been rebased on v4.19-rc1.
Thanks for all the valuable comments collected so far!
Further comments and feedback are more than welcome!
Cheers, Patrick
Main changes in v4
The SCHED_DEADLINE scheduling class provides an advanced and formal
model to define task requirements, which can be translated into proper
decisions for both task placement and frequency selection.
Other classes have a simpler model which is essentially based on
the relatively simple
On Mon, 2018-08-27 at 19:10 -0700, Andy Lutomirski wrote:
> On Mon, Aug 27, 2018 at 6:31 PM, Rik van Riel
> wrote:
>
> > What is special about this path wrt nmi_uaccess_ok that is
> > not also true for the need_flush branch right above it?
> >
> > What am I missing?
>
> Nothing. My patch is
On Tue, Aug 28, 2018 at 03:46:38PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 27, 2018 at 02:44:57PM +1000, Nicholas Piggin wrote:
>
> > powerpc may be able to use the unmap granule thing to improve
> > its page size dependent flushes, but it might prefer to go
> > a different way and track