1) Add magic for declarations of variables of popular kernel types
like spinlock_t, list_head, wait_queue_head_t and others.
2) Add a set of specially handled declaration extensions
like __attribute, __aligned and others.
3) Simplify the pci_bus_* magic
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
---
The current throttling logic always skips the RT class if rq->rt is throttled.
It doesn't handle the special case when RT tasks are the only running tasks
in the rq. So it's possible that the CPU picks up the idle task while RT tasks are available.
This patch aims to avoid the above situation. The modified
I need a little rework of this patch. I'll send it later.
Sorry for the noise.
Kirill
27.10.2012, 14:36, Kirill Tkhai tk...@yandex.ru:
The current throttling logic always skips the RT class if rq->rt is throttled.
It doesn't handle the special case when RT tasks are the only running tasks
Add magic for declarations of variables of popular kernel types
like spinlock_t, list_head, wait_queue_head_t and others.
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
---
scripts/tags.sh | 39 ++-
1 files changed, 34 insertions(+), 5 deletions(-)
mode
Add rules for definitions which are generally used in asm-offsets files.
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Michal Marek mma...@suse.cz
CC: Andrew Morton a...@linux-foundation.org
---
scripts/tags.sh |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git
27.03.2013, 01:35, Michal Marek mma...@suse.cz:
On Sat, Mar 23, 2013 at 02:58:20PM +0400, Kirill Tkhai wrote:
Add rules for definitions which are generally used in asm-offsets files.
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Michal Marek mma...@suse.cz
CC: Andrew Morton
Simple return
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@kernel.org
CC: Peter Zijlstra pet...@infradead.org
CC: linux-rt-users linux-rt-us...@vger.kernel.org
---
kernel/sched/stop_task.c |4 ++--
1 file changed, 2 insertions(+),
A situation is possible when rq->rt is throttled or
it has no child entities, while there are RT tasks ready
for execution in the rq and they are the only tasks
in the TASK_RUNNING state. In this case pick_next_task
takes the idle task, and idle wastes CPU time.
The patch changes the logic of pre_schedule a little
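A minimal sketch of the idea (the helper name and placement are assumptions, not the actual patch): the RT class should only be skipped when its throttled tasks are not the sole runnable tasks on the rq.

	/*
	 * Illustrative sketch only: decide whether a throttled rq->rt may
	 * be skipped by the pick/pre_schedule path. The fields rt_throttled,
	 * rt_nr_running and nr_running are as in kernel/sched of that
	 * period; the helper itself is hypothetical.
	 */
	static inline int rt_rq_skippable(struct rq *rq)
	{
		struct rt_rq *rt_rq = &rq->rt;

		if (!rt_rq->rt_throttled)
			return 0;

		/*
		 * If RT tasks are the only runnable tasks, skipping the RT
		 * class would leave the CPU idle, so do not skip it.
		 */
		if (rq->nr_running == rt_rq->rt_nr_running)
			return 0;

		return 1;
	}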
On 2013/2/1 5:57, Kirill Tkhai wrote:
31.01.2013, 20:08, Steven Rostedt rost...@goodmis.org:
On Mon, 2013-01-28 at 03:46 +0400, Kirill Tkhai wrote:
The patch aims to decrease the number of calls of push_rt_task()
in push_rt_tasks().
It's not necessary to push more than
On 2013/2/5 15:22, Kirill Tkhai wrote:
Suppose we have a large number of cpus (say 4096), with the last one running
a low-priority task on it. Is it possible that with this patch we will never
reach
the last cpu in case the previous cpu has completed the pulled task?
Yes. But this patch
From: Libo Chen libo.c...@huawei.com
On 2013-1-29 4:23, Kirill Tkhai wrote:
A just-switched pinned task is not able to be pushed. If the rq had had
several RT tasks before, they have already been considered as candidates
to be pushed (or pulled).
Signed-off-by: Kirill V Tkhai tk
On Tue, 2013-01-29 at 00:23 +0400, Kirill Tkhai wrote:
A just-switched pinned task is not able to be pushed. If the rq had had
several RT tasks before, they have already been considered as candidates
to be pushed (or pulled).
Thanks, but I have one minor nit.
Signed-off-by: Kirill V Tkhai
There are several places of consecutive calls of dequeue_task_rt()
and put_prev_task_rt() in the scheduler. For example, function
rt_mutex_setprio() does it.
Both calls lead to update_curr_rt(); the second of them receives a
zeroed delta_exec. The only effective action in this case is a call of
Function next_prio() has been removed and pull_rt_task() is the only
user of pick_next_highest_task_rt() at the moment.
pull_rt_task() is not interested in p->nr_cpus_allowed; its only interest
is the fact that the cpu is allowed to execute p. If nr_cpus_allowed == 1,
cpu != task_cpu(p) and cpu is
31.01.2013, 20:08, Steven Rostedt rost...@goodmis.org:
On Mon, 2013-01-28 at 03:46 +0400, Kirill Tkhai wrote:
The patch aims to decrease the number of calls of push_rt_task()
in push_rt_tasks().
It's not necessary to push more than 'num_online_cpus() - 1' tasks.
If just pushed task
1) Add magic for declarations of variables of popular kernel types
like spinlock_t, list_head, wait_queue_head_t and others.
2) Add a set of specially handled declaration extensions
like __attribute, __aligned and others.
3) Simplify the pci_bus_* magic
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
Cc:
The patch aims to decrease the number of calls of push_rt_task()
in push_rt_tasks().
It's not necessary to push more than 'num_online_cpus() - 1' tasks.
If a just-pushed task doesn't leave its new CPU during our local call
of push_rt_tasks(), then we won't push another task to that CPU.
If it leaves
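A sketch of the idea (not the committed code): bound the push loop by the number of other online CPUs, since pushing more tasks than that cannot place anything on a new CPU.

	/*
	 * Illustrative sketch: stop after num_online_cpus() - 1 successful
	 * pushes; every other CPU has already received at most one task
	 * during this local call of push_rt_tasks().
	 */
	static void push_rt_tasks(struct rq *rq)
	{
		int pushes = num_online_cpus() - 1;

		while (pushes-- > 0 && push_rt_task(rq))
			;
	}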
A just-switched pinned task is not able to be pushed. If the rq had had
several RT tasks before, they have already been considered as candidates
to be pushed (or pulled).
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@kernel.org
CC: Peter
Function __enqueue_rt_entity() adds an empty queue to leaf_rt_rq_list.
So, pick_next_highest_task_rt() picks empty queues. Fix it.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
---
kernel/sched/rt.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/rt.c b
Second version. Add every non-empty queue once. The patch:
Function __enqueue_rt_entity() adds an empty queue to leaf_rt_rq_list.
So, pick_next_highest_task_rt() picks empty queues. Fix it.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
---
kernel/sched/rt.c |2 +-
1 files changed, 1
Again, the rt_nr_running hasn't been incremented yet. This patch will
add it when the rq gets a second task.
-- Steve
Right, thanks for the explanation
Kirill
scripts/tags.sh: Add magic for pci access functions
Make [ce]tags find the pci_bus_read_config_* and pci_bus_write_config_*
definitions
Signed-off-by: Kirill Tkhai tk...@yandex.ru
---
scripts/tags.sh |8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/scripts/tags.sh b
Reschedule rq->curr if the first RT task has just been
pulled to the rq.
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@kernel.org
CC: Peter Zijlstra pet...@infradead.org
---
kernel/sched/rt.c |7 +--
1 file changed, 5
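A minimal sketch of the intent (the helper and the 'pulled' flag are hypothetical): when a pull has just delivered the first RT task to this rq, the currently running lower-class task must be rescheduled.

	/*
	 * Illustrative sketch: 'pulled' is a hypothetical flag set when
	 * pull_rt_task() moved at least one RT task to this rq.
	 */
	static void resched_if_first_rt_pulled(struct rq *rq, bool pulled)
	{
		if (pulled && rq->rt.rt_nr_running == 1)
			resched_task(rq->curr);	/* resched_curr(rq) in later kernels */
	}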
Most probably, the next action after pull_rt_task() will be picking a task
from the rq. So it's useless to pull tasks whose rt_rq (corresponding to the
rq) is throttled.
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@kernel.org
CC: Peter
16.11.2012, 00:36, Steven Rostedt rost...@goodmis.org:
Doing my INBOX maintenance (clean up), I've stumbled on this thread
again. I'm not sure the changes here are hopeless.
On Mon, 2012-06-04 at 13:27 +0800, Yong Zhang wrote:
On Fri, Jun 01, 2012 at 08:45:16PM +0400, Kirill Tkhai wrote
The members rt_nr_total, rt_nr_migratory, overloaded and pushable_tasks are
properties of cpu runqueue, not group rt_rq.
Signed-off-by: Kirill V Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@kernel.org
CC: Peter Zijlstra pet...@infradead.org
CC: linux-rt-users
20.12.2012, 21:53, Thomas Gleixner t...@linutronix.de:
On Tue, 18 Dec 2012, Kirill Tkhai wrote:
The members rt_nr_total, rt_nr_migratory, overloaded and pushable_tasks are
properties of cpu runqueue, not group rt_rq.
Why?
Because they depend on the number and properties of all processes
21.12.2012, 03:07, Steven Rostedt rost...@goodmis.org:
On Fri, 2012-12-21 at 02:16 +0400, Kirill Tkhai wrote:
20.12.2012, 21:53, Thomas Gleixner t...@linutronix.de:
On Tue, 18 Dec 2012, Kirill Tkhai wrote:
The members rt_nr_total, rt_nr_migratory, overloaded and pushable_tasks
The patch aims not to pull tasks of throttled rt_rqs
in pre_schedule_rt(), because they are not able to be
picked in pick_next_task_rt().
There are three places where pull_rt_task() is used:
1) pre_schedule_rt()
If we pull a task of a throttled rt_rq, it won't be picked
by pick_next_task_rt(),
The current throttling logic always skips the RT class if rq->rt is throttled.
It doesn't handle the special case when RT tasks are the only running tasks
in the rq. So it's possible that the CPU picks up the idle task while RT tasks are available.
This patch aims to avoid the above situation. The modified
Add a new flag ENQUEUE_NO_CLK to skip the microscopic rq->clock update.
This update is even smaller than in the places where skip_clock_update is used.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
CC: Steven Rostedt rost...@goodmis.org
: 99
rcu_preempt
se.statistics.nr_wakeups : 2015010
se.statistics.nr_wakeups_parallel: 3738
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet
Hi, Peter,
15.07.2013, 10:32, Peter Zijlstra pet...@infradead.org:
On Sat, Jul 13, 2013 at 07:45:49PM +0400, Kirill Tkhai wrote:
---
include/linux/sched.h | 1 +
kernel/sched/core.c | 29 +
kernel/sched/debug.c | 7 +++
kernel/sched/stats.h
16.07.2013, 00:19, Peter Zijlstra pet...@infradead.org:
On Mon, Jul 15, 2013 at 06:14:34PM +0400, Kirill Tkhai wrote:
#ifdef CONFIG_SMP
+ p->state = TASK_WAKING;
+ smp_wmb();
+
This too is broken; the loop below needs to be completed first,
otherwise we change p->state while
Use helpers where possible (All directories except arch/.)
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
---
drivers/base/devtmpfs.c |3 +--
drivers/base
Helpers for replacing the repeating patterns:
1)spin_unlock(lock);
schedule();
2)spin_unlock_irq(lock);
schedule();
(The same for raw_spinlock_t)
This allows us to prevent an excess preempt_schedule(), which can happen on a
preemptible kernel.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC
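To make the intent concrete, here is a minimal sketch of such a helper, layered on existing primitives (the name spin_unlock_and_schedule() is illustrative, not necessarily the one from the patch):

	/*
	 * Illustrative helper (hypothetical name): release the lock while an
	 * extra preempt count is held, so the unlock path cannot call
	 * preempt_schedule(); the single reschedule point is the explicit
	 * schedule() below.
	 */
	static inline void spin_unlock_and_schedule(spinlock_t *lock)
	{
		preempt_disable();		/* count: 2 (the lock already holds 1) */
		spin_unlock(lock);		/* drops to 1, no preempt_schedule() */
		sched_preempt_enable_no_resched();	/* drops to 0, still no resched */
		schedule();
	}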
Use helpers where possible (All directories except arch/.)
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
CC: LKML
---
drivers/base/devtmpfs.c |3 +--
drivers
the same as the replaced unlock/schedule,
but they also help to prevent an excess preempt_schedule() call,
which can happen on a preemptible kernel (from the *_unlock*() code).
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet
18.06.2013, 21:28, Peter Zijlstra pet...@infradead.org:
On Tue, Jun 18, 2013 at 07:36:52PM +0400, Kirill Tkhai wrote:
Helpers for replacing the repeating patterns:
1)spin_unlock(lock);
schedule();
2)spin_unlock_irq(lock);
schedule();
I just noticed this; the existing
1) Add __maybe_unused
__always_unused
__cacheline_aligned
__cacheline_aligned_in_smp
ACPI_EXPORT_SYMBOL
to the list.
2) Regroup and clean up (convert spaces to tabs; make the alignment consistent).
Signed-off-by: Kirill Tkhai tk
away.
The check is better suited for push_rt_task().
So, kill the check and the now-unused rt_overloaded() and rto_count.
Move root_domain's refcount to the bottom of the structure
to keep its fields aligned.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Ingo Molnar mi...@redhat.com
CC
Add __maybe_unused
__always_unused
__cacheline_aligned
__cacheline_aligned_in_smp
ACPI_EXPORT_SYMBOL
to the list.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Michal Marek mma...@suse.cz
CC: Andrew Morton a...@linux
14.09.2013, 22:48, Paul E. McKenney paul...@linux.vnet.ibm.com:
On Sat, Sep 14, 2013 at 05:03:20PM +0400, Kirill Tkhai wrote:
When a system has a sparse cpumask and CONFIG_RCU_NOCB_CPU_ALL is enabled,
rcu_spawn_nocb_kthreads() creates nocb threads for nonexistent CPUs.
The problem
812cb83a5 (sparc64: Implement HAVE_CONTEXT_TRACKING).
Reverting that commit fixes the problem.
The patch below fixes the problem:
Signed-off-by: Kirill Tkhai tk...@yandex.ru
---
arch/sparc/kernel/kgdb_64.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/sparc/kernel/kgdb_64.c b/arch/sparc
->rt.highest_prio.curr is less.
The patch below fixes the problem.
It looks like all versions have this bug, so I CC'ed the stable mailing list.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
CC: Steven Rostedt rost...@goodmis.org
CC: sta
This patchset makes the RT class fit the generic scheme which is used in the fair and
deadline classes.
The number of tasks of a throttled rt_rq and its children is decremented from
rq->nr_running when the rt_rq becomes throttled.
---
Kirill Tkhai (4):
sched/rt: Sum number of all children tasks
is being
subtracted from rq->nr_running.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@kernel.org
---
kernel/sched/rt.c| 73 ++
kernel/sched/sched.h |2 +
2 files changed, 63
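A rough sketch of the accounting direction described above (the helper name is hypothetical; add_nr_running()/sub_nr_running() are the rq-level counters of kernels of that period):

	/*
	 * Illustrative sketch: hide the tasks of a throttled rt_rq from
	 * rq->nr_running and expose them again on unthrottle.
	 */
	static void rt_rq_throttle_account(struct rq *rq, struct rt_rq *rt_rq,
					   bool throttled)
	{
		if (throttled)
			sub_nr_running(rq, rt_rq->rt_nr_running);
		else
			add_nr_running(rq, rt_rq->rt_nr_running);
	}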
Two accessors for RT_GROUP_SCHED and !RT_GROUP_SCHED cases.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@kernel.org
---
kernel/sched/rt.c | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/kernel
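For illustration, the two-accessor pattern for the RT_GROUP_SCHED and !RT_GROUP_SCHED configurations typically looks like this (a sketch modeled on rq_of_rt_rq() in kernel/sched/rt.c, not the exact patch content):

	/*
	 * With RT_GROUP_SCHED each rt_rq carries an explicit back-pointer
	 * to its rq; without it, the only rt_rq is embedded in struct rq.
	 */
	#ifdef CONFIG_RT_GROUP_SCHED
	static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
	{
		return rt_rq->rq;
	}
	#else
	static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
	{
		return container_of(rt_rq, struct rq, rt);
	}
	#endif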
This reverts commit 4c6c4e38c4e9 [sched/core: Fix endless loop in
pick_next_task()], which is not necessary after [sched/rt: Substract number
of tasks of throttled queues from rq->nr_running]
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi
(except
throttled rt queues).
Empty queues are not able to be queued, and all of the places
which use rt_nr_running just compare it with zero, so we do not break
anything here.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@kernel.org
18.03.2014, 15:08, Preeti Murthy preeti.l...@gmail.com:
On Sat, Mar 15, 2014 at 3:44 AM, Kirill Tkhai tk...@yandex.ru wrote:
{inc,dec}_rt_tasks used to count entities which are directly queued
on rt_rq. If an entity was not a task (i.e., it was some queue), its
children were not counted
21.02.2014, 20:52, Juri Lelli juri.le...@gmail.com:
On Fri, 21 Feb 2014 17:36:41 +0100
Juri Lelli juri.le...@gmail.com wrote:
On Fri, 21 Feb 2014 11:37:15 +0100
Peter Zijlstra pet...@infradead.org wrote:
On Thu, Feb 20, 2014 at 02:16:00AM +0400, Kirill Tkhai wrote:
Since deadline
On 21.02.2014 20:36, Juri Lelli wrote:
On Fri, 21 Feb 2014 11:37:15 +0100
Peter Zijlstra pet...@infradead.org wrote:
On Thu, Feb 20, 2014 at 02:16:00AM +0400, Kirill Tkhai wrote:
Since deadline tasks share rt bandwidth, we must make sure the
bandwidth timer is set. Otherwise rt_time may grow up
25.02.2014, 18:14, Juri Lelli juri.le...@gmail.com:
On Sat, 22 Feb 2014 04:56:59 +0400
Kirill Tkhai tk...@yandex.ru wrote:
On 21.02.2014 20:36, Juri Lelli wrote:
On Fri, 21 Feb 2014 11:37:15 +0100
Peter Zijlstra pet...@infradead.org wrote:
On Thu, Feb 20, 2014 at 02:16:00AM +0400
In the deadline class we do not have group scheduling.
So, let's remove the unnecessary
X = X;
assignments.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Juri Lelli juri.le...@gmail.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
---
kernel/sched/deadline.c
On Tue, 2014-02-25 at 17:05 +0100, Juri Lelli wrote:
Destroy rt bandwidth timer when rq has no more RT tasks, even when
CONFIG_RT_GROUP_SCHED is not set.
Signed-off-by: Juri Lelli juri.le...@gmail.com
---
kernel/sched/rt.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
26.02.2014, 13:07, Peter Zijlstra pet...@infradead.org:
On Wed, Feb 26, 2014 at 03:37:38AM +0100, Mike Galbraith wrote:
BTW, I noticed you can no longer turn the noisy thing off since
we grew DL. I added an old SGI boot parameter to tell it to go away.
You're talking about the
26.02.2014, 13:35, Kirill Tkhai tk...@yandex.ru:
26.02.2014, 13:07, Peter Zijlstra pet...@infradead.org:
On Wed, Feb 26, 2014 at 03:37:38AM +0100, Mike Galbraith wrote:
BTW, I noticed you can no longer turn the noisy thing off since
we grew DL. I added an old SGI boot
]
Fix that.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Thomas Gleixner t...@linutronix.de
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
---
kernel/sched/core.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
9 root RT 0 000 S 0.0 0.0 0:00.24 migration/0
On Thu, 27/02/2014 at 14:24 +0400, Kirill Tkhai wrote:
[PATCH] sched/core: Return possibility to set RT and DL classes back
I found that it's impossible to set
to completely get rid of the race.
So, let's use a fix which is fast and simple and almost completely
eliminates the race. It makes the race probability very small.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
---
kernel/sched/rt.c | 16
update_curr_dl() calls a little.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Juri Lelli juri.le...@gmail.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
---
kernel/sched/deadline.c | 10 --
kernel/sched/rt.c | 7 +++
2 files changed, 15
We close the idle_exit_fair() bracket in case we've pulled something or we've
received
a task of a higher priority class.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
---
kernel/sched/fair.c | 15
(), interrupts are still disabled.
The solution is to check for available tasks in the DL and RT
classes instead of checking the sum.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
---
kernel/sched/fair.c |4 +++-
kernel/sched
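A minimal sketch of the check (the helper is hypothetical; dl_nr_running and rt_nr_running are the per-class counters in kernel/sched/sched.h):

	/*
	 * Illustrative helper: instead of comparing the summed rq->nr_running,
	 * look directly at the higher classes that could preempt fair tasks.
	 */
	static inline bool higher_class_tasks_available(struct rq *rq)
	{
		return rq->dl.dl_nr_running || rq->rt.rt_nr_running;
	}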
tasks.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
Peter, this finally fixes the problem with RT throttling.
Should I send all three patches as one series?
---
kernel/sched/fair.c |2 +-
1 file changed, 1 insertion(+), 1
12.03.2014, 14:39, Nicholas Mc Guire der.h...@hofr.at:
On Wed, 12 Mar 2014, Steven Rostedt wrote:
Peter,
I'm going through my inbox (over a year old), and found this patch from
Kirill. It looks fine to me. You can apply it with my
Acked-by: Steven Rostedt rost...@goodmis.org
--
finish_arch_post_lock_switch() has finished. If mm is the same,
then TIF_SWITCH_MM on the second won't be set.
The second rare but possible issue is the zeroing of post_schedule()
on a wrong cpu.
So, let's fix this and unify the preempt_count state.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Peter
On 13.02.2014 20:00, Peter Zijlstra wrote:
On Thu, Feb 13, 2014 at 07:51:56PM +0400, Kirill Tkhai wrote:
For archs without __ARCH_WANT_UNLOCKED_CTXSW set this means
that all newly created tasks execute finish_arch_post_lock_switch()
and post_schedule() with preemption enabled.
That's IA64
On Fri, 14/02/2014 at 10:52 +, Catalin Marinas wrote:
On Thu, Feb 13, 2014 at 09:32:22PM +0400, Kirill Tkhai wrote:
On 13.02.2014 20:00, Peter Zijlstra wrote:
On Thu, Feb 13, 2014 at 07:51:56PM +0400, Kirill Tkhai wrote:
For archs without __ARCH_WANT_UNLOCKED_CTXSW set this means
On Fri, 14/02/2014 at 12:21 +, Catalin Marinas wrote:
On Fri, Feb 14, 2014 at 11:16:09AM +, Kirill Tkhai wrote:
On Fri, 14/02/2014 at 10:52 +, Catalin Marinas wrote:
On Thu, Feb 13, 2014 at 09:32:22PM +0400, Kirill Tkhai wrote:
Look at ARM64's finish_arch_post_lock_switch
On Fri, 14/02/2014 at 12:35 +, Catalin Marinas wrote:
On Thu, Feb 13, 2014 at 07:51:56PM +0400, Kirill Tkhai wrote:
The preemption state on entry to finish_task_switch() is different
in the cases of context_switch() and schedule_tail().
In the first case we have it disabled twice: at the start
On Fri, 14/02/2014 at 15:49 +, Catalin Marinas wrote:
On Fri, Feb 14, 2014 at 12:44:01PM +, Kirill Tkhai wrote:
On Fri, 14/02/2014 at 12:35 +, Catalin Marinas wrote:
On Thu, Feb 13, 2014 at 07:51:56PM +0400, Kirill Tkhai wrote:
Preemption state on enter in finish_task_switch
In the deadline class we do not have group scheduling like in RT.
dl_nr_total is the same as dl_nr_running. So, one of them should
be removed.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Juri Lelli juri.le...@gmail.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@redhat.com
Since deadline tasks share rt bandwidth, we must make sure the
bandwidth timer is set. Otherwise rt_time may grow up to infinity
in update_curr_dl() if there are no other available RT tasks
on the top-level bandwidth.
I'm going to solve the problem the way below. Almost untested,
because I skipped
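A sketch of one possible direction (the helper name and exact condition are assumptions, not the final fix): only charge deadline runtime to the root rt_rq while the RT bandwidth period timer is actually running, so nothing accumulates without ever being replenished.

	/*
	 * Illustrative sketch only: def_rt_bandwidth and the rt_rq fields
	 * are as in kernel/sched of that period; the helper is hypothetical.
	 */
	static void dl_charge_rt_time(struct rq *rq, u64 delta_exec)
	{
		struct rt_rq *rt_rq = &rq->rt;

		if (!rt_bandwidth_enabled())
			return;

		raw_spin_lock(&rt_rq->rt_runtime_lock);
		/* Charge only while the period timer will replenish rt_time. */
		if (hrtimer_active(&def_rt_bandwidth.rt_period_timer))
			rt_rq->rt_time += delta_exec;
		raw_spin_unlock(&rt_rq->rt_runtime_lock);
	}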
21.02.2014, 14:37, Peter Zijlstra pet...@infradead.org:
On Thu, Feb 20, 2014 at 02:16:00AM +0400, Kirill Tkhai wrote:
Since deadline tasks share rt bandwidth, we must make sure the
bandwidth timer is set. Otherwise rt_time may grow up to infinity
in update_curr_dl(), if there are no other
21.02.2014, 15:39, Kirill Tkhai tk...@yandex.ru:
21.02.2014, 14:37, Peter Zijlstra pet...@infradead.org:
On Thu, Feb 20, 2014 at 02:16:00AM +0400, Kirill Tkhai wrote:
Since deadline tasks share rt bandwidth, we must make sure the
bandwidth timer is set. Otherwise rt_time may grow up
21.02.2014, 16:44, Juri Lelli juri.le...@gmail.com:
On Fri, 21 Feb 2014 16:09:25 +0400
Kirill Tkhai tk...@yandex.ru wrote:
21.02.2014, 15:39, Kirill Tkhai tk...@yandex.ru:
21.02.2014, 14:37, Peter Zijlstra pet...@infradead.org:
On Thu, Feb 20, 2014 at 02:16:00AM +0400, Kirill Tkhai
06.01.2014, 07:56, Allen Pais allen.p...@oracle.com:
In the attempt to get PREEMPT_RT working on sparc64 using
linux-stable-rt version 3.10.22-rt19+, the kernel crashes
with the following trace:
[ 1487.027884] I7: rt_mutex_setprio+0x3c/0x2c0
[ 1487.027885] Call Trace:
[ 1487.027887]
11.02.2014, 16:17, tip-bot for Peter Zijlstra tip...@zytor.com:
Commit-ID: 606dba2e289446600a0b68422ed2019af5355c12
Gitweb: http://git.kernel.org/tip/606dba2e289446600a0b68422ed2019af5355c12
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Sat, 11 Feb 2012 06:05:00 +0100
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Thomas Gleixner t...@linutronix.de
---
kernel/smpboot.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index eb89e18..c6e1c56 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
12.02.2014, 18:06, Peter Zijlstra pet...@infradead.org:
On Wed, Feb 12, 2014 at 11:00:53AM +0400, Kirill Tkhai wrote:
@@ -4748,7 +4743,7 @@ static void migrate_tasks(unsigned int dead_cpu)
if (rq->nr_running == 1)
break;
- next = pick_next_task
On 03/18/2014 05:14 PM, Kirill Tkhai wrote:
18.03.2014, 15:08, Preeti Murthy preeti.l...@gmail.com:
On Sat, Mar 15, 2014 at 3:44 AM, Kirill Tkhai tk...@yandex.ru wrote:
{inc,dec}_rt_tasks used to count entities which are directly queued
on rt_rq. If an entity was not a task (i.e
these situations.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
---
include/linux/spinlock.h | 27 +++
include/linux/spinlock_api_smp.h | 37
On Wed, 2013-06-12 at 09:07 -0400, Steven Rostedt wrote:
On Wed, 2013-06-12 at 14:15 +0200, Peter Zijlstra wrote:
So I absolutely hate this API because people can (and invariably will)
abuse it; much like they did/do preempt_enable_no_resched().
Me too.
IIRC Thomas even maps
On 12/06/13 17:07, Steven Rostedt wrote:
On Wed, 2013-06-12 at 14:15 +0200, Peter Zijlstra wrote:
So I absolutely hate this API because people can (and invariably will)
abuse it; much like they did/do preempt_enable_no_resched().
Me too.
IIRC Thomas even maps preempt_enable_no_resched()
Helpers for replacing the repeating patterns:
1)raw_spin_unlock_irq(lock);
schedule();
2)raw_spin_unlock_irqrestore(lock, flags);
schedule();
(The same for spinlock_t)
They allow us to prevent an excess preempt_schedule(), which can happen on a
preemptible kernel.
Signed-off-by: Kirill Tkhai tk
17.06.2013, 18:29, Steven Rostedt rost...@goodmis.org:
On Fri, 2013-06-14 at 18:40 +0400, Kirill Tkhai wrote:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..381e493 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3125,6 +3125,30 @@ asmlinkage
of struct rt_rq and the functions connected
with it: nobody uses them now.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
---
kernel/sched/rt.c| 82
The current gps support in hp_wmi_rfkill_setup() looks like a bad copy/paste.
It leads to a kernel panic on my HP530 laptop. So I did:
1) Fix the wwan/gps register_*_error label order
2) Fix the wrong rfkill_set_hw_state() argument in the gps case
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Matthew Garrett
One more error... I'll resend the patch to the x86-platform mailing list
01.06.2013, 01:53, Kirill Tkhai tk...@yandex.ru:
The current gps support in hp_wmi_rfkill_setup() looks like a bad copy/paste.
It leads to a kernel panic on my HP530 laptop. So I did:
1) Fix the wwan/gps register_*_error label order
2) Fix
.
Furthermore, it doesn't handle the single rt_se case.
3) Make task_tick_rt() prettier.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
---
kernel/sched/rt.c | 49
Only one task can replace the waker.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Steven Rostedt rost...@goodmis.org
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
---
kernel/sched/core.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel
cfs_rq is declared twice. place_entity() doesn't change cfs_rq,
so the second declaration is a mistake. Fix that
(and use the se declared above instead of p->se).
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
CC: Steven Rostedt rost...@goodmis.org
suited for push_rt_task().
So, kill the check and the now-unused rt_overloaded() and rto_count.
Move root_domain's refcount to the bottom of the structure
to keep its fields aligned.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Ingo Molnar mi...@redhat.com
CC: Peter Zijlstra pet...@infradead.org
crossing border.
Signed-off-by: Kirill Tkhai tk...@yandex.ru
CC: Frederic Weisbecker fweis...@gmail.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@kernel.org
---
kernel/sched/deadline.c |4 ++--
kernel/sched/fair.c |8
kernel/sched/rt.c|4
of write_lock().
This function does not change any structures, and
read_lock() is enough.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@kernel.org
---
kernel/cpu.c | 33 -
1 file changed, 20
This series fixes the migration of throttled tasks and hierarchies
from a dead cpu. Currently, they do not migrate at all, and they
may stay forever on the dead cpu as if in jail.
The first patch fixes migration; the second patch adds a small
set of cases in which we should warn in dmesg.
---
Kirill Tkhai (2
it has to do.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
CC: Peter Zijlstra pet...@infradead.org
CC: Ingo Molnar mi...@kernel.org
---
kernel/sched/core.c | 75 ++-
1 file changed, 20 insertions(+), 55 deletions(-)
diff --git a/kernel/sched
On Wed, 11/06/2014 at 12:57 +0200, Peter Zijlstra wrote:
On Wed, Jun 11, 2014 at 01:52:10PM +0400, Kirill Tkhai wrote:
Currently migrate_tasks() skips throttled tasks,
because they are not pickable by pick_next_task().
These tasks stay on the dead cpu even after they
become unthrottled
11.06.2014, 15:24, Srikar Dronamraju sri...@linux.vnet.ibm.com:
* Kirill Tkhai ktk...@parallels.com [2014-06-11 13:52:10]:
Currently migrate_tasks() skips throttled tasks,
because they are not pickable by pick_next_task().
Before migrate_tasks() is called, we do call set_rq_offline