On 23/02/21 12:00, Sebastian Andrzej Siewior wrote:
> On 2021-02-23 11:49:07 [+0100], Juri Lelli wrote:
> > Hi,
> Hi,
>
> > I'm seeing the following splat right after boot (or during late boot
> > phases) with v5.11-rt7 (LOCKDEP enabled).
> …
> > [
Hi,
I'm seeing the following splat right after boot (or during late boot
phases) with v5.11-rt7 (LOCKDEP enabled).
[ 85.273583] ------------[ cut here ]------------
[ 85.273588] WARNING: CPU: 5 PID: 1416 at include/linux/seqlock.h:271
nft_counter_eval+0x95/0x130 [nft_counter]
[ 85.273600]
The following commit has been merged into the sched/core branch of tip:
Commit-ID: e0ee463c93c43b1657ad69cf2678ff5bf1b754fe
Gitweb:
https://git.kernel.org/tip/e0ee463c93c43b1657ad69cf2678ff5bf1b754fe
Author: Juri Lelli
AuthorDate: Mon, 08 Feb 2021 08:35:54 +01:00
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 156ec6f42b8d300dbbf382738ff35c8bad8f4c3a
Gitweb:
https://git.kernel.org/tip/156ec6f42b8d300dbbf382738ff35c8bad8f4c3a
Author: Juri Lelli
AuthorDate: Mon, 08 Feb 2021 08:35:53 +01:00
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: f2ebf3f45f7a68b67d456296e5efbb58577fb771
Gitweb:
https://git.kernel.org/tip/f2ebf3f45f7a68b67d456296e5efbb58577fb771
Author: Juri Lelli
AuthorDate: Mon, 08 Feb 2021 08:35:54 +01:00
Committer
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 0abadfdf696f648ed32fa1bd16d4e0358de19bab
Gitweb:
https://git.kernel.org/tip/0abadfdf696f648ed32fa1bd16d4e0358de19bab
Author: Juri Lelli
AuthorDate: Mon, 08 Feb 2021 08:35:53 +01:00
Committer
Hi,
On 06/02/21 12:48, t...@redhat.com wrote:
> From: Tom Rix
>
> When the BUG_ON check for (flags != ENQUEUE_REPLENISH) was created, the
> flag was set to ENQUEUE_REPLENISH in rt_mutex_setprio(), now it is or-ed
> in. So the checking logic needs to change.
>
> Fixes: 1de64443d755
ck) to
ensure hrtick hrtimer reprogramming is entirely guarded by the base
lock, so that no race conditions can occur.
Co-developed-by: Daniel Bristot de Oliveira
Signed-off-by: Daniel Bristot de Oliveira
Co-developed-by: Luis Claudio R. Goncalves
Signed-off-by: Luis Claudio R. Goncalves
Signed-off-by: Juri Lelli
---
kernel/sched/core.c | 2 +-
kernel/sched/deadline.c | 4 ++--
kernel/sched/fair.c | 4 ++--
kernel/sched/features.h | 1 +
kernel/sched/sched.
e it only to service DEADLINE and
leave NORMAL task preemption points less fine grained.
Series available at
https://github.com/jlelli/linux.git sched/hrtick-fixes
Hope they both make sense. Comments, questions and suggestions are more
than welcome.
Best,
Juri
Juri Lelli (2):
sched/
-off-by: Daniel Bristot de Oliveira
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Juri Lelli
> Cc: Vincent Guittot
> Cc: Dietmar Eggemann
> Cc: Steven Rostedt
> Cc: Ben Segall
> Cc: Mel Gorman
> Cc: Daniel Bristot de Oliveira
> Cc: Li Zefan
> Cc: Tejun Heo
ow frequency switching system with cpufreq gov schedutil has
> a DL task (sugov) per frequency domain running which participates in DL
> bandwidth management.
>
> Reviewed-by: Quentin Perret
> Signed-off-by: Dietmar Eggemann
Looks good to me, thanks!
Acked-by: Juri Lelli
Best,
Juri
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 2279f540ea7d05f22d2f0c4224319330228586bc
Gitweb:
https://git.kernel.org/tip/2279f540ea7d05f22d2f0c4224319330228586bc
Author: Juri Lelli
AuthorDate: Tue, 17 Nov 2020 07:14:32 +01:00
ot de Oliveira
Signed-off-by: Juri Lelli
---
v1->v2: Replace dl_boosted with inline function (Valentin)
v1: 20201105075021.1302386-1-juri.le...@redhat.com
---
include/linux/sched.h | 10 -
kernel/sched/core.c | 11 ++---
kernel/sched/deadline.c | 97 ++--
Hi Valentin,
On 05/11/20 15:49, Valentin Schneider wrote:
>
> Hi Juri,
>
> On 05/11/20 07:50, Juri Lelli wrote:
> > He also provided a simple reproducer creating the situation below:
> >
> > So the execution order of locking steps are the following
> > (
ot de Oliveira
Signed-off-by: Juri Lelli
---
This is actually a v2 attempt (didn't keep $SUBJECT since it's quite
different than v1 [1]) to fix this problem.
v1 was admittedly pretty ugly. Hope this looks slightly better (even
though there is of course overhead associated to the additional
pointer).
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: a73f863af4ce9730795eab7097fb2102e6854365
Gitweb:
https://git.kernel.org/tip/a73f863af4ce9730795eab7097fb2102e6854365
Author: Juri Lelli
AuthorDate: Tue, 13 Oct 2020 07:31:14 +02:00
el/sched/sched.h| 51 ++---
> kernel/sched/topology.c | 1 +
> 3 files changed, 63 insertions(+), 33 deletions(-)
These look now good to me. Thanks a lot!
Acked-by: Juri Lelli
Best,
Juri
On 13/10/20 10:26, Patrick Bellasi wrote:
>
> On Tue, Oct 13, 2020 at 07:31:14 +0200, Juri Lelli
> wrote...
>
> > Commit 765cc3a4b224e ("sched/core: Optimize sched_feat() for
> > !CONFIG_SCHED_DEBUG builds") made sched features static for
> > !CONFIG_SC
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: da912c29a4a552588cbfa895487d9d5523b6faa7
Gitweb:
https://git.kernel.org/tip/da912c29a4a552588cbfa895487d9d5523b6faa7
Author: Juri Lelli
AuthorDate: Tue, 13 Oct 2020 07:31:14 +02:00
ing ifdefs.
Fixes: 765cc3a4b224e ("sched/core: Optimize sched_feat() for
!CONFIG_SCHED_DEBUG builds")
Co-developed-by: Daniel Bristot de Oliveira
Signed-off-by: Daniel Bristot de Oliveira
Signed-off-by: Juri Lelli
---
v1->v2
- use CONFIG_JUMP_LABEL (and not the old HAVE_JUMP_LABEL)
ing ifdefs.
Fixes: 765cc3a4b224e ("sched/core: Optimize sched_feat() for
!CONFIG_SCHED_DEBUG builds")
Co-developed-by: Daniel Bristot de Oliveira
Signed-off-by: Daniel Bristot de Oliveira
Signed-off-by: Juri Lelli
---
kernel/sched/core.c | 2 +-
kernel/sched/sched.h | 13 ++---
On 09/10/20 14:47, Juri Lelli wrote:
> The following BUG has been reported (slightly edited):
>
> BUG: using smp_processor_id() in preemptible [] code: handler106/3082
> caller is flow_lookup.isra.15+0x2c/0xf0 [openvswitch]
> CPU: 46 PID: 3082 Comm: handler106 Not
(and migratable).
Fix it by adding get/put_cpu_light(), so that, even if preempted, the
task executing this code is not migrated (operation is also guarded by
ovs_mutex mutex).
Signed-off-by: Juri Lelli
---
net/openvswitch/flow_table.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git
On 06/10/20 16:48, Peter Zijlstra wrote:
> On Tue, Oct 06, 2020 at 04:37:04PM +0200, Juri Lelli wrote:
> > On 06/10/20 15:48, Peter Zijlstra wrote:
> > > On Tue, Oct 06, 2020 at 12:20:43PM +0100, Valentin Schneider wrote:
> > > >
> > > > On 05/10/20 15:57
On 06/10/20 15:48, Peter Zijlstra wrote:
> On Tue, Oct 06, 2020 at 12:20:43PM +0100, Valentin Schneider wrote:
> >
> > On 05/10/20 15:57, Peter Zijlstra wrote:
> > > In order to minimize the interference of migrate_disable() on lower
> > > priority tasks, which can be deprived of runtime due to
Hi,
On 05/10/20 16:57, Peter Zijlstra wrote:
> Replace a bunch of cpumask_any*() instances with
> cpumask_any*_distribute(), by injecting this little bit of random in
> cpu selection, we reduce the chance two competing balance operations
> working off the same lowest_mask pick the same CPU.
>
>
On 06/10/20 09:56, luca abeni wrote:
> Hi,
>
> sorry for the late reply... Anyway, I am currently testing this
> patchset (and trying to use it for the "SCHED_DEADLINE-based cgroup
> scheduling" patchset).
> And during my tests I had a doubt:
>
>
>
> O
Hi,
On 26/09/20 00:20, Peng Liu wrote:
> I created another root domain(contains 2 CPUs) besides the default
> one, and the global default rt bandwidth is 95%. Then launched a
> DL process which need 25% bandwidth and moved it to the new root
> domain, so far so good.
>
> Then I tried to change
;
> This problem is avoided by removing the throttle state from the boosted
> thread while boosting it (by TASK A in the example above), allowing it to
> be queued and run boosted.
>
> The next replenishment will take care of the runtime overrun, pushing
> the deadline further a
Hi,
On 15/09/20 23:20, Peng Liu wrote:
> When user changes sched_rt_{runtime, period}_us, then
>
> sched_rt_handler()
> --> sched_dl_bandwidth_validate()
> {
> new_bw = global_rt_runtime()/global_rt_period();
>
> for_each_possible_cpu(cpu) {
>
Hi Pavel,
On 09/09/20 00:22, Pavel Machek wrote:
> Hi!
>
> > This is RFC v2 of Peter's SCHED_DEADLINE server infrastructure
> > implementation [1].
> >
> > SCHED_DEADLINE servers can help fixing starvation issues of low priority
> > tasks (e.g.,
> > SCHED_OTHER) when higher priority tasks
On 04/09/20 11:26, Daniel Bristot de Oliveira wrote:
> As discussed with Juri and Peter.
>
> Signed-off-by: Daniel Bristot de Oliveira
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Juri Lelli
> Cc: Vincent Guittot
> Cc: Dietmar Eggemann
> Cc: Steven Rostedt
ed = 0;
> BUG_ON(!p->dl.dl_boosted || flags != ENQUEUE_REPLENISH);
> return;
> }
Ah, right, thanks for looking into this issue!
Wonder if we should be calling __dl_clear_params() instead of just
clearing dl_throttled, but what you propose makes sense to me.
Acked-by: Juri Lelli
Best,
Juri
On 07/08/20 16:13, pet...@infradead.org wrote:
> On Fri, Aug 07, 2020 at 03:43:53PM +0200, Juri Lelli wrote:
>
> > Right, but I fear we won't be able to keep current behavior for wakeups:
> > RT with highest prio always gets scheduled right away?
>
> If you consider RT th
On 07/08/20 15:55, luca abeni wrote:
> On Fri, 7 Aug 2020 15:43:53 +0200
> Juri Lelli wrote:
>
> > On 07/08/20 15:28, luca abeni wrote:
> > > Hi Juri,
> > >
> > > On Fri, 7 Aug 2020 11:56:04 +0200
> > > Juri Lelli wrote:
> > >
On 07/08/20 15:41, luca abeni wrote:
> Hi Juri,
>
> On Fri, 7 Aug 2020 15:30:41 +0200
> Juri Lelli wrote:
> [...]
> > > In the meanwhile, I have some questions/comments after a first quick
> > > look.
> > >
> > > If I understand well, the pat
On 07/08/20 15:28, luca abeni wrote:
> Hi Juri,
>
> On Fri, 7 Aug 2020 11:56:04 +0200
> Juri Lelli wrote:
>
> > Starting deadline server for lower priority classes right away when
> > first task is enqueued might break guarantees
>
> Which guarantees are y
Hi Luca,
On 07/08/20 15:16, luca abeni wrote:
> Hi Juri,
>
> thanks for sharing the v2 patchset!
>
> In the next days I'll have a look at it, and try some tests...
Thanks!
> In the meanwhile, I have some questions/comments after a first quick
> look.
>
> If I understand well, the patchset
On 07/08/20 13:30, Daniel Bristot de Oliveira wrote:
> On 8/7/20 12:46 PM, pet...@infradead.org wrote:
> > On Fri, Aug 07, 2020 at 11:56:04AM +0200, Juri Lelli wrote:
> >> Starting deadline server for lower priority classes right away when
> >> first task is enqu
of time after it has been enqueued.
Use pick/put functions to manage starvation monitor status.
Signed-off-by: Juri Lelli
---
kernel/sched/fair.c | 57 ++--
kernel/sched/sched.h | 4
2 files changed, 59 insertions(+), 2 deletions(-)
diff --git a/kernel
From: Peter Zijlstra
In preparation of introducing !task sched_dl_entity; move the
bandwidth accounting into {en.de}queue_dl_entity().
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/deadline.c | 128 ++--
kernel/sched/sched.h| 6 ++
2 files
From: Peter Zijlstra
Use deadline servers to service fair tasks.
This patch adds a fair_server deadline entity which acts as a container
for fair entities and can be used to fix starvation when higher priority
(wrt fair) tasks are monopolizing CPU(s).
Signed-off-by: Peter Zijlstra (Intel)
---
From: Peter Zijlstra
Low priority tasks (e.g., SCHED_OTHER) can suffer starvation if tasks
with higher priority (e.g., SCHED_FIFO) monopolize CPU(s).
RT Throttling has been introduced a while ago as a (mostly debug)
countermeasure one can utilize to reserve some CPU time for low priority
tasks
From: Peter Zijlstra
Create a single function that initializes a sched_dl_entity.
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/core.c | 5 +
kernel/sched/deadline.c | 22 +++---
kernel/sched/sched.h| 5 +
3 files changed, 17 insertions(+), 15
ise anyway after I'm back from pto :), but try to
see if this might actually fly. The feature seems to be very much needed.
Thanks!
Juri
1 - https://lore.kernel.org/lkml/20190726145409.947503...@infradead.org/
Juri Lelli (1):
sched/fair: Implement starvation monitor
Peter Zijlstra (5):
sc
From: Peter Zijlstra
All classes use sched_entity::exec_start to track runtime and have
copies of the exact same code around to compute runtime.
Collapse all that.
Signed-off-by: Peter Zijlstra (Intel)
---
include/linux/sched.h| 2 +-
kernel/sched/deadline.c | 17 +++---
Hi,
On 07/07/20 00:04, Peng Liu wrote:
> 'commit 840d719604b0 ("sched/deadline: Update rq_clock of later_rq when
> pushing a task")'
> introduced the update_rq_clock() to fix the "used-before-update" bug.
>
> 'commit f4904815f97a ("sched/deadline: Fix double accounting of rq/running bw
> in
On 13/07/20 15:22, Juri Lelli wrote:
[...]
> Gentle ping about this issue (mainly addressing relevant maintainers and
> potential reviewers). It's easily reproducible with PREEMPT_RT.
Ping. Any comment at all? :-)
Thanks,
Juri
Hi,
On 06/07/20 08:45, Juri Lelli wrote:
> On 03/07/20 19:11, He Zhe wrote:
> >
> >
> > On 7/3/20 4:12 PM, Juri Lelli wrote:
> > > Hi,
> > >
> > > On 10/04/20 19:47, zhe...@windriver.com wrote:
> > >> From: He Zhe
> > >>
Thomas)
>
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> timers/softirq-v2
>
> HEAD: 5545d80b7b9bd69ede1c17fda194ac6620e7063f
>
> Thanks,
> Frederic
> ---
Testing of this set looks good (even with RT). Feel free to add
Tes
On 03/07/20 19:11, He Zhe wrote:
>
>
> On 7/3/20 4:12 PM, Juri Lelli wrote:
> > Hi,
> >
> > On 10/04/20 19:47, zhe...@windriver.com wrote:
> >> From: He Zhe
> >>
> >> commit b5e683d5cab8 ("eventfd: track eventfd_signal() recursion dept
Hi,
On 10/04/20 19:47, zhe...@windriver.com wrote:
> From: He Zhe
>
> commit b5e683d5cab8 ("eventfd: track eventfd_signal() recursion depth")
> introduces a percpu counter that tracks the percpu recursion depth and
> warn if it greater than zero, to avoid potential deadlock and stack
>
On 02/07/20 16:32, Frederic Weisbecker wrote:
> On Thu, Jul 02, 2020 at 11:59:59AM +0200, Juri Lelli wrote:
> > On 02/07/20 01:20, Frederic Weisbecker wrote:
> > > On Wed, Jul 01, 2020 at 06:35:04PM +0200, Juri Lelli wrote:
> > > > Guess you might be faster
On 02/07/20 01:20, Frederic Weisbecker wrote:
> On Wed, Jul 01, 2020 at 06:35:04PM +0200, Juri Lelli wrote:
> > Guess you might be faster to understand what I'm missing. :-)
>
> So, did you port only this patch or the whole set in order to
> trigger this?
>
> If it was th
>clk. As a possible outcome, timers may expire way too
> early, the worst case being that the highest wheel levels get spuriously
> processed again.
>
> To prevent from that, make sure that base->next_expiry doesn't get below
> base->clk.
>
> Signed-off-by: Fr
On 29/06/20 14:42, Frederic Weisbecker wrote:
> On Mon, Jun 29, 2020 at 02:36:51PM +0200, Juri Lelli wrote:
> > Hi,
> >
> > On 16/06/20 22:46, Frederic Weisbecker wrote:
> > > On Tue, Jun 16, 2020 at 08:57:57AM +0200, Juri Lelli wrote:
> > > &
Hi,
On 16/06/20 22:46, Frederic Weisbecker wrote:
> On Tue, Jun 16, 2020 at 08:57:57AM +0200, Juri Lelli wrote:
> > Sure. Let me know if you find anything.
>
> I managed to reproduce. With "threadirqs" and without
> "tsc=reliable". I see tons of spurious
Hi,
On 24/06/20 23:13, Sai Harshini Nimmala wrote:
> The original commit 9659e1ee removes checking the cpu_active_mask
> while finding the best cpu to place a deadline task, citing the reason that
> this mask rarely changes and removing the check will give performance
> gains.
> However, on
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 1863cc11225e3ea2cd005473f9addc52513ab1bc
Gitweb:
https://git.kernel.org/tip/1863cc11225e3ea2cd005473f9addc52513ab1bc
Author: Juri Lelli
AuthorDate: Wed, 17 Jun 2020 09:29:19 +02:00
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 93a952a81bf31bffaf21eca1b530245acce12597
Gitweb:
https://git.kernel.org/tip/93a952a81bf31bffaf21eca1b530245acce12597
Author: Juri Lelli
AuthorDate: Mon, 19 Nov 2018 16:32:01 +01:00
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 5bf857422d6b36b1edff43348054edd3379d069d
Gitweb:
https://git.kernel.org/tip/5bf857422d6b36b1edff43348054edd3379d069d
Author: Juri Lelli
AuthorDate: Wed, 17 Jun 2020 09:29:19 +02:00
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 195819207674143c790809f97f96102c7fada077
Gitweb:
https://git.kernel.org/tip/195819207674143c790809f97f96102c7fada077
Author: Juri Lelli
AuthorDate: Mon, 19 Nov 2018 16:32:01 +01:00
Hi,
On 17/06/20 22:49, Daniel Wagner wrote:
> Hi Juri,
>
> On Wed, Jun 17, 2020 at 09:29:19AM +0200, Juri Lelli wrote:
> > This happens because dl_boosted flag is currently not initialized by
> > __dl_clear_params() (unlike the other flags) and setup_new_dl_entity()
>
it.
Initialize dl_boosted to 0.
Reported-by: syzbot+5ac8bac25f95e8b22...@syzkaller.appspotmail.com
Signed-off-by: Juri Lelli
Tested-by: Daniel Wagner
---
kernel/sched/deadline.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 504d2f51b0d6
On 15/06/20 23:07, Frederic Weisbecker wrote:
> On Thu, May 21, 2020 at 07:00:20PM +0200, Juri Lelli wrote:
[...]
> > sysjitter-2377 [004] 100.438495: sched_switch: sysjitter:2377
> > [120] R ==> ksoftirqd/4:31 [120]
> > ksoftirqd/4-31[004] 1
Hi,
On 21/05/20 19:00, Juri Lelli wrote:
> On 21/05/20 02:44, Frederic Weisbecker wrote:
> > On Wed, May 20, 2020 at 08:47:10PM +0200, Juri Lelli wrote:
> > > On 20/05/20 19:02, Frederic Weisbecker wrote:
> > > > On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli w
hed/sched.h
> +++ b/kernel/sched/sched.h
> @@ -310,11 +310,11 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
> __dl_update(dl_b, -((s32)tsk_bw / cpus));
> }
>
> -static inline
> -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
> +static inline bool __dl_overflow(struct dl_bw *dl_b, unsigned long cap,
> + u64 old_bw, u64 new_bw)
> {
> return dl_b->bw != -1 &&
> -dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
> +cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
> }
>
> extern void init_dl_bw(struct dl_bw *dl_b);
> --
Acked-by: Juri Lelli
(cpu == task_cpu(p) && cap == max_cap)) {
> + max_cap = cap;
> + max_cpu = cpu;
> + }
> + }
> }
>
> - if (!cpumask_empty(later_mask))
> - return 1;
> + if (cpumask_empty(later_m
*
> + * The function will return true if the CPU original capacity of the
> + * @cpu scaled by SCHED_CAPACITY_SCALE >= runtime/deadline ratio of the
> + * task and false otherwise.
> + */
> +static inline bool dl_task_fits_capacity(struct task_struct *p, int cpu)
> +{
> + unsigned long cap = arch_scale_cpu_capacity(cpu);
> +
> + return cap_scale(p->dl.dl_deadline, cap) >= p->dl.dl_runtime;
> +}
> +
> extern void init_dl_bw(struct dl_bw *dl_b);
> extern int sched_dl_global_validate(void);
> extern void sched_dl_do_global(void);
> --
Acked-by: Juri Lelli
;
> + } else {
> + return __dl_bw_capacity(i);
> + }
> +}
> #else
> static inline struct dl_bw *dl_bw_of(int i)
> {
> @@ -79,6 +107,11 @@ static inline int dl_bw_cpus(int i)
> {
> return 1;
> }
> +
> +static inline unsigned long dl_bw_capacity(int i)
> +{
> + return SCHED_CAPACITY_SCALE;
> +}
> #endif
>
> static inline
> --
Acked-by: Juri Lelli
"sched RCU must be held");
> +
> + if (cpumask_subset(rd->span, cpu_active_mask))
> + return cpumask_weight(rd->span);
> +
> + cpus = 0;
> +
> for_each_cpu_and(i, rd->span, cpu_active_mask)
> cpus++;
>
> --
Acked-by: Juri Lelli
On 21/05/20 02:44, Frederic Weisbecker wrote:
> On Wed, May 20, 2020 at 08:47:10PM +0200, Juri Lelli wrote:
> > On 20/05/20 19:02, Frederic Weisbecker wrote:
> > > On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli wrote:
> > > > On 20/05/20 18
On 20/05/20 19:02, Frederic Weisbecker wrote:
> On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli wrote:
> > On 20/05/20 18:24, Frederic Weisbecker wrote:
> >
> > Hummm, so I enabled 'timer:*', anything else you think I should be
> > looking at?
>
Hi Peter,
On 26/07/19 16:54, Peter Zijlstra wrote:
>
> Cc: Daniel Bristot de Oliveira
> Cc: Luca Abeni
> Cc: Juri Lelli
> Cc: Dmitry Vyukov
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> include/linux/sched/sysctl.h |3 +++
> kernel/s
On 20/05/20 18:24, Frederic Weisbecker wrote:
> Hi Juri,
>
> On Wed, May 20, 2020 at 04:04:02PM +0200, Juri Lelli wrote:
> > After tasks enter or leave a runqueue (wakeup/block) SCHED full_nohz
> > dependency is checked (via sched_update_tick_dependency()). In case ti
stopped.
Signed-off-by: Juri Lelli
---
Hi,
I noticed what seems to be the problem described in the changelog while
running sysjitter [1] on a PREEMPT system setup for isolation and full
nohz, i.e. ... skew_tick=1 nohz=on nohz_full=4-35 rcu_nocbs=4-35
threadirqs (36 CPUs box).
Starting sysjitter
On 12/05/20 14:39, Dietmar Eggemann wrote:
> On 11/05/2020 10:01, Juri Lelli wrote:
> > On 06/05/20 17:09, Dietmar Eggemann wrote:
> >> On 06/05/2020 14:37, Juri Lelli wrote:
> >>> On 06/05/20 12:54, Dietmar Eggemann wrote:
> >>>>
On 06/05/20 17:09, Dietmar Eggemann wrote:
> On 06/05/2020 14:37, Juri Lelli wrote:
> > On 06/05/20 12:54, Dietmar Eggemann wrote:
> >> On 27/04/2020 10:37, Dietmar Eggemann wrote:
>
> [...]
>
> >> There is an issue w/ excl. cpusets and cpuset.sched_load_
Hello,
On 19/12/19 11:35, Juri Lelli wrote:
> Power Management and Scheduling in the Linux Kernel (OSPM-summit) IV edition
>
> May 11-13, 2019
> Scuola Superiore Sant'Anna
> Pisa, Italy
>
Quick reminder that OSPM-summit IV edition is happening next week!
Not in Pisa (f
On 06/05/20 12:54, Dietmar Eggemann wrote:
> On 27/04/2020 10:37, Dietmar Eggemann wrote:
>
> [...]
>
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index 4ae22bfc37ae..eb23e6921d94 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> > @@ -69,6
On 30/04/20 00:51, Michel Lespinasse wrote:
> On Thu, Apr 30, 2020 at 12:28 AM Juri Lelli wrote:
> > > --- a/include/linux/rbtree.h
> > > +++ b/include/linux/rbtree.h
> > > @@ -141,12 +141,18 @@ static inline void rb_insert_color_cache
> > &g
Hi,
On 29/04/20 17:32, Peter Zijlstra wrote:
> I've always been bothered by the endless (fragile) boilerplate for
> rbtree, and I recently wrote some rbtree helpers for objtool and
> figured I should lift them into the kernel and use them more widely.
>
> Provide:
>
> partial-order; less()
On 09/10/19 14:12, Scott Wood wrote:
> On Wed, 2019-10-09 at 09:27 +0200, Juri Lelli wrote:
> > On 09/10/19 01:25, Scott Wood wrote:
> > > On Tue, 2019-10-01 at 10:52 +0200, Juri Lelli wrote:
> > > > On 30/09/19 11:24, Scott Wood wrote:
> > > > > On Mon
On 09/10/19 01:25, Scott Wood wrote:
> On Tue, 2019-10-01 at 10:52 +0200, Juri Lelli wrote:
> > On 30/09/19 11:24, Scott Wood wrote:
> > > On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
> >
> > [...]
> >
> > > > Hummm, I was a
char *what2)
>* Kernel threads bound to a single CPU can safely use
>* smp_processor_id():
>*/
> - if (cpumask_equal(current->cpus_ptr, cpumask_of(this_cpu)))
> + if (current->nr_cpus_allowed == 1)
> goto out;
Makes sense to me.
Reviewed-by: Juri Lelli
Thanks,
Juri
Hi Valentin,
On 01/10/19 11:29, Valentin Schneider wrote:
> (expanded the Cc list)
> RT/DL folks, any thought on the thing?
Even if I like your idea and it looks theoretically the right thing to
do, I'm not sure we want it in practice if it adds complexity to CFS.
I personally never noticed
On 30/09/19 11:24, Scott Wood wrote:
> On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
[...]
> > Hummm, I was actually more worried about the fact that we call free_old_
> > cpuset_bw_dl() only if p->state != TASK_WAKING.
>
> Oh, right. :-P Not sure what I ha
On 27/09/19 11:40, Scott Wood wrote:
> On Fri, 2019-09-27 at 10:11 +0200, Juri Lelli wrote:
> > Hi Scott,
> >
> > On 27/07/19 00:56, Scott Wood wrote:
> > > With the changes to migrate disabling, ->set_cpus_allowed() no longer
> > > gets deferred u
Hi Scott,
On 27/07/19 00:56, Scott Wood wrote:
> With the changes to migrate disabling, ->set_cpus_allowed() no longer
> gets deferred until migrate_enable(). To avoid releasing the bandwidth
> while the task may still be executing on the old CPU, move the subtraction
> to ->migrate_task_rq().
>
Hi,
On 30/08/19 13:24, Peter Zijlstra wrote:
> On Thu, Aug 08, 2019 at 11:45:46AM +0200, Juri Lelli wrote:
> > I'd like to take this last sentence back, I was able to run a few boot +
> > hackbench + shutdown cycles with the following applied (guess too much
> > debug
Hi,
On 05/09/19 10:18, Chunyan Zhang wrote:
> From: Vincent Wang
>
> A deadlock issue is found when executing a cpu hotplug stress test on
> android phones with cpuset and schedutil enabled.
>
> When CPUx is plugged out, the hotplug thread that calls cpu_down()
> will hold cpu_hotplug_lock and
Hi Alessio,
On 03/09/19 15:27, Alessio Balsini wrote:
> Hi Peter,
>
> While testing your series (peterz/sched/wip-deadline 7a9e91d3fe951), I ended
> up
> in a panic at boot on a x86_64 kvm guest, would you please have a look? Here
> attached the backtrace.
> Happy to test any suggestion that
consistently.
Fixes: d4200ab75cdd ("genirq: Handle missing work_struct in
irq_set_affinity_notifier()")
Signed-off-by: Juri Lelli
---
Hi,
This applies to v4.19.59-rt24 (and to all the other branches that have
the patch that introduced the issue). v5.2-rtx doesn't have this probl
On 21/08/19 01:43, Li, Philip wrote:
> > Subject: Re: [RT PATCH v2] net/xfrm/xfrm_ipcomp: Protect scratch buffer with
> > local_lock
> >
> > Hi,
> >
> > On 20/08/19 13:35, kbuild test robot wrote:
> > > Hi Juri,
> > >
> > > Thank you for the patch! Yet something to improve:
> > >
> > > [auto
On 19/08/19 15:57, Steven Rostedt wrote:
> On Mon, 19 Aug 2019 14:27:31 +0200
> Juri Lelli wrote:
>
> > The following BUG has been reported while running ipsec tests.
>
> Thanks!
>
> I'm still in the process of backporting patches to fix some bugs that
> s
Hi,
On 20/08/19 13:35, kbuild test robot wrote:
> Hi Juri,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [cannot apply to v5.3-rc5 next-20190819]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve
ist
rmqueue_bulk
<-- spin_lock(&zone->lock) - BUG
Fix this by replacing get_cpu() with a local lock to protect
ipcomp_scratches buffers used by ipcomp_(de)compress().
Suggested-by: Sebastian Andrzej Siewior
Signed-off-by: Juri Lelli
--
This v2 applies to
Hi,
Not sure if this has been reported before, but we noticed that LTP test
perf_event_open02 [1] fails on 4.19.59-rt24 (and of course passes on the
corresponding stable kernel):
--- 4.19.59 ---
#
/mnt/tests/ltp-full-20190517/testcases/kernel/syscalls/perf_event_open/perf_event_open02
-v
at
On 13/08/19 15:09, Sebastian Andrzej Siewior wrote:
> On 2019-07-31 12:37:15 [+0200], Juri Lelli wrote:
> > Hi,
> Hi,
>
> > Both v4.19-rt and v5.2-rt need this.
> >
> > Mainline "sched: Mark hrtimers to expire in hard interrupt context"
> >