F: drivers/net/ethernet/freescale/fec_ptp.c
> F: drivers/net/ethernet/freescale/fec.h
> F: Documentation/devicetree/bindings/net/fsl-fec.txt
>
> +FREESCALE SOC SPECIFIC DRIVER
FREESCALE SOC DRIVERS
> +M: Scott Wood
Please CC me at this address, not the NXP address th
dd GUTS driver for QorIQ platforms
> dt: move guts devicetree doc out of powerpc directory
> powerpc/fsl: move mpc85xx.h to include/linux/fsl
> mmc: sdhci-of-esdhc: fix host version for T4240-R1.0-R2.0
Acked-by: Scott Wood
-Scott
On Thu, 2015-11-05 at 12:47 +0100, Laurent Vivier wrote:
> When I try to cross compile a ppc64 kernel, it generally
> fails on the VDSO stage. This is true for powerpc64 cross-
> compiler, but also when I try to build a ppc64le kernel
> on a ppc64 host.
>
> VDSO64L fails:
>
> VDSO64L arch/power
On Fri, 2015-11-06 at 23:22 +0100, Laurent Vivier wrote:
> Le 06/11/2015 22:09, Scott Wood a écrit :
> > On Thu, 2015-11-05 at 12:47 +0100, Laurent Vivier wrote:
> > > When I try to cross compile a ppc64 kernel, it generally
> > > fails on the VDSO stage. This
On Fri, 2015-11-20 at 17:56 +, Al Viro wrote:
> On Fri, Nov 20, 2015 at 06:07:59PM +0100, Christophe Leroy wrote:
> > Al,
> >
> > We've been running Kernel 3.18 for several months on our embedded
> > boards, and we have a recurring Oops in link_path_walk()
> > It doesn't happen very often (ap
On Wed, 2015-06-10 at 14:27 +0800, Wenwei Tao wrote:
> Hugetlb VMAs are not mergeable, which means a VMA cannot have VM_HUGETLB
> and
> VM_MERGEABLE set at the same time. So we use VM_HUGETLB to indicate new
> mergeable VMAs. Because of that, a VMA which has VM_HUGETLB set is a
> hugetl
On Mon, 2015-09-14 at 08:21 +0200, Christophe Leroy wrote:
> memset() uses instruction dcbz to speed up clearing by not wasting time
> loading cache line with data that will be overwritten.
> Some platforms like mpc52xx do not have the cache active at startup and
> can therefore not use memset(). Allthou
On Mon, 2015-09-14 at 17:44 +0200, Christophe LEROY wrote:
> Le 14/09/2015 17:20, Scott Wood a écrit :
> > On Mon, 2015-09-14 at 08:21 +0200, Christophe Leroy wrote:
> > > memset() uses instruction dcbz to speed up clearing by not wasting time
> > > loading cache
On Sat, 2015-09-12 at 11:57 +0200, christophe leroy wrote:
> Le 11/09/2015 03:24, Michael Ellerman a écrit :
> > On Thu, 2015-09-10 at 17:05 -0500, Scott Wood wrote:
> > >
> > > I don't think this duplication is what Michael meant by "the normal cpu
> >
This form of the earlycon parameter was added by commit fb11ffe74c794a5
("of/fdt: add FDT serial scanning for earlycon") without documentation.
Signed-off-by: Scott Wood
---
Documentation/kernel-parameters.txt | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/Documentat
On Wed, 2015-10-07 at 14:49 +0200, Christophe Leroy wrote:
> Le 29/09/2015 02:29, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:51:13PM +0200, Christophe Leroy wrote:
> > > flush/clean/invalidate _dcache_range() functions are all very
> > > similar and are quite
On Thu, 2015-10-08 at 14:34 +0200, Christophe Leroy wrote:
> Le 29/09/2015 01:39, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:38PM +0200, Christophe Leroy wrote:
> > > Memory: 124428K/131072K available (3748K kernel code, 188K rwdata,
> > > 648K rodata,
On Mon, Nov 02, 2015 at 07:31:34PM +0200, Madalin Bucur wrote:
> diff --git a/drivers/net/ethernet/freescale/dpaa/Makefile
> b/drivers/net/ethernet/freescale/dpaa/Makefile
> new file mode 100644
> index 000..3847ec7
> --- /dev/null
> +++ b/drivers/net/ethernet/freescale/dpaa/Makefile
> @@ -0,0
migrate disable patchset is applied since it removes a restriction.
Scott Wood (3):
rcu: Acquire RCU lock when disabling BHs
sched: migrate_enable: Use sleeping_lock to indicate involuntary sleep
rcu: Disable use_softirq on PREEMPT_RT
include/linux/rcupdate.h | 4
include/linux/
Without this, rcu_note_context_switch() will complain if an RCU read
lock is held when migrate_enable() calls stop_one_cpu().
Signed-off-by: Scott Wood
---
v2: Added comment.
If my migrate disable changes aren't taken, then pin_current_cpu()
will also need to use sleeping_lock_inc() be
that wouldn't allow blocked BH disablers to be boosted.
Fix this by calling rcu_read_lock() from local_bh_disable(), and update
rcu_read_lock_bh_held() accordingly.
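The nesting behavior this fix establishes can be sketched abstractly (a toy Python model of the counters, not the kernel implementation; the class and method bodies here are illustrative only):

```python
class ToyContext:
    """Toy model of the fix described above: local_bh_disable() also
    takes the RCU read lock, so a BH-disabled region is always an RCU
    read-side critical section."""

    def __init__(self):
        self.bh_count = 0    # BH-disable nesting depth
        self.rcu_count = 0   # RCU read-side nesting depth

    def rcu_read_lock(self):
        self.rcu_count += 1

    def rcu_read_unlock(self):
        self.rcu_count -= 1

    def local_bh_disable(self):
        self.bh_count += 1
        # The fix: disabling BHs implies entering an RCU read-side
        # critical section, so blocked BH disablers are visible to RCU.
        self.rcu_read_lock()

    def local_bh_enable(self):
        self.rcu_read_unlock()
        self.bh_count -= 1

    def rcu_read_lock_bh_held(self):
        # With the fix, the plain RCU nesting count covers BH regions too.
        return self.rcu_count > 0
```

In this model, any region between local_bh_disable() and local_bh_enable() reports rcu_read_lock_bh_held() as true without consulting the BH count separately.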
Signed-off-by: Scott Wood
---
Another question is whether non-raw spinlocks are intended to create an
RCU read-side critical s
[ 137.281998] 039: ? rcu_torture_reader+0x1f0/0x1f0 [rcutorture]
[ 137.287920] 039: kthread+0x106/0x140
[ 137.291591] 039: ? rcu_torture_one_read+0x450/0x450 [rcutorture]
[ 137.297681] 039: ? kthread_bind+0x10/0x10
[ 137.301783] 039: ret_from_fork+0x3a/0x50
Signed-off-by: Scott Wood
---
I think the p
On Thu, 2019-08-22 at 09:59 -0400, Joel Fernandes wrote:
> On Wed, Aug 21, 2019 at 06:19:06PM -0500, Scott Wood wrote:
> > I think the prohibition on use_softirq can be dropped once RT gets the
> > latest RCU code, but the question of what use_softirq should default
> > to
On Wed, 2019-08-21 at 16:35 -0700, Paul E. McKenney wrote:
> On Wed, Aug 21, 2019 at 06:19:05PM -0500, Scott Wood wrote:
> > Without this, rcu_note_context_switch() will complain if an RCU read
> > lock is held when migrate_enable() calls stop_one_cpu().
> >
> > Signed
On Fri, Aug 09, 2019 at 06:07:52PM +0800, Jason Yan wrote:
> Add a new helper create_tlb_entry() to create a tlb entry by the virtual
> and physical address. This is a preparation to support boot kernel at a
> randomized address.
>
> Signed-off-by: Jason Yan
> Cc: Diana Craciun
> Cc: Michael Ell
On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> Since CONFIG_RELOCATABLE is already supported, what we need to do is
>
On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
> This patch adds support to boot the kernel from places other than KERNELBASE.
> Since CONFIG_RELOCATABLE is already supported, what we need to do is
> map or copy kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem t
On Tue, 2019-08-27 at 23:05 -0500, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> > Freescale Book-E
> > parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> > entries are not suitable to map the kernel directly in a randomized
>
On Tue, 2019-08-27 at 11:33 +1000, Michael Ellerman wrote:
> Jason Yan writes:
> > A polite ping :)
> >
> > What else should I do now?
>
> That's a good question.
>
> Scott, are you still maintaining FSL bits,
Sort of... now that it's become very low volume, it's easy to forget when
something
On Wed, 2019-08-28 at 19:03 +0800, Jason Yan wrote:
>
> On 2019/8/28 12:54, Scott Wood wrote:
> > On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
> > > +/*
> > > + * To see if we need to relocate the kernel to a random offset
> > > + * vo
On Wed, 2019-08-21 at 16:33 -0700, Paul E. McKenney wrote:
> On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 388ace315f32..d6e357378732 100644
> > --- a/include/linux/rcupdate.h
>
On Thu, 2019-08-22 at 09:39 -0400, Joel Fernandes wrote:
> On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > Signed-off-by: Scott Wood
> > > ---
> > > Another question is wh
On Fri, 2019-08-23 at 18:20 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-21 18:19:05 [-0500], Scott Wood wrote:
> > Without this, rcu_note_context_switch() will complain if an RCU read
> > lock is held when migrate_enable() calls stop_one_cpu().
> >
> >
On Fri, 2019-08-23 at 18:17 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-22 22:23:23 [-0500], Scott Wood wrote:
> > On Thu, 2019-08-22 at 09:39 -0400, Joel Fernandes wrote:
> > > On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > > > On W
On Mon, 2019-08-26 at 09:29 -0700, Paul E. McKenney wrote:
> On Mon, Aug 26, 2019 at 05:25:23PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-08-23 23:10:14 [-0400], Joel Fernandes wrote:
> > > On Fri, Aug 23, 2019 at 02:28:46PM -0500, Scott Wood wrote:
> > > &g
On Mon, 2019-08-26 at 17:59 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-23 14:46:39 [-0500], Scott Wood wrote:
> > > > Before consolidation, RT mapped rcu_read_lock_bh_held() to
> > > > rcu_read_lock_bh() and called rcu_read_lock() from
> > > > rc
Signed-off-by: Scott Wood
---
v3: Add to pin_current_cpu as well
include/linux/sched.h | 4 ++--
kernel/cpu.c | 2 ++
kernel/rcu/tree_plugin.h | 2 +-
kernel/sched/core.c | 4 ++++
4 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched.h b/include/linux
With these patches, rcutorture works on PREEMPT_RT_FULL.
Scott Wood (5):
rcu: Acquire RCU lock when disabling BHs
sched: Rename sleeping_lock to rt_invol_sleep
sched: migrate_dis/enable: Use rt_invol_sleep
rcu: Disable use_softirq on PREEMPT_RT
rcutorture: Avoid problematic critical
It's already used for one situation other than acquiring a lock, and the
next patch will add another, so change the name to avoid confusion.
Signed-off-by: Scott Wood
---
include/linux/sched.h | 15 ---
kernel/locking/rtmutex.c | 14 +++---
kernel/locking/rwlock
that wouldn't allow blocked BH disablers to be boosted.
Fix this by calling rcu_read_lock() from local_bh_disable(), and update
rcu_read_lock_bh_held() accordingly.
Signed-off-by: Scott Wood
---
v3: Remove change to rcu_read_lock_bh_held(), and move debug portions
of rcu_read_[un]lock_bh() t
[ 137.281998] 039: ? rcu_torture_reader+0x1f0/0x1f0 [rcutorture]
[ 137.287920] 039: kthread+0x106/0x140
[ 137.291591] 039: ? rcu_torture_one_read+0x450/0x450 [rcutorture]
[ 137.297681] 039: ? kthread_bind+0x10/0x10
[ 137.301783] 039: ret_from_fork+0x3a/0x50
Signed-off-by: Scott Wood
---
The prohibi
PREEMPT_RT
kernels, until debug checks are added to ensure that they are not
happening elsewhere.
Signed-off-by: Scott Wood
---
v3: Limit to RT kernels, and remove one constraint that, while it
is bad on both RT and non-RT (missing a schedule), does not oops or
otherwise prevent using rcutorture. It wol
On Tue, 2019-09-10 at 13:34 +0800, Jason Yan wrote:
> Hi Scott,
>
> On 2019/8/28 12:05, Scott Wood wrote:
> > On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> > > This series implements KASLR for powerpc/fsl_booke/32, as a security
> > > feature that d
On Fri, 2019-08-23 at 12:50 +, Christophe Leroy wrote:
> On mpc83xx with a QE, IMMR is 2Mbytes.
> On mpc83xx without a QE, IMMR is 1Mbytes.
> Each driver will map a part of it to access the registers it needs.
> Some drivers will map the same part of IMMR as other drivers.
>
> In order to reduc
On Sat, 2019-09-14 at 18:51 +0200, Christophe Leroy wrote:
>
> Le 14/09/2019 à 16:34, Scott Wood a écrit :
> > On Fri, 2019-08-23 at 12:50 +, Christophe Leroy wrote:
> > > On mpc83xx with a QE, IMMR is 2Mbytes.
> > > On mpc83xx without a QE, IMMR is 1Mbytes.
>
On Mon, 2019-09-16 at 06:42 +, Christophe Leroy wrote:
> @@ -145,6 +147,15 @@ void __init mpc83xx_setup_arch(void)
> if (ppc_md.progress)
> ppc_md.progress("mpc83xx_setup_arch()", 0);
>
> + if (!__map_without_bats) {
> + phys_addr_t immrbase = get_immrbase(
On Thu, 2019-09-12 at 18:17 -0400, Joel Fernandes wrote:
> On Wed, Sep 11, 2019 at 05:57:29PM +0100, Scott Wood wrote:
> > rcutorture was generating some nesting scenarios that are not
> > reasonable. Constrain the state selection to avoid them.
> >
> > Example #1:
() by fix_to_virt()
> ---
> arch/powerpc/include/asm/fixmap.h | 8
> arch/powerpc/platforms/83xx/misc.c | 11 +++
> 2 files changed, 19 insertions(+)
Acked-by: Scott Wood
-Scott
On Tue, 2019-09-17 at 09:59 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-11 17:57:27 [+0100], Scott Wood wrote:
> > diff --git a/kernel/cpu.c b/kernel/cpu.c
> > index 885a195dfbe0..32c6175b63b6 100644
> > --- a/kernel/cpu.c
> > +++ b/kernel/cpu.c
>
On Tue, 2019-09-17 at 09:44 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-11 17:57:25 [+0100], Scott Wood wrote:
> >
> > @@ -615,10 +645,7 @@ static inline void rcu_read_unlock(void)
> > static inline void rcu_read_lock_bh(void)
> > {
> > local_bh_di
On Wed, 2019-09-11 at 17:57 +0100, Scott Wood wrote:
> kernel/rcu/tree.c | 9 +++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index fc8b00c61b32..ee0a5ec2c30f 100644
> --- a/kernel/rcu/tree.c
> ++
On Tue, 2019-09-17 at 12:07 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-16 11:55:57 [-0500], Scott Wood wrote:
> > On Thu, 2019-09-12 at 18:17 -0400, Joel Fernandes wrote:
> > > On Wed, Sep 11, 2019 at 05:57:29PM +0100, Scott Wood wrote:
> > > > rcutortu
On Tue, 2019-09-17 at 16:57 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-07-27 00:56:32 [-0500], Scott Wood wrote:
> > This function is concerned with the long-term cpu mask, not the
> > transitory mask the task might have while migrate disabled. Before
> > this patch,
On Tue, 2019-09-17 at 16:42 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-17 09:06:28 [-0500], Scott Wood wrote:
> > Sorry, I missed that you were asking about rcu_read_lock_bh() as
> > well. I
> > did remove the change to rcu_read_lock_bh_held().
>
> Sor
On Tue, 2019-09-17 at 16:50 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-17 09:36:22 [-0500], Scott Wood wrote:
> > > On non-RT you can (but should not) use the counter part of the
> > > function
> > > in random order like:
> > > local_
On Tue, 2019-09-17 at 18:50 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-07-27 00:56:38 [-0500], Scott Wood wrote:
> > diff --git a/kernel/cpu.c b/kernel/cpu.c
> > index 885a195dfbe0..0096acf1a692 100644
> > --- a/kernel/cpu.c
> > +++ b/kernel/cpu.c
> >
On Tue, 2019-09-17 at 17:31 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-07-27 00:56:36 [-0500], Scott Wood wrote:
> > If migrate_enable() is called while a task is preparing to sleep
> > (state != TASK_RUNNING), that triggers a debug check in stop_one_cpu().
> > Expl
ystem.
Calling calc_load_nohz_start() regardless of whether the tick is already
stopped addresses the issue when going idle. Tracking load changes when
not going idle (e.g. multiple SCHED_FIFO tasks coming and going) is not
addressed by this patch.
Signed-off-by: Scott Wood
---
kernel/time/tick-sched.
On Fri, 2019-09-27 at 14:19 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-26 11:52:42 [-0500], Scott Wood wrote:
> > Looks good, thanks!
>
> Thanks, just released.
> Moving forward. It would be nice to have some DL-dev feedback on DL
> patch. For the remaining onc
On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
> On 27/09/19 11:40, Scott Wood wrote:
> > On Fri, 2019-09-27 at 10:11 +0200, Juri Lelli wrote:
> > > Hi Scott,
> > >
> > > On 27/07/19 00:56, Scott Wood wrote:
> > > > With the chang
On Tue, 2019-10-01 at 10:52 +0200, Juri Lelli wrote:
> On 30/09/19 11:24, Scott Wood wrote:
> > On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
>
> [...]
>
> > > Hummm, I was actually more worried about the fact that we call
> > > free_old_
>
On Wed, 2019-10-09 at 14:10 +0800, Jason Yan wrote:
> Hi Scott,
>
> Would you please take sometime to test this?
>
> Thank you so much.
>
> On 2019/9/24 13:52, Jason Yan wrote:
> > Hi Scott,
> >
> > Can you test v7 to see if it works to load a kernel at a non-zero address?
> >
> > Thanks,
Sor
On Wed, 2019-10-09 at 16:41 +0800, Jason Yan wrote:
> Hi Scott,
>
> On 2019/10/9 15:13, Scott Wood wrote:
> > On Wed, 2019-10-09 at 14:10 +0800, Jason Yan wrote:
> > > Hi Scott,
> > >
> > > Would you please take sometime to test this?
> > >
>
On Wed, 2019-10-09 at 09:27 +0200, Juri Lelli wrote:
> On 09/10/19 01:25, Scott Wood wrote:
> > On Tue, 2019-10-01 at 10:52 +0200, Juri Lelli wrote:
> > > On 30/09/19 11:24, Scott Wood wrote:
> > > > On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
> > >
On Tue, 2019-09-24 at 13:21 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-23 19:52:33 [+0200], To Scott Wood wrote:
>
> I made dis:
>
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 885a195dfbe02..25afa2bb1a2cf 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
>
On Tue, 2019-09-24 at 17:25 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-24 08:53:43 [-0500], Scott Wood wrote:
> > As I pointed out in the "[PATCH RT 6/8] sched: migrate_enable: Set state
> > to
> > TASK_RUNNING" discussion, we can get here inside
On Tue, 2019-09-24 at 18:05 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-09-24 10:47:36 [-0500], Scott Wood wrote:
> > When the stop machine finishes it will do a wake_up_process() via
> > complete(). Since this does not pass WF_LOCK_SLEEPER, saved_state will
> > be
>
On Tue, 2019-09-17 at 18:00 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-07-27 00:56:37 [-0500], Scott Wood wrote:
> > migrate_enable() currently open-codes a variant of select_fallback_rq().
> > However, it does not have the "No more Mr. Nice Guy" fallback and
On Thu, 2019-09-26 at 18:39 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-07-27 00:56:34 [-0500], Scott Wood wrote:
> > Various places assume that cpus_ptr is protected by rq/pi locks,
> > so don't change it before grabbing those locks.
> >
> > Signed-off-by
On Fri, 2019-09-27 at 10:11 +0200, Juri Lelli wrote:
> Hi Scott,
>
> On 27/07/19 00:56, Scott Wood wrote:
> > With the changes to migrate disabling, ->set_cpus_allowed() no longer
> > gets deferred until migrate_enable(). To avoid releasing the bandwidth
> > while t
On Tue, 2019-09-17 at 09:06 -0500, Scott Wood wrote:
> On Tue, 2019-09-17 at 09:59 +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-09-11 17:57:27 [+0100], Scott Wood wrote:
> > > diff --git a/kernel/cpu.c b/kernel/cpu.c
> > > index 885a195dfbe0..32c6175b63b6 1006
On Sun, 2019-08-18 at 17:49 -0400, Joel Fernandes (Google) wrote:
> When we're in hard interrupt context in rcu_read_unlock_special(), we
> can still benefit from invoke_rcu_core() doing wake ups of rcuc
> threads when the !use_softirq parameter is passed. This is safe
> to do so because:
What is
On Wed, 2020-04-29 at 10:27 +0200, Vincent Guittot wrote:
> On Tue, 28 Apr 2020 at 07:02, Scott Wood wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 02f323b85b6d..74c3c5280d6b 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fa
On Mon, 2019-10-21 at 11:34 +0800, Jason Yan wrote:
>
> On 2019/10/10 2:46, Scott Wood wrote:
> > On Wed, 2019-10-09 at 16:41 +0800, Jason Yan wrote:
> > > Hi Scott,
> > >
> > > On 2019/10/9 15:13, Scott Wood wrote:
> > > > On Wed, 2019-10-09
this won't help with local_bh_disable()
(and thus rcutorture) unless something similar is done with the recently
added local_lock.
Signed-off-by: Scott Wood
---
The speedup is smaller than before, due to commit 659252061477862f
("lib/smp_processor_id: Don't use cpumask_equal()")
These are the unapplied patches from v1, minus the sched deadline
patch, and with stop_one_cpu_nowait() in place of clobbering
current->state.
Scott Wood (3):
sched: migrate_enable: Use select_fallback_rq()
sched: Lazy migrate_disable processing
sched: migrate_enable:
migrating us to another cpu).
Signed-off-by: Scott Wood
---
include/linux/stop_machine.h | 2 ++
kernel/sched/core.c | 22 +-
kernel/stop_machine.c | 7 +--
3 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/include/linux/stop_machine.h b/
migrate_enable() currently open-codes a variant of select_fallback_rq().
However, it does not have the "No more Mr. Nice Guy" fallback and thus
it will pass an invalid CPU to the migration thread if cpus_mask only
contains a CPU that is !active.
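The fallback selection described above can be sketched as a toy Python model (not the kernel's select_fallback_rq(); the function name and set-based masks are illustrative assumptions):

```python
def pick_migration_cpu(cpus_mask, active_cpus):
    """Toy model of choosing a target CPU for the migration thread.

    Prefer a CPU that is both allowed (in cpus_mask) and active; if the
    allowed mask contains no active CPU (the !active case described
    above), fall back to any active CPU rather than handing the
    migration thread an invalid target.
    """
    candidates = sorted(set(cpus_mask) & set(active_cpus))
    if candidates:
        return candidates[0]
    # "No more Mr. Nice Guy": ignore the affinity mask but stay valid.
    return min(active_cpus)
```

The key point is the second branch: without it, a mask containing only inactive CPUs would yield an invalid choice, which is the bug the patch addresses.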
Signed-off-by: Scott Wood
---
This sce
select CPUMASK_OFFSTACK if !PREEMPT_RT_FULL" in MAXSMP. However,
even if we ignore the RT tree, checking for MAXSMP in addition to
CPUMASK_OFFSTACK is redundant.
Signed-off-by: Scott Wood
---
arch/x86/Kconfig | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig
On Fri, 2019-10-11 at 10:09 -0400, Waiman Long wrote:
> At each invocation of rt_spin_unlock(), cpumask_weight() is called
> via migrate_enable_update_cpus_allowed() to recompute the weight of
> cpus_mask which doesn't change that often.
>
> The following is a sample output of perf-record running
On Tue, 2020-04-28 at 22:37 +0100, Valentin Schneider wrote:
> On 28/04/20 06:02, Scott Wood wrote:
> > Thus, newidle_balance() is entered with interrupts enabled, which allows
> > (in the next patch) enabling interrupts when the lock is dropped.
> >
> >
On Tue, 2020-04-28 at 22:56 +0100, Valentin Schneider wrote:
> On 28/04/20 06:02, Scott Wood wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index dfde7f0ce3db..e7437e4e40b4 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> >
On Tue, 2020-04-28 at 17:33 -0500, Scott Wood wrote:
> On Tue, 2020-04-28 at 22:56 +0100, Valentin Schneider wrote:
> > On 28/04/20 06:02, Scott Wood wrote:
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index dfde7f0ce3db..e7437e4e40b4 100644
>
On Wed, 2020-04-29 at 00:09 +0200, Peter Zijlstra wrote:
> On Tue, Apr 28, 2020 at 10:37:18PM +0100, Valentin Schneider wrote:
> > On 28/04/20 06:02, Scott Wood wrote:
> > > Thus, newidle_balance() is entered with interrupts enabled, which
> > > allows
> > > (in
On Wed, 2020-04-29 at 01:02 +0200, Peter Zijlstra wrote:
> On Tue, Apr 28, 2020 at 05:55:03PM -0500, Scott Wood wrote:
> > On Wed, 2020-04-29 at 00:09 +0200, Peter Zijlstra wrote:
> > > Also, if you move it this late, this is entirely the wrong place. If
> > > you
>
On Wed, 2020-04-29 at 11:05 +0200, Peter Zijlstra wrote:
> On Tue, Apr 28, 2020 at 06:20:32PM -0500, Scott Wood wrote:
> > On Wed, 2020-04-29 at 01:02 +0200, Peter Zijlstra wrote:
> > > On Tue, Apr 28, 2020 at 05:55:03PM -0500, Scott Wood wrote:
> > > > On Wed, 20
On Fri, 2019-10-18 at 14:12 -0400, Waiman Long wrote:
> On 10/12/19 2:52 AM, Scott Wood wrote:
> > Avoid overhead on the majority of migrate disable/enable sequences by
> > only manipulating scheduler data (and grabbing the relevant locks) when
> > the task actually sc
. In v5.0-rt with a previous version of these patches, lazy
migrate disable reduced kernel build time by around 15-20% wall and
70-75% system.
Scott Wood (8):
sched: migrate_enable: Use sleeping_lock to indicate involuntary sleep
sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr
migrate_enable() currently open-codes a variant of select_fallback_rq().
However, it does not have the "No more Mr. Nice Guy" fallback and thus
it will pass an invalid CPU to the migration thread if cpus_mask only
contains a CPU that is !active.
Signed-off-by: Scott Wood
---
This sce
This code was unreachable given the __migrate_disabled() branch
to "out" immediately beforehand.
Signed-off-by: Scott Wood
---
kernel/sched/core.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6e643d656d71..99a3cfccf4d3 10
local_bh_disable()
(and thus rcutorture) unless something similar is done with the recently
added local_lock.
Signed-off-by: Scott Wood
---
include/linux/cpu.h | 4 --
include/linux/sched.h | 11 +--
init/init_task.c | 4 ++
kernel/cpu.c | 97 +
kern
on, then the mask update
would be lost.
Signed-off-by: Scott Wood
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c3407707e367..6e643d656d71 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
If migrate_enable() is called while a task is preparing to sleep
(state != TASK_RUNNING), that triggers a debug check in stop_one_cpu().
Explicitly reset state to acknowledge that we're accepting the spurious
wakeup.
Signed-off-by: Scott Wood
---
kernel/sched/core.c | 8
1 file ch
With the changes to migrate disabling, ->set_cpus_allowed() no longer
gets deferred until migrate_enable(). To avoid releasing the bandwidth
while the task may still be executing on the old CPU, move the subtraction
to ->migrate_task_rq().
Signed-off-by: Scott Wood
---
kernel/sched/dead
Various places assume that cpus_ptr is protected by rq/pi locks,
so don't change it before grabbing those locks.
Signed-off-by: Scott Wood
---
kernel/sched/core.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
Without this, rcu_note_context_switch() will complain if an RCU read
lock is held when migrate_enable() calls stop_one_cpu().
Signed-off-by: Scott Wood
---
include/linux/sched.h| 4 ++--
kernel/rcu/tree_plugin.h | 2 +-
kernel/sched/core.c | 2 ++
3 files changed, 5 insertions(+), 3 deletions(-)
On Mon, 2019-04-15 at 14:22 -0500, Alan Tull wrote:
> On Thu, Apr 11, 2019 at 11:36 AM Moritz Fischer
> wrote:
>
> Hi Scott,
>
> Thanks!
>
> > Hi Scott,
> >
> > good catch!
> >
> > On Thu, Apr 11, 2019 at 5:49 AM Wu Hao wrote:
> > >
On Sat, 2019-06-22 at 12:13 -0700, Paul E. McKenney wrote:
> On Fri, Jun 21, 2019 at 05:26:06PM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 20, 2019 at 06:08:19PM -0500, Scott Wood wrote:
> > > On Thu, 2019-06-20 at 15:25 -0700, Paul E. McKenney wrote:
> > > > On T
progress isn't going to beat the timeout. I believe I've only
seen this when running heavy loads in addition to rcutorture (though I've
done more testing under load than without); I don't know whether the
forward progress tests are expected to work under such load.
Scott Wood (4)
e()
3. preempt_enable()
4. local_irq_enable()
If need_resched is set between steps 1 and 2, then the reschedule
in step 3 will not happen.
Signed-off-by: Scott Wood
---
TODO: Document restrictions and add debug checks for invalid sequences.
I had been planning to resolve #1 (only as shown, not the ca
Without this, rcu_note_context_switch() will complain if an RCU read
lock is held when migrate_enable() calls stop_one_cpu().
Signed-off-by: Scott Wood
---
include/linux/sched.h| 4 ++--
kernel/rcu/tree_plugin.h | 2 +-
kernel/sched/core.c | 2 ++
3 files changed, 5 insertions(+), 3 deletions(-)
that wouldn't allow blocked BH disablers to be boosted.
Fix this by calling rcu_read_lock() from local_bh_disable(), and update
rcu_read_lock_bh_held() accordingly.
Signed-off-by: Scott Wood
---
include/linux/rcupdate.h | 4
kernel/rcu/update.c | 4
kernel/softirq.c
cu_core().
Signed-off-by: Scott Wood
---
kernel/rcu/tree_plugin.h | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 5d63914b3687..d7ddbcc7231c 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_p
On Wed, 2019-06-26 at 11:08 -0400, Steven Rostedt wrote:
> On Fri, 21 Jun 2019 16:59:55 -0700
> "Paul E. McKenney" wrote:
>
> > I have no objection to the outlawing of a number of these sequences in
> > mainline, but am rather pointing out that until they really are outlawed
> > and eliminated, r
On Thu, 2019-06-27 at 11:00 -0700, Paul E. McKenney wrote:
> On Wed, Jun 26, 2019 at 11:49:16AM -0500, Scott Wood wrote:
> > On Wed, 2019-06-26 at 11:08 -0400, Steven Rostedt wrote:
> > > On Fri, 21 Jun 2019 16:59:55 -0700
> > > "Paul E. McKenney" wrote:
>