On Fri, Sep 15, 2023 at 11:33:13AM +0000, Joel Fernandes wrote:
> On Fri, Sep 15, 2023 at 12:13:31AM +0000, Joel Fernandes wrote:
> > On Thu, Sep 14, 2023 at 09:53:24PM +0000, Joel Fernandes wrote:
> > > On Thu, Sep 14, 2023 at 06:56:27PM +0000, Joel Fernandes wrote:
> > > > On Thu, Sep 14, 2023 at 08:23:38AM -0700, Paul E. McKenney wrote:
> > > > > On Thu, Sep 14, 2023 at 01:13:51PM +0000, Joel Fernandes wrote:
> > > > > > On Thu, Sep 14, 2023 at 04:11:26AM -0700, Paul E. McKenney wrote:
> > > > > > > On Wed, Sep 13, 2023 at 04:30:20PM -0400, Joel Fernandes wrote:
> > > > > > > > On Mon, Sep 11, 2023 at 4:16 AM Paul E. McKenney
> > > > > > > > <[email protected]> wrote:
> > > > > > > > [..]
> > > > > > > > > > I am digging deeper to see why the rcu_preempt thread
> > > > > > > > > > cannot be pushed out, and then I'll also look at why it
> > > > > > > > > > is being pushed out in the first place.
> > > > > > > > > >
> > > > > > > > > > At least I have a strong repro now, running 5 instances
> > > > > > > > > > of TREE03 in parallel for several hours.
> > > > > > > > >
> > > > > > > > > Very good! Then why not boot with rcutorture.onoff_interval=0
> > > > > > > > > and see if the problem still occurs? If yes, then there is
> > > > > > > > > definitely some reason other than CPU hotplug that makes this
> > > > > > > > > happen.
> > > > > > > >
> > > > > > > > Hi Paul,
> > > > > > > > So far it looks like onoff_interval=0 makes the issue
> > > > > > > > disappear, so it is likely hotplug related. I am OK with doing
> > > > > > > > the cpus_read_lock() during boost testing and seeing if that
> > > > > > > > fixes it. If it does, I can move on to the next thing in my
> > > > > > > > backlog.
> > > > > > > >
> > > > > > > > What do you think? Or should I spend more time root-causing
> > > > > > > > it? It is most likely runaway RT threads combined with the CPU
> > > > > > > > hotplug threads preventing the rcu_preempt thread from being
> > > > > > > > scheduled. But I can't say for sure without more/better tracing
> > > > > > > > (speaking of better tracing, I am adding core-dump support to
> > > > > > > > rcutorture, but it is not there yet).
> > > > > > >
> > > > > > > This would not be the first time rcutorture has had trouble
> > > > > > > with those threads, so I am for adding the cpus_read_lock().
> > > > > > >
> > > > > > > Additional root-causing might be helpful, but then again, you
> > > > > > > might have higher priority things to worry about. ;-)
> > > > > >
> > > > > > No worries. Unfortunately, putting cpus_read_lock() around the
> > > > > > boost test causes hangs. I tried something like the following [1].
> > > > > > If you have a diff, I can quickly try it to see if the issue goes
> > > > > > away as well.
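> > > > > >
> > > > > > For the archives, the general shape of what I tried (not the exact
> > > > > > diff in [1]) was along the lines of the hypothetical helper below,
> > > > > > which holds off CPU hotplug for the duration of one boost interval:
> > > > > >
> > > > > > /* Hypothetical sketch only; name and placement are illustrative. */
> > > > > > static void rcu_torture_boost_one_interval(unsigned long endtime)
> > > > > > {
> > > > > > 	cpus_read_lock();	/* Hold off CPU hotplug while boosting. */
> > > > > > 	while (time_before(jiffies, endtime)) {
> > > > > > 		/* Queue/poll boost-test callbacks as rcu_torture_boost() does. */
> > > > > > 		cond_resched();
> > > > > > 	}
> > > > > > 	cpus_read_unlock();	/* Let CPU hotplug proceed again. */
> > > > > > }
> > > > > >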
> > > > >
> > > > > The other approaches that occur to me are:
> > > > >
> > > > > 1.	Synchronize with the torture.c CPU-hotplug code. This is
> > > > > 	a bit tricky as well.
> > > > >
> > > > > 2.	Rearrange the testing to convert one of the TREE0* scenarios
> > > > > 	that is not in CFLIST (TREE06 or TREE08) to a real-time
> > > > > 	configuration, with boosting but without CPU hotplug. Then
> > > > > 	remove boosting from TREE04.
> > > > >
> > > > > Of these, #2 seems most productive. But is there a better way?
> > > >
> > > > We could have the GP thread at higher priority for TREE03. What I see
> > > > consistently is that the GP thread gets migrated from CPU M to CPU N,
> > > > only to be immediately sent back. Dumping the state showed CPU N is
> > > > running ksoftirqd, which is also at RT priority 2. Making rcu_preempt
> > > > priority 3 while ksoftirqd stays at 2 might give rcu_preempt less of a
> > > > run-around, maybe enough to keep the grace period from stalling. I am
> > > > not sure if this will fix it, but I am running a test to see how it
> > > > goes; will let you know.
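> > > >
> > > > A minimal sketch of the kind of tweak I mean, in the spirit of what
> > > > the rcutree.kthread_prio boot parameter already does (the exact
> > > > placement here is hypothetical):
> > > >
> > > > 	struct sched_param sp = { .sched_priority = 3 };
> > > >
> > > > 	/*
> > > > 	 * Run the RCU grace-period kthread one level above ksoftirqd
> > > > 	 * (RT priority 2) so it is not bounced around by it.
> > > > 	 */
> > > > 	sched_setscheduler_nocheck(rcu_state.gp_kthread, SCHED_FIFO, &sp);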
> > >
> > > That led to a lot of fireworks. :-) I am wondering, though: do we
> > > really need to run a boost kthread on all CPUs? I think that might be
> > > the root cause, because the boost threads run on all CPUs except
> > > perhaps the one dying.
> > >
> > > We could run them on just the odd or even CPUs and still get
> > > sufficient boost testing. This may be especially important without RT
> > > throttling. I'll go ahead and queue a test like that.
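> > >
> > > The filter I have in mind is roughly just an early return at the top
> > > of rcutorture_booster_init(), something like the sketch below (not the
> > > exact patch I'm queueing):
> > >
> > > 	/* Only spawn a boost kthread on odd-numbered CPUs. */
> > > 	if (!(cpu & 1))
> > > 		return 0;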
> >
> > Sorry if I am too noisy. So far, with the rcutorture boost threads
> > existing only on odd CPUs, I am seeing the issue go away (but I'm
> > running an extended test to confirm).
> >
> > On the other hand, I came up with a real fix [1] and am currently
> > testing it. It fixes a livelock between the RT push logic and CPU
> > hotplug's select_fallback_rq()-induced push. I am not sure yet whether
> > the fix works, but I have some faith based on what I'm seeing in traces.
> > Fingers crossed. I also feel the real fix is needed to prevent these
> > issues, even if we can hide them by halving the total number of
> > rcutorture boost threads.
>
> So that fixed it without any changes to RCU. Below is the updated patch,
> also for the archives. I'm rewriting it slightly differently, though, and
> testing that version more. The main point of the new patch is that RT
> should not select !cpu_active() CPUs, since those have the scheduler
> turned off. Checking for cpu_dying() also works; I could not find any
> instance where cpu_dying() and !cpu_active() disagree, but there could be
> a tiny window where they do. Anyway, I'll make some noise with the
> scheduler folks once I have the new version of the patch tested.
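>
> For reference, the cpu_active() variant I have in mind is roughly the
> below, the same helper as in the patch further down but keyed off
> cpu_active() instead of cpu_dying() (sketch only, untested):
>
> static inline bool rt_task_fits_in_cpu(struct task_struct *p, int cpu)
> {
> 	/*
> 	 * A CPU that is no longer active has the scheduler turned off as
> 	 * far as RT placement goes, so never push RT tasks to it.
> 	 */
> 	return rt_task_fits_capacity(p, cpu) && cpu_active(cpu);
> }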
>
> Also, halving the number of RT boost threads makes the issue less likely
> to occur but does not make it go away. Not too surprising, since the
> issue may not actually be related to too many RT threads but rather to a
> lockup between hotplug and RT.

Again, looks promising! When I get the non-RCU -rcu stuff moved to
v6.6-rc1 and appropriately branched and tested, I will give it a go on
the test setup here.

							Thanx, Paul

> ---8<-----------------------
>
> From: Joel Fernandes <[email protected]>
> Subject: [PATCH] Fix livelock between RT and select_fallback_rq
>
> Signed-off-by: Joel Fernandes <[email protected]>
> ---
> kernel/sched/rt.c | 25 +++++++++----------------
> 1 file changed, 9 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 00e0e5074115..a089d6f24e5b 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -526,6 +526,11 @@ static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
>  }
>  #endif
>  
> +static inline bool rt_task_fits_in_cpu(struct task_struct *p, int cpu)
> +{
> +	return rt_task_fits_capacity(p, cpu) && !cpu_dying(cpu);
> +}
> +
>  #ifdef CONFIG_RT_GROUP_SCHED
>  
>  static inline u64 sched_rt_runtime(struct rt_rq *rt_rq)
> @@ -1641,14 +1646,14 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
>  	       unlikely(rt_task(curr)) &&
>  	       (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
>  
> -	if (test || !rt_task_fits_capacity(p, cpu)) {
> +	if (test || !rt_task_fits_in_cpu(p, cpu)) {
>  		int target = find_lowest_rq(p);
>  
>  		/*
>  		 * Bail out if we were forcing a migration to find a better
>  		 * fitting CPU but our search failed.
>  		 */
> -		if (!test && target != -1 && !rt_task_fits_capacity(p, target))
> +		if (!test && target != -1 && !rt_task_fits_in_cpu(p, target))
>  			goto out_unlock;
>  
>  		/*
> @@ -1892,21 +1897,9 @@ static int find_lowest_rq(struct task_struct *task)
>  	if (task->nr_cpus_allowed == 1)
>  		return -1; /* No other targets possible */
>  
> -	/*
> -	 * If we're on asym system ensure we consider the different capacities
> -	 * of the CPUs when searching for the lowest_mask.
> -	 */
> -	if (sched_asym_cpucap_active()) {
> -
> -		ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
> +	ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
>  					  task, lowest_mask,
> -					  rt_task_fits_capacity);
> -	} else {
> -
> -		ret = cpupri_find(&task_rq(task)->rd->cpupri,
> -				  task, lowest_mask);
> -	}
> -
> +					  rt_task_fits_in_cpu);
>  	if (!ret)
>  		return -1; /* No targets found */
>  
> --
> 2.42.0.459.ge4e396fd5e-goog
>